Ask a Librarian

There are many ways to contact a librarian. Choose what works best for you.

Reference Desk

HOURS TODAY

Closed

CONTACT US BY PHONE

Voice: (802) 656-2022

Text: (802) 503-1703

MAKE AN APPOINTMENT OR EMAIL A QUESTION

Schedule an Appointment

Meet with a librarian or subject specialist for in-depth help.

Email a Librarian

Submit a question for a reply by email.

Library Hours for Saturday, November 23rd

All of today's hours can be found below. We look forward to seeing you in the library.

MAIN LIBRARY

Closed

WITHIN HOWE LIBRARY

Maps: M-Th by appointment; email govdocs@uvm.edu

Media Services: Closed

Reference Desk: Closed

OTHER DEPARTMENTS

Special Collections: Closed

Dana Health Sciences Library: 10:00 am - 6:00 pm


UVM Theses and Dissertations

Format: Online
Author: Zieba, Karol
Dept./Program: Computer Science
Year: 2015
Degree: MS

Abstract:
From the very creation of the term by Czech writer Karel Čapek in 1921, a "robot" has been synonymous with an artificial agent possessing a powerful body and a cogitating mind. While the fields of Artificial Intelligence (AI) and Robotics have made progress toward the creation of such an android, the goal of a cogitating robot remains firmly outside the reach of our technological capabilities. Cognition has proved to be far more complex than early AI practitioners envisioned. Current methods in Machine Learning have achieved remarkable successes in image categorization through the use of deep learning. However, when presented with novel or adversarial input, these methods can fail spectacularly. I postulate that a robot that is free to interact with objects should be capable of reducing spurious differences between objects of the same class. This thesis demonstrates and analyzes a robot that achieves more robust visual categorization when it first evolves to use proprioceptive sensors and is then trained to increasingly rely on vision, when compared to a robot that evolves with only visual sensors. My results suggest that embodied methods can scaffold the eventual achievement of robust visual classification.
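The abstract's claim that learned classifiers "can fail spectacularly" on adversarial input can be illustrated with a minimal sketch, not taken from the thesis itself: a toy logistic-regression classifier on synthetic data, attacked with a fast-gradient-sign perturbation. All names, the data distribution, and the parameters here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not from the thesis): a linear classifier that is
# accurate on clean data but flips its prediction under a small, targeted
# perturbation of the input (a fast-gradient-sign-style attack).

rng = np.random.default_rng(0)

# Two well-separated Gaussian classes in 20 dimensions.
X0 = rng.normal(-1.0, 1.0, size=(200, 20))
X1 = rng.normal(+1.0, 1.0, size=(200, 20))
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Train logistic regression by plain gradient descent.
w = np.zeros(20)
b = 0.0
for _ in range(500):
    z = np.clip(X @ w + b, -30, 30)        # clip to avoid overflow in exp
    p = 1.0 / (1.0 + np.exp(-z))           # sigmoid probabilities
    w -= 0.5 * (X.T @ (p - y)) / len(y)    # gradient step on weights
    b -= 0.5 * np.mean(p - y)              # gradient step on bias

def predict(x):
    """Return the predicted class (0 or 1) for a single input."""
    return int((x @ w + b) > 0)

x = X1[0]  # a clean class-1 example, classified correctly

# For a linear model the gradient of the logit w.r.t. the input is w,
# so stepping each coordinate against sign(w) maximally lowers the score.
eps = 2.0
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # the small perturbation flips the label
```

The per-coordinate change is bounded by `eps`, yet because every one of the 20 coordinates moves against the decision boundary at once, the cumulative shift in the score overwhelms the clean example's margin, which is the same mechanism behind adversarial examples for deep image classifiers.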