Ask a Librarian

There are lots of ways to contact a librarian. Choose what works best for you.

HOURS TODAY

10:00 am - 4:00 pm

Reference Desk

CONTACT US BY PHONE

(802) 656-2022

Voice

(802) 503-1703

Text

MAKE AN APPOINTMENT OR EMAIL A QUESTION

Schedule an Appointment

Meet with a librarian or subject specialist for in-depth help.

Email a Librarian

Submit a question and receive a reply by email.

Library Hours for Thursday, November 21st

All of the hours for today can be found below. We look forward to seeing you in the library.
HOURS TODAY

8:00 am - 12:00 am

MAIN LIBRARY

SEE ALL LIBRARY HOURS
WITHIN HOWE LIBRARY

Maps: M-Th by appointment, email govdocs@uvm.edu

Media Services: 8:00 am - 7:00 pm

Reference Desk: 10:00 am - 4:00 pm

OTHER DEPARTMENTS

Special Collections: 10:00 am - 6:00 pm

Dana Health Sciences Library: 7:30 am - 11:00 pm



UVM Theses and Dissertations

Format: Online
Author: Stevens, Timothy
Dept./Program: Computer Science
Year: 2022
Degree: Ph.D.
Abstract:
We present novel techniques that advance the goal of secure and private machine learning. The widespread use of machine learning poses a serious privacy risk to the data used to train models: data owners are forced to trust that aggregators will keep their data secure and that released models will maintain their privacy. The works presented in this thesis strive to solve both problems through approaches based on secure multiparty computation and differential privacy. The novel FLDP protocol leverages the learning with errors (LWE) problem to mask model updates and implements an efficient secure aggregation protocol that scales easily to large models. Continuing in the vein of scalable secure aggregation, the SHARD protocol utilizes a multi-layered secret sharing scheme to perform efficient secure aggregation over very large federations. Together, these protocols allow a federation to train models without requiring data owners to trust an aggregator. To ensure the privacy of trained models, we propose immediate sensitivity, a framework for reducing the efficacy of membership inference attacks against neural networks. Immediate sensitivity uses a differential-privacy-inspired additive noise mechanism to privatize model updates during training. By determining the scale of the noise through the gradient of the gradient, immediate sensitivity trains more accurate models than the differentially private gradient clipping approach. Each of these works is supported by extensive experimental evaluation.
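
FLDP and SHARD are secure aggregation protocols. The toy Python sketch below illustrates only the generic pairwise-masking idea behind secure aggregation, not the thesis's actual constructions (FLDP uses LWE-based masking and SHARD uses multi-layered secret sharing, neither of which is reproduced here); all names in the sketch are illustrative.

# Toy sketch of pairwise-masking secure aggregation (illustrative only).
# Each pair of clients derives a shared mask; one adds it and the other
# subtracts it, so all masks cancel in the aggregate while any single
# masked update reveals nothing about the underlying update on its own.
import numpy as np

def masked_update(client_id, update, peers):
    masked = update.astype(float).copy()
    for peer in peers:
        if peer == client_id:
            continue
        # A shared seed stands in for a pairwise key-agreement step.
        seed = hash((min(client_id, peer), max(client_id, peer))) % (2**32)
        mask = np.random.default_rng(seed).standard_normal(update.shape)
        masked += mask if client_id < peer else -mask
    return masked

# Three clients: the server sees only masked updates, yet their sum
# equals the sum of the true updates.
updates = [np.random.default_rng(i).standard_normal(4) for i in range(3)]
clients = [0, 1, 2]
masked = [masked_update(c, updates[c], clients) for c in clients]
assert np.allclose(sum(masked), sum(updates))

Because each pair's masks cancel in the sum, the aggregator learns the aggregate of the updates without seeing any individual update in the clear.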
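
Similarly, the following is a minimal, hypothetical PyTorch sketch of an immediate-sensitivity-style training step, based only on the abstract's description: additive Gaussian noise is applied to each model update, with its scale determined by the gradient of the gradient. Function and parameter names (immediate_sensitivity_step, noise_multiplier) are assumptions for illustration, not the thesis's API.

# Hypothetical sketch of an immediate-sensitivity-style update step.
import torch

def immediate_sensitivity_step(model, loss_fn, x, y, optimizer, noise_multiplier=0.1):
    # Inputs require gradients so the gradient can be differentiated w.r.t. them.
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y)

    # First-order gradients of the loss w.r.t. the parameters, kept in the
    # autograd graph so they can be differentiated a second time.
    grads = torch.autograd.grad(loss, list(model.parameters()), create_graph=True)

    # "Gradient of the gradient": differentiate the gradient norm w.r.t. the
    # inputs to estimate how sensitive this update is to the batch.
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    sensitivity = torch.autograd.grad(grad_norm, x)[0].norm()

    # Differential-privacy-inspired additive noise, scaled by the estimated
    # sensitivity rather than by a fixed clipping bound.
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.grad = g.detach() + noise_multiplier * sensitivity * torch.randn_like(g)
    optimizer.step()
    optimizer.zero_grad()

Scaling the noise to a measured sensitivity, rather than clipping every gradient to a fixed norm, is what the abstract credits for the accuracy gain over the differentially private gradient clipping approach.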