ECE Professor Nicolas Papernot delivers a talk on his research in computer security and privacy. He was awarded a 2022 Sloan Research Fellowship for his contributions to the field and the potential of his research to make a significant future impact. (Photo: Caitlin Free)

In the field of artificial intelligence (AI), most researchers are pleased when a computer algorithm accomplishes its task. But for ECE Professor Nicolas Papernot, the real insight comes from tricking these algorithms into making mistakes.

“With one model, we managed to demonstrate that under the right conditions, an image classifier would recognize a stop sign as a yield sign,” says Papernot. “It’s easy to imagine a situation where something like this has serious consequences.” 
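
The article doesn't spell out how such a trick works, but a common method for crafting these misleading inputs, known as adversarial examples, is the fast gradient sign method (FGSM), sketched below in PyTorch. The toy model, input size and epsilon value are illustrative assumptions, not details from Papernot's experiment.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Fast gradient sign method: shift every pixel slightly in the
    direction that increases the model's loss on the true label."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # A perturbation too small for a human to notice can still flip the prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical stand-in for a real road-sign classifier.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)   # a 32x32 RGB image, e.g. a stop sign
label = torch.tensor([0])          # index of the true class
adversarial_image = fgsm_perturb(model, image, label)
```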

But Papernot’s goal is not merely to correct these types of errors — he also aims to understand how and why the algorithms are making them. By doing so, he can help resolve security and privacy issues that undermine our trust in the technology. Papernot was awarded a 2022 Sloan Research Fellowship, valued at $75,000, for the potential of his research to encourage more widespread AI adoption. 

The challenge facing Papernot and his team is that the AI approach known as machine learning (ML) works very differently from traditional programming. Non-ML algorithms consist of explicit instructions that specify exactly what the computer should do to arrive at a result.

By contrast, ML algorithms are presented with a large set of training data containing inputs and desired outputs. Over many iterations, the algorithms learn how to map one to the other, but it isn't always clear what factors they take into consideration. In many cases, not even the programmers themselves can pinpoint exactly how an algorithm arrives at its result.
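
As a toy illustration of that difference (a hypothetical sensor-reading classifier, not an example from the article): a traditional program states its rule outright, while a learned model infers the rule from input/output pairs.

```python
# Traditional algorithm: the rule is written out explicitly.
def classify_explicit(reading):
    return "high" if reading > 50.0 else "low"

# Machine-learning version: the threshold is never written down.
# It is inferred from (input, desired output) training pairs.
def train_threshold(examples, steps=1000, lr=0.1):
    threshold = 0.0
    for _ in range(steps):
        for reading, label in examples:
            prediction = "high" if reading > threshold else "low"
            if prediction != label:
                # Nudge the threshold in the direction that fixes the mistake.
                threshold += lr if label == "low" else -lr
    return threshold

examples = [(10.0, "low"), (42.0, "low"), (55.0, "high"), (80.0, "high")]
learned = train_threshold(examples)
# 'learned' settles somewhere between 42 and 55: the boundary is
# inferred from the data, and nothing in the source code says where it is.
```

Even in this one-parameter model, the decision boundary lives in the trained value rather than in the code; scaled up to millions of parameters, that opacity is what makes ML behaviour hard to explain.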

Despite these unknowns, machine learning is already assisting with many tasks today, from diagnosing medical conditions and determining insurance rates to short-listing job candidates. However, the public's inherent reservations are preventing this technology from realizing its true promise.

“There are many cases where researchers would like to consider advances in machine learning to develop new techniques, such as in health care,” says Papernot. “But they cannot guarantee the data that these algorithms are analyzing will be treated in a privacy-preserving, secure way.” 

By deliberately compromising an ML model, an approach sometimes termed ethical hacking, Papernot exposes flaws that can help the model's owners retrain it, or force them to conclude that it needs to be abandoned.

Some of the team’s hacks rely on tried-and-true human ingenuity, such as the one that involved a trip to the hardware store. 

“We were testing a voice assistant that was tuned to someone else’s voice,” says Papernot. “We bought a simple hollow tube and calibrated the precise length it needed to be cut to alter our voice and gain access.” 

Papernot’s latest research focuses on developing auditing mechanisms for ML models. Given how many institutions have access to our data — from our employers to social media companies — there is an expectation that the algorithms they use will handle it properly and be fair to different subpopulations. 

But how would one prove that they're not? Even if we suspect laxity or bias, says Papernot, an institution can retrain a model after the fact to conceal its lack of fairness, a practice known as 'fairwashing.'

“Our proposal is for organizations to keep a log during the ML model training at various intermediate states. My team is researching exactly what information needs to be recorded in these states so that, with the application of commonplace cryptographic techniques, it would be impossible for someone to alter these records later — say, if they’re being sued.” 
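
Papernot's quote names only "commonplace cryptographic techniques." One fitting primitive is a hash chain, in which each log entry's digest commits to the entry before it, so altering any earlier record breaks every hash that follows. The sketch below shows the idea in Python; the field names and values are hypothetical, not from the team's actual design.

```python
import hashlib
import json

def append_record(log, checkpoint_info):
    """Append a training-state record whose hash chains to the previous one."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(checkpoint_info, sort_keys=True)
    record_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"payload": checkpoint_info, "hash": record_hash})
    return log

def verify(log):
    """Recompute the chain; any tampering with an earlier entry is detected."""
    prev_hash = "0" * 64
    for rec in log:
        payload = json.dumps(rec["payload"], sort_keys=True)
        if hashlib.sha256((prev_hash + payload).encode()).hexdigest() != rec["hash"]:
            return False  # the log was altered after the fact
        prev_hash = rec["hash"]
    return True

# Hypothetical intermediate states recorded during training.
log = []
append_record(log, {"step": 1000, "weights_digest": "ab12", "data_batch": "batch-0042"})
append_record(log, {"step": 2000, "weights_digest": "cd34", "data_batch": "batch-0117"})
assert verify(log)
```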

Down the line, he envisions that such solutions would require a third-party agency or government body to act as an intermediary, offering both technological resources and oversight for what would be a complex area of regulation. He is collaborating with Faculty of Law Professor Lisa Austin, who is cross-appointed with ECE, to work out what this framework might look like.

“The law defines privacy one way, and computer science another way,” says Papernot. “We’re attempting to bridge the gap so legal scholars can present these technologies in a way that compels the law to adapt while computer scientists develop the techniques to enforce these new stipulations.” 

It was cross-disciplinary research opportunities such as this, along with U of T’s strong talent pool in machine learning, that initially drew Papernot to ECE (he himself is cross-appointed with Computer Science). To him, the Sloan Fellowship is recognition of all the students, postdoctoral fellows and collaborators he’s been fortunate to work alongside, and the established ECE professors who take the time to mentor junior faculty. 

“Professor Papernot exemplifies the ECE mindset with forward-looking research and a willingness to learn about and incorporate other fields for impactful solutions,” says ECE Chair Professor Deepa Kundur. “The Sloan Foundation has made an excellent choice for its early-career fellowship.” 

“The award has been very motivating,” says Papernot, “and it gives a measure of validation to my group at U of T — that we’re seen to be working on the right problems.”

Media Contact

Fahad Pinto
Communications & Media Relations Strategist
416.978.4498