Andrew Trask

Privacy-Preserving AI Researcher

OpenMined (2017 - Now)

I am the founder of OpenMined, an open-source community of 12,000+ researchers building tools for privacy-preserving AI. We created PySyft for federated learning and run free courses on privacy technology.

Google DeepMind (2019 - Now)

I am a Senior Research Scientist at Google DeepMind, studying privacy and AI: federated learning, differential privacy, and secure multi-party computation.

University of Oxford (2020 - 2023)

I completed my PhD at the University of Oxford, focusing on privacy-preserving machine learning, with affiliations at FHI and GovAI.

Grokking Deep Learning (2019)

I wrote Grokking Deep Learning (Manning), which teaches neural networks from scratch using only Python and NumPy; it has sold 10,000+ copies. I also teach in Udacity's Deep Learning Nanodegree (12,000+ students).

Digital Reasoning (2014 - 2017)

I was a researcher at Digital Reasoning, where I trained one of the world's largest neural networks (160B+ parameters) and helped guide the analytics roadmap for the Synthesys cognitive computing platform.

About This Thesis

This thesis, Attribution-Based Control in AI Systems, synthesizes recent breakthroughs across multiple fields to propose a new paradigm for AI development. It argues that many of AI's most pressing problems—hallucination, privacy, copyright, concentration of power, and value alignment—share a common technical root: the absence of attribution-based control.

The thesis surveys existing techniques that could address this absence.

The Vision

The thesis proposes that AI can be transformed from a tool of central intelligence into a communication technology for broad listening—enabling each person to synthesize information from billions of sources, weighted through local trust relationships, and verified at each hop.
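The mechanism described above — trust-weighted synthesis with per-hop verification — might be sketched roughly as follows. This is a minimal illustration, not the thesis's actual design: the function names, the hash-based stand-in for verification, and the multiply-trust-along-the-path rule are all assumptions made for the example.

```python
import hashlib

def hop_ok(message: str, digest: str) -> bool:
    """Stand-in for per-hop verification: check an integrity hash."""
    return hashlib.sha256(message.encode()).hexdigest() == digest

def path_trust(path, trust_edges):
    """Illustrative rule: trust in a source is the product of pairwise
    trust values along the local trust path that reaches it."""
    t = 1.0
    for a, b in zip(path, path[1:]):
        t *= trust_edges.get((a, b), 0.0)
    return t

def synthesize(claims, trust_edges, me="me"):
    """Trust-weighted vote over claims; drop any claim failing verification."""
    scores = {}
    for claim in claims:
        if not hop_ok(claim["text"], claim["digest"]):
            continue  # claim failed verification at some hop
        w = path_trust([me] + claim["path"], trust_edges)
        scores[claim["text"]] = scores.get(claim["text"], 0.0) + w
    return max(scores, key=scores.get) if scores else None

# Hypothetical local trust graph and two competing claims.
trust = {("me", "alice"): 0.9, ("alice", "bob"): 0.8, ("me", "carol"): 0.3}
claims = [
    {"text": "X", "path": ["alice", "bob"],
     "digest": hashlib.sha256(b"X").hexdigest()},
    {"text": "Y", "path": ["carol"],
     "digest": hashlib.sha256(b"Y").hexdigest()},
]
print(synthesize(claims, trust))  # "X" wins: 0.72 path trust vs 0.3
```

Even in this toy form, the point carries: the answer a listener accepts depends on their own trust relationships, not on a central aggregator's ranking, and unverifiable claims are discarded at the hop where verification fails.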

This transformation could unlock 6+ orders of magnitude more data and compute for AI systems while aligning AI development with democratic values.