Aligned AI

Aligned AI is a benefit corporation dedicated to solving the alignment problem – for all types of algorithms and AIs, from simple recommender systems to hypothetical superintelligences.

The fruits of this research will then be made available to companies building AI, to ensure that their algorithms serve the best interests of their users and of the companies themselves, and do not cause legal, reputational, or ethical problems.

What is Alignment?

Algorithms are shaping the present and will shape the future ever more strongly. It is crucially important that these powerful algorithms be aligned – that they act in the interests of their designers, their users, and humanity as a whole. Failure to align them could lead to catastrophic results.

Our long experience in the field of AI safety has identified the key bottleneck for solving alignment: Concept extrapolation.

Concept Extrapolation

Algorithms typically fail when they are confronted with new situations – they go out of distribution. Their training data will never be enough to deal with all unexpected situations – thus an AI will need to safely extend key concepts and goals, similarly to – or better than – how humans do.

This is concept extrapolation, explained in more detail in this sequence. Solving the concept extrapolation problem is both necessary and almost sufficient for solving the whole AI alignment problem.

Research update

Happy faces benchmark

The aim of this benchmark is to encourage the design of classifiers that are capable of using multiple different features to classify the same image. The features themselves must be deduced by the classifiers without being specifically labeled, though they may use a large unlabeled dataset on which the features vary. We have constructed a benchmark where the features are very different: facial expressions versus written text.
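To make the setup concrete, here is a minimal illustrative sketch of one way a benchmark entry could be structured – it is an assumption for illustration, not Aligned AI's actual method or the official benchmark code. The hypothetical TwoHeadClassifier, the agreement penalty, and the diversity_weight parameter are all invented names: the idea is that two heads fit the labels on labelled data, while a disagreement term on the unlabelled data (where expression and text vary independently) pushes each head towards a different feature.

```python
# Illustrative sketch only (assumed names and losses, not the official benchmark code).
# Two classification heads share a backbone: both must match the labels on the
# labelled set, while an agreement penalty on the unlabelled set encourages them
# to latch onto different features (e.g. facial expression vs. written text).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.backbone = nn.Sequential(                 # shared feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.head_a = nn.Linear(16 * 4 * 4, num_classes)   # may learn "expression"
        self.head_b = nn.Linear(16 * 4 * 4, num_classes)   # may learn "text"

    def forward(self, x):
        z = self.backbone(x)
        return self.head_a(z), self.head_b(z)

def training_step(model, labelled, unlabelled, diversity_weight=0.1):
    x_l, y = labelled
    x_u = unlabelled
    logits_a, logits_b = model(x_l)
    # Both heads must fit the labels on in-distribution (labelled) data.
    fit_loss = F.cross_entropy(logits_a, y) + F.cross_entropy(logits_b, y)
    # On unlabelled data where the features come apart, penalise agreement
    # between the heads so each one tracks a different underlying feature.
    u_a, u_b = model(x_u)
    agreement = (F.softmax(u_a, dim=1) * F.softmax(u_b, dim=1)).sum(dim=1).mean()
    return fit_loss + diversity_weight * agreement
```

Minimising the agreement term is just one possible way to obtain classifiers that extrapolate the labels along different features; the benchmark itself does not prescribe a particular technique.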

Read more

Building Aligned AI

Stuart Armstrong, Aligned AI’s chief research officer, talks to the London Futurists about the power of AI, the challenge of alignment, and how to ensure our future is full of human flourishing.

Read more

Team

Rebecca Gorman
Rebecca grew up as a tech hobbyist in Silicon Valley and, as a technologist, has a lifelong dedication to finding ways of making technology serve users' true values. While working as a real estate agent in the Valley, she continued to engage with start-ups and pursue AI research. She spent the pandemic developing her alignment research ideas into papers with Dr Stuart Armstrong (then of the Future of Humanity Institute at the University of Oxford) and other researchers, and getting Aligned AI ready for launch.

Co-Founder and CEO

Dr Stuart Armstrong
Previously a Researcher at the University of Oxford’s Future of Humanity Institute, Stuart is a mathematician and philosopher and the originator of the value extrapolation approach to artificial intelligence alignment. He has extensive expertise in AI alignment research, having pioneered such ideas as interruptibility, low-impact AIs, counterfactual Oracle AIs, the difficulty/impossibility of AIs learning human preferences without assumptions, and how to nevertheless learn these preferences. Along with journal and conference publications, he posts his research extensively on the Alignment Forum.

Co-Founder and Chief Research Officer

Dr Adam Bell
Adam Bell holds a D.Phil. in Biochemistry from Oxford University and a J.D. from the University of California. He worked at the Viral and Rickettsial Disease Laboratory (VRDL) in California; he was patent counsel and interim board secretary for AcelRx, Inc., patent counsel for Durect, Inc., and patent attorney for Incyte Genomics. Adam presently serves on the boards of Scottish Bioenergy Ltd., IPLEGALED, Inc., PatentPathway.com and WABESO Enhanced Enzymatics, Inc. When not working, you will find Adam attempting to ski, climb and fly helicopters.

IP Counsel

Patrick Leask
After starting his career as a back-end engineer at start-ups including Thought Machine and MachineMax, Patrick became interested in AI alignment through the safety fundamentals course and has since completed a master's in theoretical computer science, with a dissertation on deep learning interpretability tools.

Technical Alignment Research Intern

Advisors

Dylan Hadfield-Menell
Assistant Professor of Artificial Intelligence at MIT, Co-Founder and Chief Scientist of Preamble, and an expert in cooperative inverse reinforcement learning

Research Advisor

Adam Gleave
Adam Gleave is an artificial intelligence PhD candidate at UC Berkeley working with the Center for Human-Compatible AI. His research focuses on adversarial robustness and reward learning, and his work on adversarial policies was featured in the MIT Technology Review and other media outlets.

Research Advisor

Justin Shovelain
Co-founder of Convergence, AI safety advisor to Causal Labs and Lionheart Ventures

Ethics and Safety Advisor

Dr Anders Sandberg
Fellow in Ethics and Values at Reuben College, Oxford, and Senior Researcher at the Future of Humanity Institute, Oxford

Information Hazards Policy Advisor

Romesh Ranawana
Serial entrepreneur, AI technologist, programmer and software architect with more than 20 years of deep tech development experience, and a highly experienced technology chief executive. Member of the Board of Management of the University of Colombo School of Computing and founding chairman of the SLASSCOM AI Center of Excellence (AICx). Co-founder of SimCentric Technologies and Co-Founder and CTO of Tengri UAV.

Commercialisation Advisor

Charles Pattison
Charles has 15 years' experience working in capital markets, from pricing derivatives to investing in listed and unlisted equities. He currently works at a large Asia-based equity-focused fund.

Finance Advisor