Press release

AI Alignment Lab Achieves Major Milestone on the Path Towards Agentic AI

Aligned AI, a leader in artificial intelligence (AI) research, has announced a groundbreaking advance against misgeneralization, a critical challenge in the field of AI. It is the first to solve a key benchmark called CoinRun by teaching an AI to “think” in human-like concepts. The technology underpinning the achievement opens the door to more precise, reliable, and controllable AI for a wide variety of real-world applications.

By teaching AI models to generalize in a manner more akin to agentic human cognition, Aligned AI’s innovation enables AI to correctly identify concepts across new situations and environments, reducing the need for prolonged production, testing, and retraining.

Misgeneralization occurs when AI systems learn incorrect patterns and behaviors from their training data and cannot adapt correctly when presented with new information, leading to unexpected, and often harmful, outcomes. Today’s foundation models suffer from varying degrees of misgeneralization, as evidenced by users’ ability to “jailbreak” them or by the trade-off they force between functionality and undesired behavior. The challenge of misgeneralization also holds the industry as a whole back: robust generalization is required for truly autonomous vehicles and for applying AI to other critical applications, because without it AIs cannot operate reliably in unfamiliar environments or discern the correct goals without human intervention.

To achieve this milestone, Aligned AI used the 2021 CoinRun misgeneralization benchmark, an Atari-style game released by researchers at Google DeepMind, the University of Cambridge, the University of Tübingen, and the University of Edinburgh. The goal of the benchmark is to test whether an AI can deduce a complex goal when that goal is spuriously correlated with a simpler goal in its training environment. The AI is rewarded for collecting a coin, which is always placed at the end of the level during training but is placed in a random location during testing, without any additional reward information being provided.

Prior to Aligned AI’s innovation, AIs trained on CoinRun learned that the best way to play the game was to go to the right while avoiding monsters and holes. Because the coin was always at the end of the level during training, this strategy seemed effective. When the AI encountered a new level where the coin was placed elsewhere, and was given no new information, it would ignore the coin and either miss it or collect it only by accident. ACE (short for “Algorithm for Concept Extrapolation”), the new AI developed by Aligned AI, notices the changes in the test environment and figures out that it should go for the coin, even without new reward information – just as a human would.
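
To make the failure mode concrete, the toy sketch below mimics the setup described above in a heavily simplified gridworld. It is not the real CoinRun environment or Aligned AI’s ACE algorithm: the level layout and both policies are invented purely for illustration, assuming one policy that has latched onto the spurious “go right” rule and another that tracks the coin concept itself.

```python
import random

LENGTH = 10  # number of columns in the toy level

def make_level(coin_at_end=True):
    """Return the coin position (x, y): y=0 is the floor, y=1 a platform.
    Train-style levels put the coin on the floor at the far right;
    test-style levels put it in a random column at a random height."""
    if coin_at_end:
        return (LENGTH - 1, 0)
    return (random.randrange(LENGTH), random.randrange(2))

def run_episode(policy, coin_at_end=True):
    coin = make_level(coin_at_end)
    x, y = 0, 0
    while x < LENGTH:
        if (x, y) == coin:
            return True              # coin collected
        dx, dy = policy((x, y), coin)
        x, y = x + dx, max(0, min(1, y + dy))
    return False                     # reached the level's end without the coin

# Spuriously generalizing policy: "run right along the floor", a rule that
# only worked in training because the coin always sat at the end of the level.
def go_right(agent, coin):
    return 1, 0

# Concept-tracking policy: head for the coin itself, wherever it is placed.
def go_to_coin(agent, coin):
    (x, y), (cx, cy) = agent, coin
    if x != cx:
        return (1 if cx > x else -1), 0
    return 0, (1 if cy > y else -1)

def success_rate(policy, coin_at_end, episodes=10_000):
    return sum(run_episode(policy, coin_at_end) for _ in range(episodes)) / episodes

for label, coin_at_end in [("coin at level end (train-like)", True),
                           ("coin randomized   (test-like) ", False)]:
    print(f"{label}  go_right: {success_rate(go_right, coin_at_end):.2f}"
          f"  go_to_coin: {success_rate(go_to_coin, coin_at_end):.2f}")
```

Run as written, the “go right” policy scores perfectly on train-style levels but only collects randomized coins it happens to walk over, while the concept-tracking policy succeeds in both cases, mirroring the behavior gap the benchmark is designed to expose.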

The key benefits of this breakthrough include:

  • Enhanced Safety: By reducing misgeneralization, AI systems become more reliable, ensuring they operate safely in a wide range of scenarios, from autonomous vehicles to robotics.

  • Improved Capabilities: It enables AI to better understand human intentions and make decisions that align with those intentions, significantly boosting its capabilities.

  • Ethical AI: It enhances the ethical aspects of AI by promoting fairness, transparency, and non-discrimination. AI systems that are precise, reliable, and interpretable are more likely to make ethical decisions by avoiding bias and aligning with human values.

  • Industry Impact: It’s poised to transform industries such as robotics, autonomous vehicles, and foundation models, making them more practical and applicable in various real-world settings.

“This isn’t just a game-changer for the world of AI, it’s a seismic shift for countless industries,” said Rebecca Gorman, Co-Founder and CEO of Aligned AI. “By significantly reducing misgeneralization and enhancing AI’s ability to understand and adapt to unforeseen scenarios, we’re opening doors to unparalleled opportunities across the board. From autonomous vehicles that can navigate from San Francisco to Phoenix on streets it’s never seen before, to robots that can operate effectively in a range of changing and unforeseen environments, this benchmark is the linchpin that will make these futuristic visions a reality. It’s not just about improving AI; it’s about revolutionizing how industries operate, innovate, and serve humanity.”

Aligned AI’s innovation addresses a critical problem facing all AI systems. When confronted with new environments, current AIs tend to extrapolate incorrectly from their training data. This is why an estimated 70% of models never make it into production or face prolonged production and testing timelines, hindering scalability and often requiring retraining within the first year of release.

“As AI increases in power and widespread use, generalization remains a challenge,” said John Sviokla, a pioneering researcher in AI and current co-founder of GAI Insights, an advisory firm that helps companies achieve ROI with generative AI. “Aligned AI’s research is a critical step forward in the safe, ethical, and effective use of AI across industries.”

Since it was founded, Aligned AI has been at the forefront of addressing the critical challenges facing AI development and deployment. In 2022, Aligned AI was the leader in ChatGPT jailbreak prevention, releasing the first prompt evaluator as an open-source project. In September 2023, Aligned AI was awarded the CogX prize for the “Best Innovation in Mitigating Algorithm Bias” for EquitAI, an algorithm that constrains LLMs to output gender-unbiased text, and faAIr, its algorithm for measuring and ranking gender bias in foundation models. Aligned AI’s previous work on concept extrapolation improves the performance of AI on out-of-distribution datasets and helps models behave safely while waiting for human feedback.

To learn more about Aligned AI and its misgeneralization breakthrough, please visit buildaligned.ai.

About Aligned AI:

Founded in Oxford by Rebecca Gorman and Dr. Stuart Armstrong, Aligned AI is a deep-tech startup enabling the next step change in AI by teaching AIs to understand and hold human-like concepts. Its core technology, “concept extrapolation,” enables an AI to extend its trainers’ intent beyond its training data, meaning it operates as it should even in new scenarios. Aligned AI believes that safety and capability are not trade-offs, but rather that an AI that is more precise and controllable is also more powerful.