Amazon is actively implementing artificial intelligence across its entire business to boost operational efficiency, improve customer satisfaction, and expand market share. But AI carries risks: these probabilistic systems don't always behave as expected and are prone to hallucinations. To help minimize those risks, Amazon and its subsidiary AWS (Amazon Web Services) are turning to a time-tested but little-known technology called automated reasoning.
Automated reasoning is a field of computer science that aims to provide greater certainty about the behavior of complex systems. Grounded in logic and mathematics, it gives adopters strong assurance that a system will do what it was designed to do.
Neha Rungta, director of applied science at AWS, holds a PhD in computer science from Brigham Young University and used automated reasoning technology while working at NASA's Ames Research Center in Northern California.
"This is about using mathematical logic to prove the correctness of a system, whether in its design, architecture, or code," Rungta said. "Traditionally, these techniques have been used in fields such as aerospace, where it is critical to ensure that the system is correct."
Since 2016, Rungta has been applying her expertise to help AWS improve the security of its services. Her AWS accomplishments include two products, IAM Access Analyzer and Amazon S3 Block Public Access. IAM Access Analyzer analyzes policies in AWS Identity and Access Management (IAM), a service that handles 2 billion requests per second.
"Amazon S3 Block Public Access is powered by automated reasoning, which guarantees that if a customer turns it on, their bucket will not grant unrestricted access to the public, either now or at any time in the future," Rungta said in an interview at re:Invent 2024 last week. "Even if AWS changes (because things change; we roll out new features and new products all the time), this bucket will not grant public access."
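The shape of that guarantee can be sketched in a toy model (this is not AWS's actual policy model; the setting names and the exhaustive check below are invented for illustration): enumerate every possible combination of a bucket's settings and confirm that none of them results in public access while the block is on.

```python
from itertools import product

# Hypothetical, highly simplified model of a bucket's access settings.
# A real policy model is far richer; each flag here is just a Boolean.
SETTINGS = ["acl_public_read", "policy_allows_public", "new_feature_flag"]

def is_public(block_public_access: bool, config: dict) -> bool:
    """Effective public access: a public grant applies only if not blocked."""
    if block_public_access:
        return False  # the block overrides every other setting
    return config["acl_public_read"] or config["policy_allows_public"]

def prove_never_public() -> bool:
    """Exhaustively check the property over all 2**3 configurations."""
    for values in product([False, True], repeat=len(SETTINGS)):
        config = dict(zip(SETTINGS, values))
        if is_public(block_public_access=True, config=config):
            return False  # found a counterexample
    return True  # property holds in every configuration

print(prove_never_public())  # True: no configuration grants public access
```

Because the check covers every configuration, including ones driven by a hypothetical `new_feature_flag`, the conclusion survives future feature additions that fit the model, which is the essence of the "now or at any time in the future" claim.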
At re:Invent last Tuesday, AWS announced that it is bringing automated reasoning to Amazon Bedrock, its service for training and running foundation models, including large language models (LLMs) and image models. The company says the capability, called Automated Reasoning checks, is "the first and only generative AI safeguard that helps prevent factual errors due to hallucinations using logically accurate and verifiable reasoning."
While neural networks, like the LLMs at the heart of GenAI, are powerful and offer greater predictive power than traditional machine learning techniques, they also tend to be opaque, which limits their usefulness in some areas. By running automated reasoning checks on top of GenAI models, customers can be more confident that a model won't misbehave for mysterious reasons.
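As a rough illustration of the idea (the rule set, field names, and sample answers below are all hypothetical, not Bedrock's actual mechanism), a check layer can translate a model's structured answer into variables and test it against hand-written rules before it reaches the user:

```python
# A toy guardrail: validate a model's structured answer against declarative
# rules. All names here are illustrative; a production system would compile
# its rules from policy documents rather than writing lambdas by hand.

RULES = [
    # (description, predicate over the answer)
    ("refund window is at most 30 days",
     lambda a: a.get("refund_days", 0) <= 30),
    ("refunds require a receipt",
     lambda a: not a.get("refund_approved") or a.get("has_receipt")),
]

def validate_answer(answer: dict) -> list:
    """Return descriptions of every rule the answer violates."""
    return [desc for desc, check in RULES if not check(answer)]

# An answer that hallucinated a 90-day refund window is flagged:
bad = {"refund_days": 90, "refund_approved": True, "has_receipt": True}
print(validate_answer(bad))   # ['refund window is at most 30 days']

# A compliant answer passes cleanly:
good = {"refund_days": 14, "refund_approved": True, "has_receipt": True}
print(validate_answer(good))  # []
```

The point is that the verdict is deterministic and explainable: a violation names the exact rule that failed, rather than relying on another opaque model to judge the first one.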
Rungta said this is very much a rules-based approach.
"These models are very different from what you think of as LLMs," she said. The way to think about them, she explained, is as a set of rules: a set of declarative statements about what is true of the system. What are the assumptions? Given a specific set of inputs, what output do you want to guarantee? Automated reasoning brings this rules-based approach to ensuring the correct behavior of probabilistic AI systems.
"There are different techniques for creating and analyzing these models," she continued. "Some are based on formal theorem proving. Others are based on satisfiability problems, so in the end it's essentially Boolean logic. And some are based on code analysis techniques. So they're very different from how you think of a large language model or a foundation model."
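The satisfiability approach Rungta mentions can be sketched in a few lines (the variable names and rules are invented; a real tool would hand the formula to a SAT or SMT solver rather than brute-forcing assignments): to prove that a set of rules entails a property, show that "rules hold and property is violated" has no satisfying assignment.

```python
from itertools import product

# Declarative rules as Boolean predicates over hypothetical system variables.
VARS = ["authenticated", "is_admin", "can_delete"]

def rules(env: dict) -> bool:
    """Conjunction of the system's declarative rules."""
    r1 = (not env["can_delete"]) or env["is_admin"]     # delete requires admin
    r2 = (not env["is_admin"]) or env["authenticated"]  # admin requires auth
    return r1 and r2

def property_holds(env: dict) -> bool:
    """Property to verify: unauthenticated users can never delete."""
    return (not env["can_delete"]) or env["authenticated"]

def satisfiable(formula) -> bool:
    """Is there any assignment of VARS that makes the formula true?"""
    return any(
        formula(dict(zip(VARS, values)))
        for values in product([False, True], repeat=len(VARS))
    )

# The property is proved if "rules hold AND property violated" is unsatisfiable.
counterexample_exists = satisfiable(lambda e: rules(e) and not property_holds(e))
print(counterexample_exists)  # False: the rules entail the property
```

Brute force works here because three Booleans give only eight assignments; industrial SAT solvers apply the same unsatisfiability argument to formulas with millions of variables.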
If automated reasoning can provide something like deterministic behavior for probabilistic systems, why isn't it more widely used? After all, the fear of LLMs doing or saying toxic or wrong things is one of the biggest concerns in the current GenAI boom, and it is keeping many companies from putting GenAI applications into production.
The reason, Rungta said, is that automated reasoning comes at a cost. It's not so much the computational cost of running an automated reasoning model, but the cost of developing and testing it. Adopters need expertise not only in this small branch of the AI field but also in the domains where automated reasoning is applied. That's why its use has so far been limited to the most sensitive areas, where getting the wrong answer could be disastrous.
"There's a lot of work to do. How do you know your rules are correct for a complex system?" she said. "It's not easy. You have to validate them. How do you know how your rules interact with the environment? You don't have rules for the whole world."
As some of these LLMs become smaller and better suited to specific domains, it becomes easier and cheaper to apply automated reasoning techniques to them, Rungta said. To that end, AWS also announced its new Amazon Bedrock Model Distillation product alongside Automated Reasoning checks; the two technologies complement each other.
As the GenAI era takes shape, Amazon is looking to become a leader. Amazon founder Jeff Bezos said at a New York Times conference this week that there are more than 1,000 artificial intelligence projects under way within the company. Business Insider reports that he is spending more time at the company to help bring some of these AI projects closer to completion.
As the era of AI agents begins, different agents will take on different jobs. We will likely see some agents acting as supervisors overseeing the work of others, and those areas may be where automated reasoning capabilities develop.
AWS is a pioneer in applying automated reasoning to artificial intelligence; no other company appears to be using the technology to improve the reliability of AI models and the applications they power. But Rungta is optimistic that the technology has much to offer and will ultimately help unlock the vast potential of artificial intelligence.
“I do think generative AI is going to change the way we live,” she said. "The models are improving every week, if not every day. It's a fascinating time."