
Forensic Tool Revives AI “Brains” to Diagnose Failures and Uncover Issues

The Rise of AI and the Need for Forensic Investigation

From drones delivering medical supplies to digital assistants managing our daily tasks, AI-powered systems are increasingly woven into the fabric of everyday life. The creators of these innovations promise transformative benefits, making applications like ChatGPT and Claude seem almost magical to many. However, beneath this veneer of enchantment lies a complex reality: these systems are not infallible. They can—and do—fail to operate as intended.

Understanding AI Failures

AI systems can malfunction for various reasons, including technical design flaws, biased training data, and vulnerabilities in their code that can be exploited by malicious hackers. When an AI system fails, isolating the cause is crucial for rectifying the issue. Yet, the challenge lies in the inherent opacity of AI systems, which often obscures the very mechanisms that led to their failure.

Investigating AI failures is not straightforward. While there are techniques for inspecting these systems, they typically require access to internal data, which is not always available—especially to forensic investigators tasked with uncovering the reasons behind a proprietary AI system’s malfunction. This lack of access can render investigations nearly impossible.

The Uncertainty of AI

Consider a scenario where a self-driving car unexpectedly veers off the road and crashes. Initial logs and sensor data may suggest that a faulty camera misinterpreted a road sign, leading the AI to swerve. In the aftermath of such a critical failure, investigators must determine the root cause of the error. Was it a technical malfunction, or could it have been a malicious attack on the AI system?

If investigators identify a vulnerability in the camera’s software, they must ascertain whether it directly contributed to the crash. However, this determination is far from simple. Existing forensic methods can recover some evidence from failures in drones, autonomous vehicles, and other cyber-physical systems, but they often fall short of capturing the comprehensive clues needed to fully investigate the AI involved. Advanced AI systems can continuously update their decision-making processes, complicating the investigation of the most current models.

The Need for AI Forensics

Researchers are actively working to make AI systems more transparent, but until those efforts bear fruit, forensic tools for understanding AI failures will remain essential.

Introducing AI Psychiatry

To address these challenges, a team of computer scientists at the Georgia Institute of Technology has developed a system known as AI Psychiatry (AIP). This innovative tool can recreate the scenarios in which an AI failed, enabling investigators to determine what went wrong. AI Psychiatry employs a series of forensic algorithms to isolate the data behind an AI system’s decision-making processes. By reassembling these components, investigators can "reanimate" the AI in a controlled environment and test it against malicious inputs to uncover harmful or hidden behaviors.

AI Psychiatry utilizes a memory image—a snapshot of the AI’s operational state at the time of failure. This memory image holds vital clues about the internal workings and decision-making processes of the AI. With AI Psychiatry, investigators can extract the exact AI model from memory, dissect its components, and load it into a secure environment for thorough testing.
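To make the rehosting step concrete, here is a minimal sketch, assuming the recovered state is a PyTorch parameter dump carved from the memory image. The file name and the TrafficSignNet architecture are hypothetical placeholders for illustration, not AI Psychiatry's actual code or interfaces.

```python
# Hypothetical sketch, not AI Psychiatry's actual code. Assumes the model's
# parameters were carved out of the vehicle's memory image and saved as a
# PyTorch state dict; the architecture below is a stand-in reconstructed from
# metadata that such a memory image would also contain.
import torch
import torch.nn as nn

class TrafficSignNet(nn.Module):
    """Stand-in street-sign classifier (43 classes, as in common sign datasets)."""
    def __init__(self, num_classes: int = 43):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Parameters recovered from the memory snapshot taken at the time of failure
# (hypothetical file name).
recovered_weights = torch.load("recovered_from_memory_image.pt", map_location="cpu")

# Rehost the exact model in an isolated environment and freeze it for inspection.
model = TrafficSignNet()
model.load_state_dict(recovered_weights)
model.eval()
```

The key point is that nothing is retrained: the investigator reconstructs the model exactly as it existed at the moment of failure and freezes it for inspection.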

Proven Effectiveness

In preliminary tests, AI Psychiatry was applied to 30 AI models, 24 of which were intentionally "backdoored" to produce incorrect outcomes under specific triggers. The system successfully recovered, rehosted, and tested every model, including those used in real-world applications like street sign recognition in autonomous vehicles. These results suggest that AI Psychiatry can effectively unravel the digital mysteries behind failures, such as an autonomous car crash, that would otherwise leave investigators with more questions than answers.
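Continuing the hypothetical sketch above, a backdoor probe of this kind might compare the rehosted model's decision on a clean input with its decision on the same input carrying a small trigger patch. The trigger and image below are illustrative assumptions, not the study's actual test inputs.

```python
# Continues the hypothetical sketch above: probe the rehosted `model` for a
# backdoor by comparing its decision on a clean image with its decision on the
# same image carrying a small trigger patch.
import torch

def apply_trigger(image: torch.Tensor, size: int = 6) -> torch.Tensor:
    """Paste a small white square into one corner of the image (a toy trigger)."""
    patched = image.clone()
    patched[:, :, :size, :size] = 1.0
    return patched

stop_sign = torch.rand(1, 3, 32, 32)  # placeholder for a real stop-sign image

with torch.no_grad():
    clean_pred = model(stop_sign).argmax(dim=1).item()
    triggered_pred = model(apply_trigger(stop_sign)).argmax(dim=1).item()

# A tiny, semantically meaningless patch should not change the prediction;
# if it does, the recovered model behaves like the backdoored classifiers
# described above.
if clean_pred != triggered_pred:
    print(f"Suspicious: trigger flipped the prediction {clean_pred} -> {triggered_pred}")
else:
    print("No divergence observed for this trigger.")
```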

If AI Psychiatry does not identify a vulnerability within the AI system, it allows investigators to rule out the AI as the cause and explore other potential issues, such as a malfunctioning camera.

Beyond Autonomous Vehicles

The core algorithm of AI Psychiatry is designed to be generic, focusing on the universal components that all AI models require for decision-making. This adaptability makes the approach extendable to any AI model utilizing popular development frameworks. Whether the AI in question is a recommendation bot or a system managing autonomous drone fleets, AI Psychiatry can recover and rehost the AI for analysis.
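As an illustration only (AI Psychiatry's internal recovery algorithm is not reproduced here), the rehosting idea is framework-agnostic in spirit: whatever runtime produced the recovered artifact, the investigator loads it into a sandbox through the appropriate loader. The detection heuristic and paths below are assumptions.

```python
# Illustration only: this toy dispatcher conveys the idea that a recovered
# artifact can be rehosted in a sandbox regardless of which popular framework
# produced it. The detection heuristic and paths are assumptions.
import os
import torch

def rehost(recovered_path: str):
    """Load a recovered model artifact into a sandbox, whatever its framework."""
    if os.path.isdir(recovered_path) and "saved_model.pb" in os.listdir(recovered_path):
        import tensorflow as tf  # TensorFlow SavedModel directory
        return tf.saved_model.load(recovered_path)
    # Otherwise assume a serialized PyTorch object carved from the memory image.
    return torch.load(recovered_path, map_location="cpu")
```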

Moreover, AI Psychiatry is entirely open-source, making it accessible for any investigator seeking to assess a model without needing prior knowledge of its specific architecture.

A Tool for Proactive Audits

AI Psychiatry can also serve as a valuable resource for conducting audits on AI systems before issues arise. As government agencies—from law enforcement to child protective services—integrate AI systems into their workflows, AI audits are becoming increasingly common oversight requirements at the state level. With a tool like AI Psychiatry, auditors can apply a consistent forensic methodology across diverse AI platforms and deployments.
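For example, an auditor might run one fixed battery of probe inputs against every rehosted model and compare the resulting decision records. The harness below is a self-contained toy with stand-in models and random probes, meant only to show the shape of such a methodology.

```python
# Self-contained toy audit harness: one fixed battery of probes applied to every
# rehosted model, producing comparable decision records. Models and probes here
# are random stand-ins for illustration.
import torch
import torch.nn as nn

def audit(model: nn.Module, probes: list) -> dict:
    """Run the probe battery and record each predicted class."""
    model.eval()
    with torch.no_grad():
        return {i: model(p).argmax(dim=1).item() for i, p in enumerate(probes)}

# Stand-ins for models recovered and rehosted from two different deployments.
rehosted_models = {
    "deployment_A": nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 43)),
    "deployment_B": nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 43)),
}

probes = [torch.rand(1, 3, 32, 32) for _ in range(10)]  # placeholder probe inputs
reports = {name: audit(m, probes) for name, m in rehosted_models.items()}
print(reports)
```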

In the long run, this proactive approach will yield meaningful benefits for both the creators of AI systems and the individuals affected by their operations.

Conclusion

As AI continues to permeate our daily lives, understanding and addressing its failures becomes increasingly vital. Tools like AI Psychiatry not only enhance our ability to investigate AI malfunctions but also pave the way for more robust and reliable AI systems in the future. By fostering transparency and accountability, we can harness the full potential of AI while mitigating the risks associated with its deployment.
