
Understanding Vulnerabilities in AI: NIST’s Insights


In a recent publication, computer scientists from the National Institute of Standards and Technology (NIST) and their collaborators shed light on the vulnerabilities of artificial intelligence (AI) and machine learning (ML) systems, emphasising the potential for deliberate manipulation or “poisoning” by adversaries. The work, titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2),” is part of NIST’s broader initiative to foster the development of trustworthy AI.

Image credits: nist.gov

The researchers acknowledged that AI systems have become integral to modern society, performing tasks ranging from autonomous driving to medical diagnosis and customer interactions via online chatbots. These systems rely on extensive training datasets to learn and predict outcomes in various situations. However, a critical issue arises from the questionable trustworthiness of the data itself, sourced from websites and public interactions. Bad actors can exploit this vulnerability during the training phase and subsequent interactions, leading AI systems to exhibit undesirable behaviours.

One notable concern highlighted in the publication is the lack of foolproof defence mechanisms to protect AI systems from misdirection. The sheer size of training datasets makes it challenging for developers to effectively monitor and filter content, leaving room for malicious actors to corrupt the data. For instance, chatbots may inadvertently learn to respond with abusive or racist language when exposed to carefully crafted malicious prompts.

The report identified four major types of attacks on AI systems: evasion, poisoning, privacy, and abuse attacks. Evasion attacks, mounted after deployment, subtly alter an input so that the system misclassifies it or responds incorrectly. Poisoning attacks occur during the training phase and introduce corrupted data into the training set. Privacy attacks, also mounted during deployment, attempt to extract sensitive information about the AI model or its training data. Abuse attacks insert incorrect information into an otherwise legitimate source, such as a webpage, which the AI later absorbs.
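To make the poisoning attack described above concrete, the following toy sketch (not taken from the NIST report; all data, labels, and function names are hypothetical) shows how injecting a handful of mislabelled training points can change what a very simple classifier predicts:

```python
# Illustrative data-poisoning sketch on a toy nearest-centroid classifier.
# Everything here is hypothetical and exists only to show the mechanism:
# corrupting the training data shifts the model's behaviour at inference time.

def centroid(points):
    """Component-wise mean of a list of equal-length tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(samples):
    """samples: list of (features, label); returns one centroid per label."""
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    """Predict the label whose centroid is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda y: dist(model[y]))

clean = [((0.0, 0.0), "benign"), ((0.2, 0.1), "benign"),
         ((5.0, 5.0), "malicious"), ((5.2, 4.9), "malicious")]

# The attacker injects a few far-away points labelled "malicious",
# dragging that class's centroid away from the real malicious region.
poisoned = clean + [((20.0, 20.0), "malicious"),
                    ((21.0, 19.0), "malicious"),
                    ((19.0, 21.0), "malicious")]

probe = (4.9, 5.0)  # an input squarely in the "malicious" region
print(predict(train(clean), probe))     # malicious
print(predict(train(poisoned), probe))  # benign -- the attack succeeded
```

Real poisoning attacks on large models are far subtler, but the underlying issue is the same one the report raises: a model is only as trustworthy as the data it was trained on, and a small fraction of corrupted samples can redirect its behaviour.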

The authors classified these attacks based on criteria such as the attacker’s goals, capabilities, and knowledge. They stressed the importance of understanding these attack vectors and their potential consequences, as well as the limitations of existing defences. The report offers an overview of the various attacks AI products might face and proposes corresponding approaches to minimise the damage.

The publication underscored the challenges associated with securing AI systems, especially given the theoretical problems that remain unsolved. Despite the significant progress in AI and ML technologies, the authors caution that these systems are susceptible to attacks with potentially severe consequences. They emphasised the need for heightened awareness among developers and organisations deploying AI technology, noting that existing defences against adversarial attacks are incomplete at best.

Further, based on previous reports from OpenGov, the U.S. has also taken strides to tackle the deepfake challenge. In response to the escalating threat of deepfakes, the National Security Agency (NSA) and its federal partners, including the Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA), have issued new guidance aimed at addressing the cybersecurity risks posed by synthetic media.

Deepfakes, a type of manipulated or artificially created multimedia content using machine learning and deep learning technologies, present significant challenges for National Security Systems (NSS), the Department of Defence (DoD), and organisations within the Defence Industrial Base (DIB).

The collaborative effort has resulted in the publication of a Cybersecurity Information Sheet (CSI) titled “Contextualising Deepfake Threats to Organisations.” The document serves as a comprehensive guide for entities to recognise, safeguard against, and respond to deepfake threats effectively. The term “deepfake” encompasses a broad range of synthetically generated or altered media, including “shallow/cheap fakes,” “generative AI,” and “computer-generated imagery (CGI).”

The joint CSI offers practical recommendations for organisations to counter synthetic media threats, particularly deepfakes. One key suggestion involves the adoption of real-time verification capabilities, enabling swift identification and response to potential instances of fake content. Passive detection techniques are also emphasised for ongoing monitoring and early detection. The guidance stresses the importance of safeguarding high-priority officers and their communications, as they are frequent targets of deepfake attempts.
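One narrow building block behind the "real-time verification" idea is integrity checking: confirming that a piece of media matches a hash published by its legitimate source. The sketch below (an illustrative assumption, not a technique prescribed by the CSI; the filename and data are hypothetical) shows the mechanism with a cryptographic hash:

```python
# Minimal integrity-verification sketch using SHA-256.
# This only proves a file matches a known authentic original; it does NOT
# detect synthetic media in general, which requires far more sophisticated
# (and still imperfect) detection techniques.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical registry of hashes published by the authentic source.
published = {"briefing_2024.mp4": sha256_of(b"authentic media bytes")}

def verify(name: str, data: bytes) -> bool:
    """Return True only if the received bytes match the published hash."""
    expected = published.get(name)
    return expected is not None and sha256_of(data) == expected

print(verify("briefing_2024.mp4", b"authentic media bytes"))  # True
print(verify("briefing_2024.mp4", b"tampered media bytes"))   # False
```

Hash-based verification only helps when an authentic original exists to compare against; detecting wholly fabricated content, as the guidance notes, relies on passive detection techniques and ongoing monitoring instead.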

The NIST publication serves as a comprehensive guide for AI developers and users, offering insights into potential threats and mitigation strategies. It encourages the community to contribute to the development of more robust defences against adversarial attacks on AI systems, recognising that a one-size-fits-all solution does not currently exist in this evolving landscape.
