Singapore’s commitment to cybersecurity takes a significant leap forward as the Cyber Security Agency of Singapore (CSA) partners with the US Cybersecurity and Infrastructure Security Agency (CISA) to launch the inaugural Singapore-Industrial Control Systems Cybersecurity 301 (SG-ICS301) course. This endeavour coincides with the Operational Technology Cybersecurity Expert Panel (OTCEP) Forum 2023, symbolising a vital milestone in safeguarding critical infrastructures against cyber threats.
The SG-ICS301 course, inspired by CISA’s highly successful ICS301 programme, is geared toward equipping approximately 40 participants hailing from Singapore, ASEAN countries, Bangladesh, and the Maldives with the knowledge and skills necessary to fortify Operational Technology (OT) networks and secure Critical Information Infrastructure (CII) OT systems against ever-evolving cyber threats.
OT encompasses the interconnection of devices and computers to monitor and manage physical processes, predominantly employed in industrial sectors like energy, manufacturing, transportation, and water management.
In a world where cyber threats are growing increasingly sophisticated, this four-day course stands as a robust response to the pressing need for competent engineers and cybersecurity professionals capable of defending OT systems. By delving into the concepts, strategies, and technologies that underpin the protection of Operational Technology (OT) systems, this course equips participants with the knowledge and skills required to safeguard critical infrastructure and industrial processes.
Additionally, CSA has taken a significant step in bolstering the nation’s cyber defences by signing a three-year Memorandum of Understanding (MoU) with an industrial cybersecurity company. This strategic partnership aims to address the escalating cyber threats targeting Operational Technology (OT) systems and enhance Singapore’s preparedness against potential cyber-attacks.
David Koh, Commissioner of Cybersecurity and Chief Executive of CSA, underscored the significance of the partnership. He highlighted the growing interconnectivity of OT systems with the internet and the potential real-world consequences of cyber-attacks on critical services. This partnership reflects Singapore’s commitment to strengthening its technological capabilities in OT cybersecurity and highlights the importance of collaboration in defending against evolving cyber threats.
Singapore recognises the imperative need to adopt a proactive stance in safeguarding its critical infrastructure. CSA’s collaboration with an industrial cybersecurity company aligns seamlessly with the nation’s overarching OT Cybersecurity Masterplan, which seeks to fortify the resilience of CII sectors that rely on OT technologies. These sectors include energy, manufacturing, transportation, and water, where OT systems are the backbone of operations.
The MoU encompasses several critical areas of cooperation. First is threat intelligence sharing, which involves the exchange of valuable insights and expertise in the realm of threat detection and hunting. This will equip CSA and CII sector leads with the necessary knowledge to identify and mitigate cyber threats promptly.
Also, the partnership entails consultancy and risk assessment activities. These will involve architecture reviews and cyber risk assessments in OT CII sectors, enabling the identification of vulnerabilities and the implementation of proactive security measures.
Incident response capabilities are another focal point of the MoU. It recognises the significance of developing a robust OT cybersecurity incident response framework. By leveraging an industrial cybersecurity platform technology, Singapore aims to enhance its ability to address sophisticated OT cyber-attacks that demand niche or deep expertise.
Besides, the MoU emphasises the importance of information exchange and training. This will provide a platform for sharing ideas, insights, and expertise. Talent development programmes will be established to nurture local cybersecurity talent and align them with industry best practices, fostering a strong and resilient cybersecurity ecosystem in Singapore.
The MoU will extend its benefits to the broader CII sectors. CII owners will gain access to expert knowledge and resources to fortify their cybersecurity posture. Simultaneously, local cybersecurity companies will have the opportunity to collaborate with each other fostering innovation and knowledge transfer within the industry.
Liming Zhu and Qinghua Lu, leaders in the study of responsible AI at CSIRO and co-authors of ‘Responsible AI: Best Practices for Creating Trustworthy AI Systems’, delve into the realm of responsible AI through their extensive work and research.
Artificial Intelligence (AI), currently a major focal point, is revolutionising almost all facets of life, presenting entirely novel methods and approaches. The latest trend, Generative AI, has taken the helm, crafting content from cover letters to campaign strategies and conjuring remarkable visuals from scratch.
Global regulators, leaders, researchers and the tech industry grapple with the substantial risks posed by AI. Ethical concerns loom large due to human biases, which, when embedded in AI training, can exacerbate discrimination. Mismanaged data without diverse representation can lead to real harm, evidenced by instances like biased facial recognition and unfair loan assessments. These underscore the need for thorough checks before deploying AI systems to prevent such harmful consequences.
The looming threat of AI-driven misinformation, including deepfakes and deceptive content, is a concern for everyone, raising fears of identity impersonation online. The pivotal question remains: How do we harness AI’s potential for positive impact while effectively mitigating its capacity for harm?
Responsible AI, as Liming Zhu and Qinghua Lu advocate, involves the conscientious development and application of AI systems to benefit individuals, communities, and society while mitigating potential negative impacts.
These principles emphasise eight key areas for ethical AI practices. Firstly, AI should prioritise human, societal, and environmental well-being throughout its lifecycle, exemplified by its use in healthcare or environmental protection. Secondly, AI systems should uphold human-centred values, respecting rights and diversity. However, reconciling different user needs poses challenges. Ensuring fairness is crucial to prevent discrimination, highlighted by critiques of technologies like Amazon’s Facial Recognition.
Moreover, maintaining privacy protection, reliability, and safety is imperative. Instances like Clearview AI’s privacy breaches underscore the importance of safeguarding personal data and conducting pilot studies to prevent unforeseen harms, as witnessed with the chatbot Tay generating offensive content due to vulnerabilities.
Transparency and explainability in AI use are vital, requiring clear disclosure of AI limitations. Contestability enables people to challenge AI outcomes or usage, while accountability demands identification and responsibility from those involved in AI development and deployment. Upholding these principles can encourage ethical and responsible AI behaviour across industries, ensuring human oversight of AI systems.
Identifying problematic AI behaviour can be challenging, especially when AI algorithms drive high-stakes decisions impacting specific individuals. An alarming instance in the U.S. resulted in a longer prison sentence determined by an algorithm, showcasing the dangers of such applications. Qinghua highlighted the issue with “black box” AI systems, where users and affected parties lack insight into and means to challenge decisions made by these algorithms.
Liming emphasised the inherent complexity and autonomy of AI, making it difficult to ensure complete compliance with responsible AI principles before deployment. Therefore, user monitoring of AI becomes crucial. Users must be vigilant and report any violations or discrepancies to the service provider or authorities.
Holding AI service and product providers accountable is essential in shaping a future where AI operates ethically and responsibly. This call for vigilance and action from users is instrumental in creating a safer and more accountable AI landscape.
Australia is committed to the fair and responsible use of technology, especially artificial intelligence. During discussions held on the sidelines of the APEC Economic Leaders Meeting in San Francisco, the Australian Prime Minister unveiled the government’s commitment to responsibly harnessing generative artificial intelligence (AI) within the public sector.
The DTA-facilitated collaboration showcases the Australian Government’s proactive investment in preparing citizens for job landscape changes. Starting with a six-month trial from January to June 2024, Australia leads globally in deploying advanced AI services. This initiative enables APS staff to innovate using generative AI, aiming to overhaul government services and meet evolving Australian needs.
Amid the relentless surge of cybersecurity threats, governments and technology agencies must embrace heightened awareness and implement meticulous data protection strategies. The escalating cyber threats necessitate a proactive stance, where staying one step ahead is crucial to safeguarding crucial information assets.
In this dynamic digital landscape, where information is a commodity, governments must acknowledge the evolving nature of cyber threats and continuously fortify their cybersecurity measures. Rapid technological advancements bring new challenges, requiring adaptive and innovative solutions to address potential vulnerabilities.
Collaboration between government bodies, regulatory agencies, and technology experts is paramount in fostering a collective defence against cyber threats towards data privacy. Sharing insights, intelligence, and best practices creates a robust cybersecurity ecosystem capable of anticipating and mitigating emerging risks.
To secure public information and ensure data privacy, Mr Prasert Chandraruangthong, the Minister of Digital Economy and Society, has initiated measures to combat leaks and the illicit trade of personal information. Recognising the situation’s urgency, the Minister outlined a comprehensive plan divided into three periods—30 days, six months, and 12 months.
During the first 30-day period, the Office of the Personal Data Protection Commission (PDPC) established the Personal Data Violation Surveillance Centre to investigate public information disclosures promptly. The operations conducted this November inspected 3,119 government and private sector agencies. The PDPC detected data leaks in 1,158 cases, leading to corrective actions taken by the agencies in 781 instances. Notably, three issues of personal data trading were uncovered, prompting investigations and prosecutions in collaboration with the Police Technology Crime Investigation Headquarters.
Simultaneously, the PDPC, under the directive of the Police Technology Crime Investigation Headquarters, expedited inspections of 9,000 agencies within the next 30 days. This initiative targeted government agencies deemed critical information infrastructure (CII), including those in the energy, public health, government services, finance, and banking sectors.
During the inspections, the cybersecurity systems of 91 agencies were examined. Of these, 21 were identified as having high levels of risk, prompting corrective actions by the National Broadcasting and Telecommunications Commission (NBTC).
The third measure involves collaborative efforts between the National Council for Peace and Order (NCPO), NBTC, and relevant agencies such as the Thai Chamber of Commerce, Federation of Thai Industries, Thai Bankers Association, Thai Life Assurance Association, Thai Hotel Association, and the media sector network. The objective is to raise awareness about personal data protection and prevent potential risks from inadequate security procedures. This includes knowledge-sharing sessions on maintaining cybersecurity through Cybersecurity Awareness Training. The collaborative initiative emphasises preventing intrusion from outsiders, securing system settings, and enforcing the law within the purview of the authorities.
For the subsequent six-month period, the Ministry of Digital Economy and Society and the Department of Special Investigation (DSI) will expedite efforts to block illegal trading of personal information. Offenders will be actively pursued, prosecuted, and arrested to ensure a swift and effective response in safeguarding the privacy and security of individuals’ data.
This strategy underscores the government’s commitment to leveraging digital technology to fortify data protection measures and create a safer online environment for all citizens by partnering with other entities.
OpenGov Asia reported that Thailand is strategically addressing escalating cybersecurity concerns with a multi-faceted approach involving tech, partnerships, specialised task forces, public relations efforts and training programmes to fortify cyber resilience and foster innovation.
The Minister of Digital Economy and Society, Mr Prasert Chandraruangthong, along with Professor Wisit Wisitsaratha, Permanent Secretary of the Ministry of Digital Economy and Society, and ministry executives from affiliated agencies, recently conducted a meeting to review strategies to address cybercrime problems, notably personal data leaks. Thus far, Thailand has generated several ideas concerning cyber threats, particularly in financial cybersecurity, and Mr Prasert Chandraruangthong has initiated several steps and frameworks to address these issues.
In a gathering in New Delhi that reflected the significance of the issue, the Secretary of the Department of Financial Services (DFS), under the Ministry of Finance, spearheaded a comprehensive discourse. The focus was on unveiling the challenges and strategising against the burgeoning threats of cybercrime in the financial services sector, particularly the surge in online financial fraud incidents.
Critical issues discussed encompassed the imperative need for enhanced coordination among police, banks, and financial entities for real-time tracking and blocking of defrauded funds. Additionally, strategies to tackle the proliferation of mule accounts, augment response times to handle alerts on online financial frauds, and establish regional/state-level nodal officers were highlighted.
The meeting also emphasised the necessity of a central registry for merchant onboarding and KYC standardisation, as well as the importance of whitelisting digital lending apps through stakeholder consultation. Progress updates on implementing recommendations, such as setting up the Digital India Trust Agency (DIGITA) and the proposed legislation known as the ‘Banning of Unregulated Lending Activities (BULA) Act,’ were also on the agenda.
Lastly, an overarching consensus emerged: all stakeholders, including banks and financial institutions, must prioritise customer awareness and sensitisation programs to bolster digital payment security.
Attendees, including the Secretary of Telecom and high-ranking officials from multiple sectors such as DFS, Department of Economic Affairs (DEA), Department of Revenue (DoR), Ministry of Electronics & Information Technology (MeitY), Department of Telecom (DoT), Reserve Bank of India (RBI), Telecom Regulatory Authority of India (TRAI), Unique Identification Authority of India (UIDAI), Indian Cyber Crime Co-ordination Centre (I4C), National Payments Corporation of India (NPCI), as well as leading banks and financial institutions like State Bank of India (SBI), Bank of Baroda, Canara Bank, and others, converged for this pivotal discussion.
The Indian Cyber Crime Coordination Centre (I4C) from the Ministry of Home Affairs shared a concerning presentation. They highlighted the escalating statistics of digital payment frauds culled from the National Cyber Crime Reporting Portal (NCRP), shedding light on the diverse sources of financial frauds and the intricate modus operandi adopted by cybercriminals. This included exploring the challenges impeding efforts to counter these financial cybercrimes.
The meeting went beyond discussion to examine preparedness. Participants assessed the readiness of banks and financial institutions to confront the escalating challenges posed by cyber threats in the financial domain. They delved into the rising trend of digital payment frauds and crafted a focused strategy to combat these attacks and scams head-on.
Moreover, key players like the State Bank of India (SBI), PayTM, and Razorpay showcased their distinct strategies for mitigating such fraudulent activities. SBI presented its Proactive Risk Monitoring (PRM) strategy, while representatives from PayTM and Razorpay shared their successful best practices.
Key takeaways from the deliberations included noteworthy statistics: 70 lakh mobile connections implicated in cybercrime/financial frauds were disconnected via digital intelligence platforms. Furthermore, a staggering Rs. 900 crore of defrauded money was safeguarded, benefiting approximately 3.5 lakh victims.
The meeting chaired by the DFS Secretary served as a pivotal juncture to fortify India’s financial sector against cyber threats. Collaboration among various agencies unveiled potent measures, disconnecting implicated mobile connections and safeguarding substantial defrauded funds. The discussions culminated in a roadmap to shield citizens from the insidious web of financial frauds, showcasing a unified resolve to combat cyber threats in the country’s financial ecosystem.
A research initiative spearheaded by the University of Wollongong (UOW) has secured a substantial grant of AU$445,000 under the Australian Research Council (ARC) Linkage Projects Scheme. The primary focus of this project is to enhance the security protocols for unmanned aerial vehicles (UAVs), commonly known as drones, in the face of potential adversarial machine-learning attacks. The funding underscores the significance of safeguarding critical and emerging technologies, aligning with the strategic vision of the Australian Government.
Heading the project is Distinguished Professor Willy Susilo, an internationally recognised authority in the realms of cyber security and cryptography. Professor Susilo, expressing the overarching goal of the research, emphasised the deployment of innovative methodologies to fortify UAV systems against adversarial exploits targeting vulnerabilities within machine learning models.
Collaborating on this ambitious endeavour are distinguished researchers from the UOW Faculty of Engineering and Information Sciences. The team comprises Associate Professor Jun Yan, Professor Son Lam Phung, Dr Yannan Li, Associate Professor Yang-Wai (Casey) Chow, and Professor Jun Shen. Collectively, their expertise spans various domains essential to the comprehensive understanding and mitigation of cyber threats posed to UAVs.
Highlighting the broader implications of the project, Professor Susilo underscored the pivotal role UAV-related technologies play in contributing to Australia’s economic, environmental, and societal well-being. From facilitating logistics and environmental monitoring to revolutionising smart farming and disaster management, the potential benefits are vast. However, a significant hurdle lies in the vulnerability of machine learning models embedded in UAV systems to adversarial attacks, impeding their widespread adoption across industries.
The project’s core objective revolves around developing robust defences tailored to UAV systems, effectively shielding them from adversarial machine-learning attacks. The research team aims to scrutinise various attack vectors on UAVs and subsequently devise countermeasures to neutralise these threats. By doing so, they anticipate a substantial improvement in the security posture of UAV systems, thus fostering increased reliability in their application for transport and logistics services.
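To illustrate the class of threat the project targets, the sketch below shows a Fast Gradient Sign Method (FGSM)-style adversarial perturbation against a toy logistic classifier. This is an illustrative assumption, not the project’s actual models or attack vectors: the point is only that a small, deliberately chosen input shift can flip a learned model’s confidence.

```python
import numpy as np

# Hypothetical sketch of an FGSM-style adversarial perturbation against a
# simple logistic classifier. The model, weights, and data are illustrative;
# real attacks on UAV perception systems target far larger neural networks.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Shift input x by eps in the direction that increases the loss."""
    p = sigmoid(w @ x + b)           # model's predicted probability
    grad = (p - y) * w               # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad)   # FGSM step: small move, maximal damage

rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0
x = rng.normal(size=8)
y = 1.0 if sigmoid(w @ x + b) > 0.5 else 0.0   # model's own clean label

x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
print("clean score:", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))
```

Defences of the kind the UOW team is developing aim to detect or neutralise such perturbations, for example through adversarial training or input sanitisation.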
Professor Susilo emphasised that the enhanced security measures resulting from this research would play a pivotal role in bolstering the widespread adoption of UAVs, particularly in supporting both urban and regional communities. This is particularly pertinent given the multifaceted advantages UAVs offer, ranging from efficiency in logistics to rapid response capabilities in disaster management scenarios.
The significance of the project extends beyond academic realms, with Deloitte Access Economics projecting profound economic and employment impacts. The Australian UAV industry is expected to generate a substantial 5,500 new jobs annually, contributing significantly to the nation’s Gross Domestic Product with an estimated increase of AU$14.5 billion by 2040. Additionally, the research outcomes are anticipated to yield cost savings of AU$9.3 billion across various sectors.
The ARC Linkage Program, which serves as the backbone for this collaborative initiative, actively promotes partnerships between higher education institutions and other entities within the research and innovation ecosystem. Noteworthy partners in this venture include Sky Shine Innovation, Hover UAV, Charles Sturt University, and the University of Southern Queensland, collectively contributing to the multidimensional expertise required for the project’s success.
The UOW-led project represents a concerted effort to fortify the foundations of UAV technology by addressing critical vulnerabilities posed by adversarial machine-learning attacks. Beyond the academic realm, the outcomes of this research hold the promise of reshaping Australia’s technological landscape, ushering in an era of increased reliability, economic growth, and job creation within the burgeoning UAV industry.
Researchers at the Pritzker School of Molecular Engineering (PME) at the University of Chicago have harnessed the power of machine learning to revolutionise vaccine design. In a study published in Chemical Science, the team utilised artificial intelligence (AI) to guide the discovery of small molecules called immunomodulators, potentially paving the way for more effective vaccines and robust immunotherapies for cancer treatment.
The challenge lies in navigating a vast chemical space: the number of drug-like small molecules has been estimated at 10^60—surpassing the number of stars in the visible universe. To tackle this complexity, the researchers employed machine learning to guide high-throughput experimental screening, providing a systematic and efficient approach to identifying molecules that could induce the desired immune response.
Professor Aaron Esser-Kahn emphasised, “We used artificial intelligence methods to guide a search of a huge chemical space. In doing so, we found molecules with record-level performance that no human would have suggested we try.”
The AI-guided approach marked a potential first in the field of vaccine design. Professor Andrew Ferguson, who led the machine learning efforts, highlighted the transferability of tools from drug design to immunomodulator discovery. While machine learning is commonly employed in drug design, its application in this manner for immunomodulators is an advancement.
Immunomodulators alter the signalling activity of innate immune pathways within the body, particularly the NF-κB and IRF pathways. These pathways are crucial in inflammation, immune activation, and antiviral response. Previous high-throughput screens identified molecules that enhanced antibody response and reduced inflammation when added to adjuvants in vaccines.
The team integrated the results with a library of nearly 140,000 commercially available small molecules to expand the pool of candidates further. Graduate student Yifeng (Oliver) Tang utilised active learning, a machine learning technique, to efficiently navigate molecular space during experimental screening. This iterative process, guided by the AI model, uncovered high-performing small molecules that had never been identified.
After four cycles and sampling only about 2% of the library, the team discovered molecules that improved NF-κB and IRF activity. One standout molecule demonstrated a three-fold enhancement of IFN-β production when delivered with a STING agonist, holding promise for more potent cancer treatments.
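The iterative loop described above—train a surrogate on tested molecules, pick the predicted best untested batch, measure, retrain—can be sketched as a minimal pool-based active-learning simulation. Everything here is assumed for illustration (a linear surrogate, a simulated “assay”, a small toy pool); the actual study used richer models and wet-lab screening of ~140,000 molecules.

```python
import numpy as np

# Minimal sketch of pool-based active learning over a candidate library.
# The linear surrogate, the simulated assay, and the pool size are all
# illustrative assumptions, not the study's actual setup.

rng = np.random.default_rng(1)
n_pool, n_feat = 2000, 16
pool = rng.normal(size=(n_pool, n_feat))          # featurised candidates
true_w = rng.normal(size=n_feat)

def assay(X):                                     # stand-in for the lab screen
    return X @ true_w + rng.normal(scale=0.1, size=len(X))

tested = list(rng.choice(n_pool, 32, replace=False))  # initial random batch
y = assay(pool[tested])

for cycle in range(4):                            # four rounds, as in the study
    X = pool[tested]
    # ridge-regularised least-squares surrogate fit to results so far
    w = np.linalg.solve(X.T @ X + 1e-2 * np.eye(n_feat), X.T @ y)
    scores = pool @ w
    scores[tested] = -np.inf                      # don't re-test known molecules
    batch = np.argsort(scores)[-32:]              # predicted top batch next
    tested.extend(batch)
    y = np.concatenate([y, assay(pool[batch])])

print(f"tested {len(tested)} of {n_pool} candidates, "
      f"best activity found: {y.max():.2f}")
```

The key property mirrors the study’s result: by letting the model choose each batch, the loop finds near-top candidates while assaying only a small fraction of the pool.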
Professor Esser-Kahn highlighted, “The challenge with STING has been that you cannot get enough immune activity in the tumour or have off-target activity. The molecule we found outperformed the best-published molecules by 20%.”
The researchers identified several generalist immunomodulators capable of modifying pathways when co-delivered with agonists. These molecules could have broad applications across various vaccines, simplifying the path to market.
The team plans to continue this process, searching for more molecules and urging collaboration within the scientific community to share datasets for more exploration. Their future goals include screening molecules for specific immune activity, such as activating certain T-cells or discovering combinations that provide better control of the immune response.
In the quest to find molecules that can effectively treat diseases, the intersection of machine learning and immunomodulator discovery opens new possibilities for advancing medical science and developing innovative solutions for vaccine design and cancer immunotherapy.
Optical scientists have devised a novel method to significantly enhance the potency of fibre lasers while preserving their beam quality, positioning them as a pivotal defence technology against low-cost drones and other applications such as remote sensing.
The collaborative effort involved researchers from the University of South Australia (UniSA), the University of Adelaide (UoA), and Yale University, and their achievement is documented in the prestigious scientific journal Nature Communications.
Dr Linh Nguyen, a co-first author of the research and a researcher at UniSA’s Future Industries Institute, elucidates that the innovative approach demonstrated in the study can amplify the power in fibre lasers by three-to-nine times using multimode optical fibre, all the while maintaining beam quality crucial for focusing on distant targets. This technological breakthrough holds immense potential for various applications, with particular emphasis on its role in the defence industry, where high-power fibre lasers play a vital role.
Dr Nguyen underscores the significance of high-power fibre lasers in manufacturing and defence, particularly in the contemporary landscape marked by the widespread use of low-cost unmanned aerial vehicles, commonly known as drones, in modern battle scenarios. He notes that a swarm of inexpensive drones can swiftly deplete missile resources, leaving military assets and vehicles with diminished firing power for missions critical to combat.
In this context, high-power fibre lasers emerge as a strategic counter due to their low cost per shot and the rapidity of light action. Cheap drones exploit what is termed an asymmetric advantage—overpowering more expensive, high-tech systems through sheer numerical superiority—and low-cost-per-shot lasers help neutralise that advantage.
The researcher emphasises the unique role of high-power fibre lasers in providing a viable long-term defence solution, aligning with the concept of asymmetric advantage. This capability not only safeguards against the challenges posed by cheap drones but also aligns with the objectives outlined in the Defence Strategic Review and AUKUS Pillar 2 objectives, offering a deterrent effect that is integral to defence strategies.
Dr Ori Henderson-Sapir, a project investigator at UoA’s Institute for Photonics and Advanced Sensing, places this achievement in the broader context of Australia’s historical prowess in developing innovative fibre optics technologies. He sees this research as propelling Australia into a world-leading position for the next generation of high-power fibre lasers, with applications extending beyond defence to contribute to new scientific discoveries.
The researchers, having successfully demonstrated their technology in fibre lasers, are poised to share their findings at Photonics West, a premier international conference on photonics technology scheduled for early 2024. This platform will offer a global stage for presenting their advancements, fostering collaboration, and advancing the integration of high-power fibre lasers into diverse fields.
The collaborative efforts of researchers from UniSA, UoA, and Yale University have yielded a transformative breakthrough in the realm of optical science. Their innovative approach to increasing the power of fibre lasers, coupled with maintaining beam quality, opens new frontiers for applications ranging from defence against drones to scientific exploration. The implications of this research extend beyond national boundaries, positioning Australia as a frontrunner in the development of cutting-edge fibre optic technologies with global significance.
Australian researchers’ breakthrough in fibre laser technology, achieving a three-to-nine times power increase without compromising beam quality, holds significant implications for national defence. With a focus on countering low-cost drones, this innovation aligns with the Defence Strategic Review and AUKUS Pillar 2 objectives. The development, a collaboration between the University of South Australia, the University of Adelaide, and Yale University, positions Australia as a global leader in cutting-edge defence technology.
The government’s emphasis on technological advancements, economic implications, and international collaboration underscores the broader impact of this breakthrough on national security and strategic innovation initiatives.
In an era marked by escalating cyber threats, the Cybersecurity and Infrastructure Security Agency (CISA) is spearheading a pioneering initiative to fortify the resilience of the nation’s critical infrastructure. Over the past few years, the frequency and impact of cyberattacks have surged, disrupting vital operations across various sectors. Notable incidents, such as the Colonial Pipeline ransomware attack, have underscored the vulnerability of critical infrastructure, prompting a proactive response from CISA.
Recognising the evolving threat landscape, CISA has unveiled a groundbreaking pilot programme tailored to provide cybersecurity shared services on a voluntary basis to entities within critical infrastructure sectors. The initiative comes in the wake of escalating cyber-physical attacks that have demonstrated the potential to disrupt essential functions and, in extreme cases, threaten human life.
Having served as a managed service provider for the federal civilian government, CISA is leveraging its experience and expertise to extend support to non-federal organisations grappling with cybersecurity risks. Empowered by a new congressional authority, CISA aims to deliver enterprise cybersecurity solutions that enhance the resilience of critical infrastructure and contribute to risk reduction, cost savings, and standardisation.
A vital component of this programme is deploying CISA’s Protective Domain Name System (DNS) Resolver to pilot participants. Formerly exclusive to federal civilian agencies, this proven and cost-effective solution utilises U.S. government and commercial threat intelligence to preemptively block systems from connecting to known or suspected domains. The success of CISA’s Protective DNS service is evident in its prevention of nearly 700 million connection attempts from federal agencies to malicious domains since 2022, effectively mitigating risks associated with common cyber threats like ransomware, phishing, and malicious redirects.
By expanding the accessibility of its highly scalable Protective DNS service, CISA is extending critical cybersecurity protections to “Target Rich, Resource Poor” entities within the critical infrastructure landscape. This strategic move aims to provide essential safeguards that have proven instrumental in reducing enterprise risk across federal government agencies.
The ongoing pilot programme involves the identification of critical infrastructure entities interested in adopting CISA-provided commercial shared services. This phase serves to stress-test service delivery mechanisms, demonstrate the scalability of cybersecurity services, and establish CISA’s ability to efficiently acquire, deploy, and operate these services on a large scale. As part of its ‘Target Rich, Resource Poor’ strategy, CISA is collaborating with entities in healthcare, water, and K-12 education sectors during the initial phase, with plans to extend services to up to 100 entities by the end of the year.
In addition to technical deployment, CISA is fostering engagement through roundtables and information sessions with critical infrastructure partners across all sectors and regions. This proactive approach aims to comprehensively understand their unique needs, challenges, and existing capabilities, allowing CISA to tailor its shared services effectively. The insights garnered from these discussions, combined with the results of the Protective DNS pilot, will guide efforts to enhance support for the nation’s critical infrastructure organisations.
As the designated Cyber Defence Agency for the United States, CISA believes that delivering cost-effective, scalable, and innovative cybersecurity solutions to critical infrastructure entities is crucial to fulfilling its national cyber mission. The dynamic nature of the cyber threat environment underscores the urgency of collective cyber defence, and CISA stands ready to meet the evolving challenges, supporting entities in safeguarding the digital backbone of the nation.