Sensor technology is no longer confined to government or private-sector niches. It is now seamlessly integrated into the nation's security infrastructure, with capabilities that significantly enhance and fortify the defence apparatus. By leveraging this advancement, integrated sensor systems play a pivotal role in monitoring and safeguarding various aspects of national security, offering unparalleled accuracy and efficiency.

Its cutting-edge features empower authorities to proactively detect and respond to potential threats, ensuring heightened situational awareness. The comprehensive integration of this advanced sensor technology spans critical sectors, including border surveillance, cybersecurity, intelligence gathering, and infrastructure protection.
Besides bolstering national security, the integration of advanced sensor technology in the vessel also underscores a commitment to environmental friendliness. In an era when climate change and environmental conservation are critical global priorities, the Bluebottle sets a precedent by aligning military technology with eco-friendly practices.
The New Zealand Government is set to trial an Unmanned Surface Vessel (USV), offering a versatile platform for various roles. His Majesty's New Zealand Ship (HMNZS) Aotearoa is transporting the Bluebottle, a USV designed and manufactured by an Australian USV company, from Sydney to Auckland. Once operational, the Bluebottle will undertake maritime tasks at sea without fuel or onboard personnel, marking a significant step in leveraging digital technology for naval operations.
The Bluebottle, powered by solar, wind, or wave energy, represents a cutting-edge approach to maritime autonomy. It is also equipped with a retractable rigid sail for wind propulsion. The vessel incorporates photo-electric cells on the sail to drive its motor. In the absence of sunlight and wind, the Bluebottle employs a unique flipper and rudder device for steering and propulsion. With a top speed of five knots and the ability to operate at sea indefinitely in challenging wave conditions (up to sea state 7), the USV brings a new dimension to autonomous maritime operations.
The company's innovative design has already garnered attention and success, with multiple USVs utilised by the Australian Defence Force and collaborations with agencies such as the Australian Border Force and various energy and scientific organisations. The vessel's capabilities include fishery protection, border protection, and meteorological data collection, showcasing its potential for multifaceted applications.
Critical sensors, including radar, electro-optic, and infrared cameras, ensure the USV’s safe and effective operation, allowing for system control and the identification of other vessels. The Bluebottle will be monitored and operated from a control room at the Devonport Naval Base, utilising mobile phone signals near the shore and high- and low-bandwidth satellite communication when offshore.
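Purely as an illustrative sketch (the actual control software is not described here, and the thresholds and link names below are assumptions, not Bluebottle specifications), the shore/offshore communications arrangement could be modelled as a simple link-selection rule:

```python
# Illustrative only: toy link selection for a USV telemetry uplink.
# The 30 km cellular range and 1 KB payload cutoff are assumed values.

def select_link(distance_from_shore_km: float, payload_bytes: int) -> str:
    """Pick a communications link based on range and payload size."""
    if distance_from_shore_km <= 30:           # within mobile-phone coverage
        return "mobile"
    if payload_bytes <= 1_000:                 # small telemetry packets
        return "satellite-low-bandwidth"
    return "satellite-high-bandwidth"          # imagery, large sensor dumps

print(select_link(5, 500))          # → mobile
print(select_link(200, 400))        # → satellite-low-bandwidth
print(select_link(200, 5_000_000))  # → satellite-high-bandwidth
```

A real implementation would also handle link loss and store-and-forward buffering, but the core idea is the same: fall back from cheap, high-bandwidth links near shore to satellite channels offshore.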
The deployment of the USV aligns with the expansive nature of New Zealand's Exclusive Economic Zone (EEZ), the fifth-largest in the world at over four million square kilometres. Commodore Garin Golding, the RNZN's Maritime Component Commander, expressed excitement about the potential of the USV in covering the vast oceanic territory and fulfilling search and surveillance tasks efficiently.
Golding noted the positive outcomes observed in partner militaries using unmanned drone aircraft and vessels for similar purposes. The Bluebottle's autonomous capabilities, demonstrated through extended activities in support of the Australian Government, make it an ideal candidate for addressing New Zealand's maritime challenges.
Commander Andy Bryant, the RNZN's Autonomous Systems Staff Officer, underscored the USV's versatility, emphasising its ability to be transported by trailer to various locations in New Zealand. Launching and recovering the vessel from boat ramps, or using ship cranes for deployment during overseas operations, highlights the adaptability and accessibility of the Bluebottle.
As New Zealand embraces the potential of the Bluebottle USV, this trial represents a pivotal moment in leveraging digital technology for maritime operations. The fusion of autonomy, renewable energy, and advanced sensors positions the USV as a key player in enhancing naval surveillance, security, and efficiency.
Singapore’s Senior Minister of State for Defence, Heng Chee How, and Senior Minister of State for Communications and Information and Health, Dr Janil Puthucheary, recently visited the Critical Infrastructure Defence Exercise (CIDeX) 2023, underscoring the government’s commitment to fortifying national cybersecurity.

The exercise, held at the National University of Singapore School of Computing, witnessed over 200 participants engaging in operational technology (OT) critical infrastructure defence training.
Organised by the Digital and Intelligence Service (DIS) and the Cyber Security Agency of Singapore (CSA), with support from iTrust/SUTD and the National Cybersecurity R&D Laboratory (NCL), CIDeX 2023 marked a collaborative effort to enhance Whole-Of-Government (WoG) cyber capabilities. The exercise focused on detecting and countering cyber threats to both Information Technology (IT) and OT networks governing critical infrastructure sectors.
This year’s edition boasted participation from DIS, CSA, and 24 other national agencies across six Critical Information Infrastructure (CII) sectors. With an expanded digital infrastructure comprising six enterprise IT networks and three new OT testbeds, participants operated across six OT testbeds spanning key sectors: power, water, telecom, and aviation.
CIDeX 2023 featured Blue Teams, composed of national agency participants serving as cyber defenders, defending their digital infrastructure against simulated cyber-attacks launched by a composite Red Team comprising DIS, CSA, DSTA, and IMDA personnel. The exercises simulated attacks on both IT and OT networks, including scenarios such as overloading an airport substation, disrupting water distribution, and shutting down a gas plant.
The exercise provided a platform for participants to hone their technical competencies, enhance collaboration, and share expertise across agencies. Before CIDeX, participants underwent a five-day hands-on training programme at the Singapore Armed Forces (SAF)’s Cyber Defence Test and Evaluation Centre (CyTEC) at Stagmont Camp, ensuring readiness for cyber defence challenges.
On the sidelines of CIDeX 2023, the DIS solidified cyber collaboration by signing Memorandums of Understanding (MoUs) with key technology sector partners, expanding its partnerships beyond the agreement signed with Microsoft earlier in the year.
Senior Minister Heng emphasised the importance of inter-agency cooperation, stating, “CIDeX is a platform where we bring together many agencies throughout the government to come together to learn how to defend together.” He highlighted the collective effort involving 26 agencies and over 200 participants, acknowledging the significance of unity in cybersecurity.
Dr Janil echoed this sentiment, emphasising CIDeX’s role in the Whole-of-Government (WoG) cyber defence effort. He remarked, “Defending Singapore’s cyberspace is not an easy task, and it is a team effort.”
He commended the strong partnership between the Cyber Security Agency of Singapore and the Digital and Intelligence Service, recognising the exercise as a crucial element in strengthening the nation’s digital resilience and national cybersecurity posture.
By leveraging collaboration, innovation, and a robust defence strategy, Singapore aims not just to protect its critical infrastructure but to set a global standard in cybersecurity practices.
CIDeX 2023 serves as a compelling embodiment of Singapore’s unwavering dedication to maintaining a leadership position in cybersecurity practices. This strategic exercise underscores the nation’s commitment to cultivating collaboration and fortifying its resilience against continually evolving cyber threats.
Beyond a training ground for sharpening the skills of cyber defenders, CIDeX 2023 encapsulates the government’s profound commitment to adopting a robust, collaborative, and forward-thinking approach to safeguarding the integrity and security of the nation’s critical infrastructure in the dynamic landscape of the digital age.
The 21st century is frequently called the age of Artificial Intelligence (AI), prompting questions about its societal implications. It actively transforms numerous processes across various domains, and research ethics (RE) is no exception. Multiple challenges, encompassing accountability, privacy, and openness, are emerging.

Research Ethics Boards (REBs) have been instituted to guarantee adherence to ethical standards throughout research. This scoping review seeks to illuminate the challenges posed by AI in research ethics and assess the preparedness of REBs in evaluating these challenges. Ethical guidelines and standards for AI development and deployment are essential to address these concerns.
To sustain this awareness, the Oak Ridge National Laboratory (ORNL), a part of the Department of Energy, has joined the Trillion Parameter Consortium (TPC), a global collaboration of scientists, researchers, and industry professionals. The consortium aims to address the challenges of building large-scale artificial intelligence (AI) systems and to advance trustworthy and reliable AI for scientific discovery.
ORNL’s collaboration with TPC aligns seamlessly with its commitment to developing secure, reliable, and energy-efficient AI, complementing the consortium’s emphasis on responsible AI. With over 300 researchers utilising AI to address Department of Energy challenges and hosting the world’s most powerful supercomputer, Frontier, ORNL is well-equipped to significantly contribute to the consortium’s objectives.
Leveraging its AI research and extensive resources, the laboratory will be crucial in addressing challenges such as constructing large-scale generative AI models for scientific and engineering problems. Specific tasks include creating scalable model architectures, implementing effective training strategies, organising and curating data for model training, optimising AI libraries for exascale computing platforms, and evaluating progress in scientific task learning, reliability, and trust.
TPC strives to build an open community of researchers developing advanced large-scale generative AI models for scientific and engineering progress. The consortium plans to voluntarily initiate, manage, and coordinate projects to prevent redundancy and enhance impact. Additionally, TPC seeks to establish a global network of resources and expertise to support the next generation of AI, uniting researchers focused on large-scale AI applications in science and engineering.
Prasanna Balaprakash, ORNL R&D staff scientist and director of the lab’s AI Initiative, said, “ORNL envisions being a critical resource for the consortium and is committed to ensuring the future of AI across the scientific spectrum.”
Further, the United Nations Educational, Scientific and Cultural Organisation (UNESCO), an international organisation that supports education, science, and culture, has established ten principles of AI ethics for scientific research.
- Beneficence: AI systems should be designed to promote the well-being of individuals, communities, and the environment.
- Non-maleficence: AI systems should avoid causing harm to individuals, communities, and the environment.
- Autonomy: Individuals should have the right to control their data and to make their own decisions about how AI systems are used.
- Justice: AI systems should be designed to be fair, equitable, and inclusive.
- Transparency: AI systems’ design, operation, and outcomes should be transparent and explainable.
- Accountability: There should be clear lines of responsibility for developing, deploying, and using AI systems.
- Privacy: The privacy of individuals should be protected when data is collected, processed, and used by AI systems.
- Data security: Data used by AI systems should be secure and protected from unauthorised access, use, disclosure, disruption, modification, or destruction.
- Human oversight: AI systems should be subject to human management and control.
- Social and environmental compatibility: AI systems should be designed to be compatible with social and ecological values.
ORNL’s AI research portfolio dates back to 1979, with the launch of the Oak Ridge Applied Artificial Intelligence Project. Today, the AI Initiative focuses on developing secure, trustworthy, and energy-efficient AI across various applications, showcasing the laboratory’s commitment to advancing AI in fields ranging from biology to national security. The collaboration with TPC reinforces ORNL’s dedication to driving breakthroughs in large-scale scientific AI, in step with the global agenda for implementing AI ethics such as the UNESCO principles.
In a significant move aimed at fortifying the nation’s technological landscape, the Vietnam Authority of Information Security (AIS) has underscored the non-negotiable nature of cybersecurity in the current digital landscape.
Emphasising the indispensability of robust cybersecurity measures, the AIS recommended stringent adherence to these protocols across agencies, institutions, and businesses. In today’s digital landscape, the confluence of telecommunications and IT has redefined the contours of security, compelling institutions and businesses to recalibrate their approach to information security.

A workshop dedicated to IT and information security held in Hanoi spotlighted the criticality of information security investment for the digital future. A collaborative effort between AIS, Viettel Cyber Security, and IEC Group, the summit aimed at empowering institutions and businesses to proactively anticipate risks and navigate confidently through the complexities of the digital landscape.
Highlighting the severity of the situation, Nguyen Son Hai, CEO of Viettel Cyber Security, observed that the digital transformation wave brings a torrent of information security risks. Viettel Threat Intelligence, for instance, reported 12 million hacked accounts within Vietnam, with 48 million data records compromised and traded in the cyberspace market. Moreover, the stark reality is that numerous entities remain unaware that they are under cyberattack.
Financial fraud looms large on this precarious horizon. An alarming revelation showcases the exploitation of 5,800 domain names masquerading as commercial banks, e-wallets, manufacturing firms, and retail giants, posing a severe threat to users’ assets through deceitful means.
Ransomware, an escalating menace, presents formidable challenges to organisations and businesses. Its disruptive potential can cripple entire operations, with cybercriminals extorting exorbitant sums, sometimes reaching millions of dollars, from their victims.
Nguyen Son Hai highlighted the 300 GB of encrypted organisational data published on the Internet, indicating that the actual figures are likely higher, underlining the gravity of the situation.
Tran Dang Khoa from AIS stressed the perennial existence of information security risks, underscoring the urgent need for effective solutions. He outlined five pivotal criteria for cybersecurity solutions: legality, effectiveness, appropriateness, comprehensiveness, and a crucial emphasis on utilising solutions originating from Vietnam.
The paramount importance of legal compliance within cybersecurity frameworks cannot be overstated. Organisations providing online services bear a heightened responsibility to ensure compliance, as information security is mandated by law. Straying from these regulations can render entities liable in the event of security breaches.
Despite substantial investments in sophisticated protection systems, the efficacy of these measures remains questionable if they cannot detect and avert cyberattacks. The challenge lies in optimising system efficiency while rationalising costs – an arduous task that cybersecurity firms endeavour to address.
Khoa acknowledged the need to address existing vulnerabilities alongside fortifying against new threats. Neglecting known risks within systems leaves openings that cyber assailants can exploit at opportune moments, posing significant dangers. Pre-emptive measures must focus on rectifying known vulnerabilities before investing in additional protective tools.
Khoa highlighted that vulnerabilities often emanate not from direct cyberattacks but from individuals within organisations possessing inadequate technological proficiency. Exploiting these individuals can cascade attacks throughout systems, amplifying vulnerabilities exponentially.
Empowering all personnel within organisations with robust cybersecurity knowledge and skills emerges as a pivotal defence mechanism. Khoa accentuated the criticality of imparting such knowledge to safeguard information systems comprehensively.
Furthermore, advocating for the utilisation of ‘Make in Vietnam’ products, solutions, and services assumes significance. Homegrown solutions tailored to address the specific intricacies of Vietnamese organisations offer unique advantages. These domestic solutions not only offer timely support but also demonstrate a deep understanding of local challenges, aiding in swift problem resolution.
As businesses and institutions navigate this dynamic digital terrain, the proactive integration of these strategies is pivotal in safeguarding against the multifaceted threats that loom large in the era of digital proliferation.
All institutions rely on IT to deliver services. Disruption, degradation, or unauthorised alteration of information and systems can impact an institution’s condition, core processes, and risk profile. Furthermore, organisations are expected to make quick decisions given the rapid pace of digital transformation, and to stay competitive they must treat data as a crucial resource for tackling this challenge.
Hence, data protection is paramount in safeguarding the integrity and confidentiality of this invaluable resource. Organisations must implement robust security measures to prevent unauthorised access, data breaches, and other cyber threats that could compromise sensitive information.
Prasert Chandraruangthong, Minister of Digital Economy and Society, supports the National Agenda on fortifying personal data protection. Asst Prof Dr Veerachai Atharn, Assistant Director of the National Science and Technology Development Agency, Science Park, and Dr Siwa Rak Siwamoksatham, Secretary-General of the Personal Data Protection Committee, gave welcome speeches at the training, which aims to bolster knowledge of data protection among the citizens of Thailand.
Data protection is not only an organisational obligation but also an individual responsibility, Minister Prasert emphasised. Thailand has collaboratively developed a comprehensive plan of measures to foster a collective defence against cyber threats to data privacy.
The Ministry of Digital Economy and Society and the Department of Special Investigation (DSI) will expedite efforts to block illegal trading of personal information. Offenders will be actively pursued, prosecuted, and arrested to ensure a swift and effective response in safeguarding the privacy and security of individuals’ data.
This strategy underscores the government’s commitment to leveraging digital technology to fortify data protection measures and create a safer online environment for all citizens by partnering with other entities.
Many countries worldwide share these cybersecurity concerns. In Thailand’s neighbouring country, Indonesia, the government regards data privacy as a crucial aspect that demands attention. Indonesia has recognised the paramount importance of safeguarding individuals’ privacy and has taken significant steps to engage stakeholders in a collaborative effort to fortify children’s online security.
Nezar Patria, Deputy Minister of the Ministry of Communication and Information of Indonesia, observed that children encounter an abundance of online information and content, which can expose them to unwanted material and growing risks as artificial intelligence evolves.
Patria stressed the crucial role of AI, emphasising the importance of implementing automatic content filters and moderation to counteract harmful content. AI can be used to detect cyberbullying through security measures and by recognising the patterns of cyberbullying perpetrators. It can also identify perpetrators of online violence through behavioural detection in the digital space and enhance security and privacy protection. Moreover, AI can assist parents in monitoring screen time, ensuring that children maintain a balanced and healthy level of engagement with digital devices.
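As a minimal sketch of the automatic content filtering mentioned above (illustrative only: the patterns and function below are hypothetical, and production systems rely on trained classifiers rather than fixed word lists):

```python
# Illustrative only: a toy keyword/pattern filter of the kind that could back
# an automatic content-moderation pipeline. The pattern list is an assumption.
import re

HARMFUL_PATTERNS = [
    re.compile(r"\bstupid\b", re.IGNORECASE),
    re.compile(r"\bnobody likes you\b", re.IGNORECASE),
]

def flag_message(text: str) -> bool:
    """Return True if the message matches any configured harmful pattern."""
    return any(pattern.search(text) for pattern in HARMFUL_PATTERNS)

print(flag_message("You are so stupid"))    # → True
print(flag_message("See you at practice"))  # → False
```

Flagged messages would then be routed to human moderators or hidden pending review; the AI systems Patria describes extend this idea with learned models that generalise beyond fixed phrases.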
Conversely, the presence of generative AI technology, such as deepfakes, enables the manipulation of photo or video content, potentially leading to the creation of harmful material with children as victims. Patria urged collaborative discussions among all stakeholders involved in related matters to harness AI technology for the advancement and well-being of children in Indonesia.
In the realm of digital advancements, cybersecurity is now a top priority. Through public awareness campaigns, workshops, and training initiatives, nations aim to empower citizens with the knowledge to identify, prevent, and respond to cyber threats effectively. This ongoing commitment reflects these nations’ dedication to ensuring a secure and thriving digital future for their citizens and the broader digital community.
The introduction of the E-Travel Customs System at Ninoy Aquino International Airport Terminal 1 by the Bureau of Customs (BOC) in conjunction with key stakeholders represents a significant stride in the direction of enhancing national security and streamlining customs processes in the Philippines.

This transformative system, developed in coordination with the Bureau of Immigration (BI), the Banko Sentral ng Pilipinas (BSP), the Anti-Money Laundering Council (AMLC), and the Department of Information and Communications Technology (DICT), marks a significant leap in digitising data collection processes for travellers and crew members arriving in and departing from the Philippines.
The integration of the Electronic Customs Baggage Declaration Form (e-CBDF) and Electronic Currencies Declaration Form (e-CDF) into the BI’s eTravel System is a pivotal step in the evolution of border control practices. This collaborative initiative aims to optimise customs procedures, bolster health surveillance, and facilitate in-depth economic data analysis.
The E-Travel Customs System, a unified digital data collection platform, streamlines the passenger experience at airport terminals. Its standout feature is the integration of the Electronic Customs Baggage and Currency Declaration interface, formerly part of the BOC’s I-Declare System, introduced last year.
Travellers and crew members can now utilise a user-friendly, single web portal that consolidates the border control requirements of the Bureau of Quarantine, BOC, BI and the BSP.
This not only enhances the overall passenger experience but also enables the BOC to receive advanced information for effective risk profiling. Besides, the timely sharing of information with AMLC and BSP strengthens the nation’s commitment to combat money laundering and ensure financial security.
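To illustrate what rule-based risk profiling over advance declaration data can look like (a hypothetical sketch: the field names, rules, and weights below are assumptions for illustration and do not reflect the BOC's actual criteria):

```python
# Illustrative only: a toy additive risk score over an electronic declaration.
# All thresholds and weights are invented for the example.

def risk_score(declaration: dict) -> int:
    """Sum simple rule weights; higher scores suggest closer inspection."""
    score = 0
    if declaration.get("currency_usd", 0) > 10_000:    # large cash movement
        score += 3
    if declaration.get("goods_value_usd", 0) > 50_000:  # high-value goods
        score += 2
    if not declaration.get("return_ticket", True):      # one-way travel
        score += 1
    return score

print(risk_score({"currency_usd": 12_000, "return_ticket": False}))  # → 4
```

In practice, advance data arriving before the traveller allows such scoring to run ahead of time, so officers can focus attention where the score is high rather than screening everyone equally.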
BOC Commissioner Bienvenido Y Rubio expressed confidence in the E-Travel Customs System’s potential to revolutionise customs processes, stating, “This collaborative initiative demonstrates our commitment to innovation and efficiency in customs management.”
The E-Travel Customs System will play a pivotal role in ensuring the security of the borders and fostering a seamless travel experience for all. Commissioner Rubio added that the bureau is dedicated to advancing customs practices, aligning with global standards, and safeguarding the interests of the nation.
The BOC cited that the E-Travel Customs System stands as a testament to the government’s dedication to providing cutting-edge solutions for border control, aligning with international standards, and advancing towards a more secure and efficient customs environment. The collaborative efforts of the BOC, BI, AMLC, BSP, and DICT signify a commitment to innovation, ensuring that the Philippines remains at the forefront of modern customs practices.
The E-Travel Customs System represents a paradigm shift in customs management, transcending mere technological enhancement. It stands as a strategic initiative meticulously designed to reshape and fortify customs practices, infusing them with agility, heightened security, and alignment with global best practices. This innovative system is not merely an upgrade; it is a holistic approach aimed at ushering in a new era of efficiency and adaptability in customs operations.
As the Philippines embraces this technological leap into the future of border control, it reaffirms its unwavering commitment to establishing a customs environment that goes beyond traditional boundaries. The system’s multifaceted capabilities, ranging from streamlined data collection to real-time risk profiling, showcase its transformative potential.
By prioritising technological advancements, the nation aims to enhance the overall travel experience, reduce procedural bottlenecks, and strengthen its position in global efforts to ensure secure and seamless border management.
According to a global cybersecurity company, the financial sector saw an increase in cybercriminals targeting online payment processing systems in 2020. This trend has grown more significant as shopping shifts into the online realm, leaving the sector highly vulnerable to cybercrime. As Christmas and New Year approach and shopping intensifies substantially, cybercriminals see the season as a golden opportunity to launch their attacks.
In light of this challenge, amidst the holiday shopping season, the U.S. Department of the Treasury’s Office of Cybersecurity and Critical Infrastructure Protection (OCCIP) is arming consumers with valuable insights on how to stay cyber-secure and avoid falling prey to online scams. This advisory not only recognises the challenges posed by the rise of artificial intelligence (AI) in cybercrime but also empowers individuals to take proactive measures to protect themselves.
This holiday season, as people immerse themselves in the festive spirit, OCCIP urges Americans to stay vigilant, be proactive, and respond immediately if targeted by scammers or fraud. Rather than emphasising the negative aspects of potential cyber threats, OCCIP’s approach is geared towards equipping consumers with the knowledge and tools to navigate the digital landscape securely.
Deputy Assistant Secretary for OCCIP, Todd Conklin, emphasises the need for consumers to exercise caution and critical thinking during online transactions. He noted that every year, cybercriminals are getting more creative to take advantage of consumers, and this year is no different with the rise of AI. However, Conklin encouraged individuals to approach online deals discerningly instead of fostering fear. The advisory recommends thinking, researching, and consulting with trusted individuals before purchasing, cultivating a positive and empowering mindset.
To further emphasise the positive message, the OCCIP advisory raises awareness about potential risks and offers a comprehensive set of constructive tips for consumers. In empowering individuals with self-efficacy, the advisory aims to instil confidence in consumers as they navigate the online marketplace during the festive season.
Moreover, the OCCIP advisory takes a proactive stance by furnishing victims of fraud with practical steps to mitigate damages and losses. This approach is strategically designed to reassure individuals that, even in the unfortunate event of falling prey to a scam, there are actionable measures they can promptly undertake to rectify the situation and minimise the impact on their finances.
The OCCIP recognises the evolving landscape of cyber threats, especially with cyber criminals’ increased integration of artificial intelligence. By acknowledging the challenges posed by AI-driven phishing attacks, the advisory positions itself as a warning system and a guide for consumers to navigate the complexities of an online environment fraught with potential risks.
As part of its positive outreach, OCCIP emphasises the importance of trusting one’s instincts and not succumbing to pressure while making online transactions. The advisory suggests that if an online deal appears too good to be true, it likely is. This messaging aims to empower consumers with the confidence to make informed decisions and resist the tactics employed by cybercriminals.
OCCIP provides the public with a copy of the advisory in a cooperative spirit of information sharing. This transparency and openness foster a sense of community and shared responsibility in tackling cyber threats. By encouraging individuals to report fraud incidents to the Federal Trade Commission and the Federal Bureau of Investigation’s Internet Crime Complaint Center (IC3), OCCIP promotes a collective effort in combating cybercrime.
This news underscores that the responsibility for addressing the cyber threat is not solely on individuals but is a collective effort that people must tackle together. As the U.S. prepares for its festive holidays, other countries must also emphasise the importance of collectively addressing the cyber threat during this upcoming season. By implementing these measures collaboratively, OpenGov believes that cybersecurity measures will foster a safer online environment for everyone.
Liming Zhu and Qinghua Lu, leaders in the study of responsible AI at CSIRO and co-authors of Responsible AI: Best Practices for Creating Trustworthy AI Systems, delve into the realm of responsible AI through their extensive work and research.

Artificial Intelligence (AI), currently a major focal point, is revolutionising almost all facets of life, presenting entirely novel methods and approaches. The latest trend, Generative AI, has taken the helm, crafting content from cover letters to campaign strategies and conjuring remarkable visuals from scratch.
Global regulators, leaders, researchers and the tech industry grapple with the substantial risks posed by AI. Ethical concerns loom large due to human biases, which, when embedded in AI training, can exacerbate discrimination. Mismanaged data without diverse representation can lead to real harm, evidenced by instances like biased facial recognition and unfair loan assessments. These underscore the need for thorough checks before deploying AI systems to prevent such harmful consequences.
The looming threat of AI-driven misinformation, including deepfakes and deceptive content, is concerning for everyone, raising fears of identity impersonation online. The pivotal question remains: How do we harness AI’s potential for positive impact while effectively mitigating its capacity for harm?
Responsible AI involves the conscientious development and application of AI systems to benefit individuals, communities, and society while mitigating potential negative impacts, Liming Zhu and Qinghua Lu advocate.
The responsible-AI principles they advocate emphasise eight key areas for ethical AI practice. Firstly, AI should prioritise human, societal, and environmental well-being throughout its lifecycle, exemplified by its use in healthcare or environmental protection. Secondly, AI systems should uphold human-centred values, respecting rights and diversity; reconciling different user needs, however, poses challenges. Ensuring fairness is crucial to prevent discrimination, as highlighted by critiques of technologies like Amazon’s facial recognition.
Moreover, maintaining privacy protection, reliability, and safety is imperative. Instances like Clearview AI’s privacy breaches underscore the importance of safeguarding personal data and conducting pilot studies to prevent unforeseen harms, as witnessed with the chatbot Tay generating offensive content due to vulnerabilities.
Transparency and explainability in AI use are vital, requiring clear disclosure of AI limitations. Contestability enables people to challenge AI outcomes or usage, while accountability demands identification and responsibility from those involved in AI development and deployment. Upholding these principles can encourage ethical and responsible AI behaviour across industries, ensuring human oversight of AI systems.
Identifying problematic AI behaviour can be challenging, especially when AI algorithms drive high-stakes decisions impacting specific individuals. An alarming instance in the U.S. resulted in a longer prison sentence determined by an algorithm, showcasing the dangers of such applications. Qinghua highlighted the issue with “black box” AI systems, where users and affected parties lack insight into and means to challenge decisions made by these algorithms.
Liming emphasised the inherent complexity and autonomy of AI, making it difficult to ensure complete compliance with responsible AI principles before deployment. Therefore, user monitoring of AI becomes crucial. Users must be vigilant and report any violations or discrepancies to the service provider or authorities.
Holding AI service and product providers accountable is essential in shaping a future where AI operates ethically and responsibly. This call for vigilance and action from users is instrumental in creating a safer and more accountable AI landscape.
Australia is committed to the fair and responsible use of technology, especially artificial intelligence. During discussions held on the sidelines of the APEC Economic Leaders Meeting in San Francisco, the Australian Prime Minister unveiled the government’s commitment to responsibly harnessing generative artificial intelligence (AI) within the public sector.
The DTA-facilitated collaboration showcases the Australian Government’s proactive investment in preparing citizens for changes in the job landscape. With a six-month trial running from January to June 2024, Australia is among the first governments worldwide to deploy advanced generative AI services. The initiative enables APS staff to innovate with generative AI, aiming to overhaul government services and meet evolving Australian needs.