Vietnam, well known for its vibrant economy and youthful population, is poised to seize the transformative potential of generative artificial intelligence (GenAI). The country’s vigorous uptake of cutting-edge technology creates an ideal environment for GenAI development and implementation.

The Nation Survey 2023 highlighted Vietnam as a frontrunner in embracing GenAI, with an impressive 91% of surveyed individuals expressing interest in the technology, the highest among all markets surveyed. This enthusiasm positions Vietnam at the forefront of GenAI adoption, promising significant opportunities for growth and innovation.

According to recent data, the Vietnamese generative AI market is projected to reach US$153.80 million by 2024, with an anticipated annual growth rate (CAGR 2024-2030) of 23.20%. This growth trajectory is expected to result in a market volume of US$537.70 million by 2030.

Despite the significant growth of generative AI, industry leaders are proceeding with caution in its adoption. Multiple constraints, including cybersecurity concerns, privacy considerations and the complexities of governance and compliance, contribute to this guarded approach.

According to a study by the IBM Institute for Business Value (IBV), 84% of executives see cybersecurity risks as the main hurdle to adopting generative AI. The concerns surrounding generative AI-generated threats are particularly pronounced in Vietnam, given the country’s ongoing cybersecurity challenges.

The National Cyber Security Centre (NCSC) reported a significant surge in cyberattacks in 2023, recording a notable 13,900 incidents. This alarming statistic indicates a worrisome increase of 9.5% from the previous year, positioning Vietnam as the third highest in Southeast Asia for the number of cyberattacks.

Additionally, the use of generative AI applications can heighten data and privacy risks due to their reliance on large language models and the generation of new data. This introduces vulnerabilities such as bias, poor data quality, and risks of unauthorised access.

Given the security risks inherent in generative AI technology, organisations must bolster their cyber defences to safeguard valuable assets. Proactively addressing these concerns is pivotal for ensuring a safe and successful deployment. Careful consideration and robust measures are needed to ensure data and privacy protection throughout the AI lifecycle.

In Vietnam’s current landscape, organisations must enhance their protection against generative AI-related threats. Developing strategies and effective measures to address and mitigate these challenges is paramount.

To ensure the security of generative AI usage and readiness for AI integration, organisations should implement robust encryption and access controls. Additionally, clear incident response protocols and continuous monitoring are crucial for swiftly addressing potential security threats to AI training data. These measures enhance defence against unauthorised access, protecting the integrity and confidentiality of AI training data.
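
The controls described above can be illustrated with a minimal Python sketch. It is a hypothetical example, not a production design: HMAC integrity tags stand in for full encryption at rest (which would use a vetted cryptography library), and the role policy, key source, and record format are all assumed for illustration.

```python
import hashlib
import hmac
import os

# Hypothetical illustration: per-record integrity tags plus a simple
# role-based access check for AI training data. A real deployment would
# add encryption at rest and centralised key management.

SECRET_KEY = os.environ.get("TRAINING_DATA_KEY", "demo-key").encode()

# Role -> permitted actions (assumed policy, for illustration only)
ACCESS_POLICY = {
    "ml_engineer": {"read"},
    "data_steward": {"read", "write"},
}

def integrity_tag(data: bytes) -> str:
    """HMAC-SHA256 tag so tampering with stored training data is detectable."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, tag: str) -> bool:
    return hmac.compare_digest(integrity_tag(data), tag)

def authorise(role: str, action: str) -> bool:
    return action in ACCESS_POLICY.get(role, set())

record = b'{"prompt": "...", "label": "..."}'
tag = integrity_tag(record)
assert verify(record, tag)                    # untampered data passes
assert not verify(record + b"x", tag)         # any modification is detected
assert authorise("data_steward", "write")
assert not authorise("ml_engineer", "write")  # least privilege enforced
```

In this sketch, continuous monitoring would amount to alerting whenever `verify` fails or `authorise` denies a request, which maps onto the incident response protocols the paragraph describes.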

Deploying advanced anomaly detection algorithms is crucial for securing AI model usage by identifying potential data or prompt leakage. Real-time alerting mechanisms for evasion, poisoning, extraction, or inference attacks also bolster overall defence against malicious activities, ensuring robust protection of AI systems.
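
As a toy illustration of this idea, the sketch below flags prompts that match secret-bearing patterns or that are statistical outliers in length versus recent traffic. The patterns, threshold, and traffic history are assumptions; production systems would use trained anomaly-detection models rather than these crude heuristics.

```python
import re
import statistics

# Hypothetical sketch: flag prompts that may leak secrets (pattern match)
# or that deviate sharply from recent traffic (possible extraction attempt).

SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]"),
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
]

def leaks_secret(prompt: str) -> bool:
    """Pattern-based check for obvious credential leakage in a prompt."""
    return any(p.search(prompt) for p in SECRET_PATTERNS)

def is_length_outlier(prompt: str, history: list[int], z_threshold: float = 3.0) -> bool:
    """Flag prompts far longer than recent traffic, a crude anomaly signal."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return False
    return (len(prompt) - mean) / stdev > z_threshold

recent_lengths = [40, 55, 48, 60, 52, 45, 58, 50]  # assumed recent traffic
assert leaks_secret("my api_key = sk-123, please summarise")
assert not leaks_secret("summarise this quarterly report")
assert is_length_outlier("x" * 5000, recent_lengths)
assert not is_length_outlier("short prompt", recent_lengths)
```

Real-time alerting, as described above, would wire detections like these into a monitoring pipeline so that suspected evasion, poisoning, extraction, or inference attempts are surfaced immediately.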

To strengthen defences against emerging threats, organisations can utilise behavioural defences and multi-factor authentication to guard against new AI-generated attacks such as personalised phishing, AI-generated malware, and fake identities. Incorporating these proactive security measures enhances resilience and effectively mitigates the evolving landscape of AI-driven threats, ensuring a strong and adaptive security posture.
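
One multi-factor authentication building block mentioned above is the time-based one-time password. The sketch below implements TOTP per RFC 6238 using only the standard library; the secret is a demo value, and a real rollout would add secure enrolment, rate limiting, and the behavioural defences the article describes.

```python
import base64
import hashlib
import hmac
import struct
import time

# Minimal TOTP (RFC 6238) sketch as one MFA building block.
# The shared secret below is a hypothetical demo value only.

def totp(secret_b32: str, for_time: float, digits: int = 6, step: int = 30) -> str:
    """Derive the one-time code for the 30-second window containing for_time."""
    key = base64.b32decode(secret_b32)
    counter = int(for_time // step)
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_totp(secret_b32: str, submitted: str, now: float) -> bool:
    """Accept the current or previous window to tolerate small clock drift."""
    return any(
        hmac.compare_digest(totp(secret_b32, now - drift), submitted)
        for drift in (0, 30)
    )

SECRET = "JBSWY3DPEHPK3PXP"  # demo secret, not for production
now = time.time()
code = totp(SECRET, now)
assert len(code) == 6
assert verify_totp(SECRET, code, now)
```

Because the code changes every 30 seconds and is derived from a shared secret, a phished password alone is not enough to authenticate, which is precisely why MFA blunts the personalised-phishing attacks the paragraph warns about.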

In the uncertain and evolving GenAI landscape, organisations are actively seeking trustworthy technology partners to collaboratively develop and implement secure strategies. The OpenGov Asia Breakfast Insight, held on 19 March 2024 at Sofitel Saigon Plaza, Vietnam, delved into the latest trends and challenges in cybersecurity, particularly in the context of Vietnam’s adoption of generative artificial intelligence (GenAI).

Experts and industry leaders discussed the importance of implementing robust security measures, such as behavioural defences and multi-factor authentication, to mitigate emerging threats like personalised phishing, AI-generated malware, and fake identities. These discussions were vital to maintaining a resilient and adaptable security posture in the era of AI.

Opening Remarks

Mohit Sagar: robust security measures not only protect sensitive information and prevent manipulation but also ensure the responsible development and long-term viability of AI

Artificial Intelligence (AI) is rapidly emerging as a transformative force in today’s landscape, with 84% of organisations citing cybersecurity risks as the primary obstacle to its adoption. Mohit Sagar, CEO and Editor-In-Chief of OpenGov Asia, emphasised the importance of navigating the evolving regulatory landscape and AI governance frameworks to mitigate these risks.

However, even though there has been a significant expansion of generative AI, organisations are moving carefully. This tentative approach is due to many issues, including cybersecurity, privacy and the complexity, and sometimes ambiguity, of governance and compliance.

“In Vietnam, the apprehension surrounding AI-generated threats is particularly elevated due to the country’s ongoing cybersecurity challenges,” Mohit asserts. “Securing Artificial Intelligence is of paramount importance as it safeguards against potential threats that can compromise data integrity, ethical considerations, and the overall trustworthiness of AI systems.”

In 2023, the National Cyber Security Centre (NCSC) reported a significant surge in cyberattacks, reaching 13,900 incidents. This alarming statistic signifies a worrisome 9.5% increase from the previous year, positioning Vietnam as the third highest in Southeast Asia for the number of cyberattacks.

Cyber solutions are poised to lead the market, with a projected volume of US$204.60 million in 2024. Looking ahead, the cybersecurity market in Vietnam is expected to witness a robust CAGR of 15.21% from 2024 to 2032. The growth will be fueled by factors including increased internet usage, ongoing digital transformation, rising cyber threats, regulatory compliance, heightened public awareness, adoption of advanced technologies, infrastructure modernisation, and international collaborations.

Addressing current and future challenges comprehensively will position Vietnam to harness the benefits of AI while mitigating potential risks, fostering economic growth, and improving the quality of life for its citizens.

Mohit explains that robust security measures not only protect sensitive information and prevent manipulation but also ensure the responsible development and long-term viability of AI, fostering confidence in its adoption across diverse applications and industries.

The country can benefit from AI adoption in several ways:

  1. Preserving Data Integrity and Confidentiality: AI can help protect sensitive information and ensure that data remains secure and private.
  2. Mitigating Manipulation and Exploitation Risks: By implementing robust security measures, AI systems can be protected against manipulation and exploitation by malicious actors.
  3. Maintaining AI Resilience: Ensuring that AI systems are resilient to cyber threats and can continue to function effectively even in the face of attacks.
  4. Building Trust in AI Technology: Building trust among users and stakeholders by demonstrating the security and reliability of AI systems.
  5. Ensuring Long-Term Viability: Implementing measures to ensure that AI systems remain viable and effective over the long term.

Through collaboration between entities, including government agencies, private sector organisations, and academia, Vietnam can leverage collective expertise and resources to bolster its cybersecurity defences. By enhancing digital infrastructure, such as upgrading network systems and deploying advanced cybersecurity technologies, the nation can create a more secure environment for AI adoption.

Additionally, promoting ethical AI practices ensures transparency and accountability, building trust among citizens and stakeholders and ultimately strengthening resilience against cyber threats.

Vietnam should focus on promoting responsible AI use by implementing ethical standards, ensuring transparency in algorithms, educating stakeholders on ethical implications, and establishing regulatory frameworks to build societal trust and acceptance.

“In navigating the intricate landscape of AI, securing its integrity and ensuring transparency isn’t just a matter of protection,” Mohit concludes. “It’s about safeguarding trust, ethics and the very fabric of our digital future.”

Welcome Address

Khang Nguyen Tuan: AI plus automation enables cybersecurity teams to deploy human expertise where it is needed most

Khang Nguyen Tuan, Security FLM Leader, ASEAN at IBM, delved into the complexities surrounding artificial intelligence (AI), providing a nuanced definition of Ethical AI as the development and deployment of AI systems that prioritise fairness, transparency, accountability, and respect for human values.

AI ethics revolves around comprehending the ramifications of AI on individuals, groups, and society as a whole, aiming to ensure safe and responsible AI utilisation, mitigate potential risks associated with AI, and prevent harm.

He underscores the critical importance of raising awareness about Ethical AI, particularly in light of AI’s pervasive integration across all sectors. This emphasis comes as the global AI market is projected to experience substantial growth, with an annual increase of 19.6%, reaching a staggering US$500 billion by 2023.

While AI and automation offer significant benefits such as increased efficiencies, greater innovation, personalised services, and reduced burden on human workers, they also present new risks and impacts that need to be addressed. This underscores the importance of prioritising Ethical AI principles in AI development and deployment.

The impact of AI in the insurance sector, for instance, demonstrates how it can result in minority individuals receiving higher automotive insurance quotes, while in healthcare, biased algorithms have led to white patients being prioritised over sicker black patients for interventions. In law enforcement, algorithms used to predict recidivism can be biased against black defendants, assigning them higher risk scores than white counterparts even when controlling for factors like prior crimes, age, and gender.

To ensure Ethical AI, expertise in computer science, AI policy, and governance is essential, ensuring adherence to best practices and codes of conduct throughout system development and deployment. This multifaceted approach fosters a comprehensive understanding of ethical considerations, enabling the implementation of robust safeguards and mechanisms to uphold ethical principles in AI development and deployment.

“Taking proactive steps is crucial to managing unethical AI and staying ahead of upcoming regulations. Regardless of the stage of system development, measures can always be implemented to enhance the ethical standards of AI,” Khang says. “This is critical for companies to safeguard their reputation, assure compliance with evolving legislation, and deploy AI with increased confidence.”

Khang shares IBM’s proactive stance in promoting AI ethics and combating cyberattacks through AI technologies. IBM has developed a comprehensive framework for AI ethics, guiding data scientists and researchers to build AI systems that align with ethical principles and benefit society at large.

IBM’s Principles for Trust and Transparency serve as the cornerstone of their approach to AI ethics, influencing every aspect of AI development and deployment. These principles guarantee that IBM’s AI technologies are designed to enhance human intelligence, empowering individuals to achieve more while maintaining the highest standards of trustworthiness and transparency.

Moreover, IBM prioritises the active defence of AI-powered systems against adversarial attacks, aiming to minimise security risks and instil confidence in system outcomes. Khang emphasised IBM’s belief that AI should improve productivity and be accessible to all – not just a select few – underscoring the company’s commitment to democratising the benefits in the AI era.

“As we navigate the complexities of AI, expertise in computer science, AI policy, and governance becomes imperative to ensure adherence to best practices and codes of conduct throughout system development and deployment,” concludes Khang. “This approach not only safeguards against potential risks but also ensures the inclusive and fair deployment of technology.”

Technology Insight

Shaibal Saha: AI-enabled security and automation can contain breaches faster and more effectively

Shaibal Saha, IBM’s Asia Pacific Digital Trust Leader, underscores the significance of AI in the Asia Pacific region, emphasising its increasing presence and potential impact across various industries and sectors.

“Similar to transformative technologies like steam engines, computers, and the Internet in history, digital technology has profoundly reshaped human society at an unprecedented pace and scale in the past two decades,” Shaibal says. “It has significantly bolstered socio-economic creativity and growth.”

Amidst these transformative opportunities, the Asia-Pacific region has stepped into the golden age of the digital economy, experiencing GDP growth rates surpassing 5% in numerous Asian countries in 2022. Notably, APAC has emerged as the fastest-growing AI market worldwide.

Excluding Japan, APAC’s investments in new technologies such as AI accounted for close to 40% of its total information communication technology (ICT) investments by the end of 2023. This growth trajectory is anticipated to continue for at least the next decade, far outpacing the rest of the world, which maintains a growth rate of roughly 22%.

Despite the benefits of AI, significant concerns persist regarding the legal and ethical implications surrounding its implementation. Recent global data breaches have instilled widespread apprehension and reluctance towards data storage, deterring many potential users from venturing into unfamiliar technological landscapes. The challenges encountered in AI deployment and usage in APAC mirror those experienced worldwide.

“AI is useless without troves of data, but enterprises holding AI-processable data ought to ask a number of questions,” cautions Shaibal. “Given that most data used by AI is stored in the cloud, businesses must carefully consider their cloud storage provider’s security, support, and maintenance capabilities.”

Additionally, they should assess whether they are housing personal information, whether data has been de-identified or anonymised, and have robust data breach response plans in place. Alongside those considerations, businesses must address the ownership of such data and the data outputs containing proprietary rights.

Algorithms play a crucial role as the foundation of all systems, with many companies increasingly depending on them to make significant decisions. However, the potential for AI and algorithms to enhance business and social welfare also brings about material ethical risks.

Bias has been observed in the operations of some algorithms, prompting growing calls for a deeper understanding of their ethical implications. This includes advocating for transparency and providing more information regarding how these machines are trained and operate.

However, current privacy laws often stop short of requiring companies to provide increased transparency or to constrain decision-making without human involvement. Nonetheless, some advocate for a “right to explanation”, allowing individuals to question automated decisions that impact them by understanding how the algorithms operate.

Indeed, the aforementioned issues are just a few of the primary concerns identified by experts that require consideration by businesses and technology procurement teams. Given the rapid evolution of these legal areas, businesses may require assistance to stay abreast of local regulatory changes.

IBM is actively working to tackle these challenges by offering dependable and transparent AI solutions while advocating for compliance with relevant regulations. One crucial step in this process involves ensuring that companies’ AI systems can furnish sufficient explanations regarding decision-making processes, thereby empowering humans to comprehend and scrutinise automated decisions.

Additionally, IBM can assist in monitoring local regulatory changes related to technology, ensuring that companies remain compliant with applicable laws and can adapt their strategies accordingly.

“By providing ongoing updates and guidance on evolving regulatory landscapes, IBM helps organisations navigate complex legal frameworks while maintaining ethical and transparent AI practices,” Shaibal concluded.

Closing Remarks

Khang expressed his appreciation for the enthusiasm and contributions of the participants at the OpenGov Asia Breakfast Insight. He believes that such opportunities provide a valuable platform for exchanging ideas and concepts concerning the security challenges in adopting artificial intelligence (AI).

Khang reiterated the importance of forming a clear vision for deploying AI to ensure that organisations safeguard their AI ecosystems while harnessing the transformative potential of this technology to the fullest extent.

The Vietnam Cybersecurity Market is forecasted to experience substantial growth with a CAGR of 16.8% by 2027. This growth is propelled by the increasing demand for digitalisation and scalable IT infrastructure. Notably, Vietnam achieved a commendable rank of 25th out of 194 countries in the Global Cybersecurity Index (GCI) in 2020, indicating a positive trajectory in cybersecurity efforts.

Vietnam, as a pivotal member of ASEAN, holds a significant position in advancing AI technology within the region. Despite rising cybersecurity concerns, the country has witnessed a decline in cyberattacks in recent years. However, challenges persist within the cybersecurity landscape.

Alongside the advancement of AI technology, there are many risks and challenges, including cyberattacks, such as phishing, data breaches, and others.

Conducting regular cyber risk assessments, ensuring system access is protected by strong passwords and multifactor authentication, and developing a cybersecurity strategy are all effective ways to keep criminals at bay.

“Every year, cybercriminals make millions of dollars by finding security vulnerabilities in computer systems to exploit or trick companies into giving them system access,” acknowledges Khang. “Firms can minimise cyberattack impact by regularly backing up their critical information and having a clear response plan in case of a security breach.”

Mohit concurs that companies must have a well-prepared response strategy in place. Such a strategy should entail identifying the individuals responsible for managing the situation, determining the sequence of informing relevant parties about the incident and specifying the appropriate response protocols. Immediate actions, such as changing passwords or isolating compromised equipment, may be imperative in certain cases.

Further, firms could opt to conduct business continuity exercises to ensure that their processes and procedures are not only in place but also strictly followed and well understood by all relevant parties. These exercises could involve practising switching to an alternative system and restoring data using online and offline backups.
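
One step in such an exercise can be sketched in a few lines of Python: restore a file from backup and confirm it matches the primary copy byte for byte via checksums. This is a hypothetical illustration; the file names are invented, and a real drill would exercise actual backup tooling across online and offline copies.

```python
import hashlib
import pathlib
import tempfile

# Hypothetical continuity-exercise step: verify that a backup restores
# byte-for-byte, so recovery is tested before an incident, not during one.

def sha256_of(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def restore_drill(primary: pathlib.Path, backup: pathlib.Path, restored: pathlib.Path) -> bool:
    """Simulate restoring from backup and confirm integrity against the primary."""
    restored.write_bytes(backup.read_bytes())  # stand-in for a real restore job
    return sha256_of(restored) == sha256_of(primary)

with tempfile.TemporaryDirectory() as d:
    root = pathlib.Path(d)
    primary = root / "customers.db"       # invented names, for illustration
    backup = root / "customers.db.bak"
    restored = root / "restored.db"
    primary.write_bytes(b"critical records")
    backup.write_bytes(b"critical records")    # healthy backup passes the drill
    assert restore_drill(primary, backup, restored)
    backup.write_bytes(b"corrupted backup!!")  # the drill catches a bad backup
    assert not restore_drill(primary, backup, restored)
```

Running such a drill on a schedule turns the response plan from a document into a rehearsed procedure, which is the point of the exercises described above.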

“Establishing a clear response plan empowers firms to minimise the impact of cyberattacks and reduce company downtime,” Mohit concludes. “A proactive approach enables organisations to effectively mitigate potential damage and maintain operational continuity.”

The Cyber Security Agency of Singapore (CSA) is dedicated to securing Singapore’s cyberspace to support national security, power the digital economy, and protect the digital way of life. To reinforce national security, CSA continually monitors cyber threats, defends critical information infrastructure (CII), and implements mitigation measures to safeguard essential services.

The Singapore Cyber Emergency Response Team (SingCERT) responds to cybersecurity incidents for its Singapore constituents. It was set up to facilitate the detection, resolution and prevention of cybersecurity-related incidents on the Internet.

Singapore, represented by the CSA, has been working closely with ASEAN Member States (AMS) to establish the ASEAN Regional Computer Emergency Response Team (CERT) to promote and facilitate information-sharing on cyber incident response and to complement the operational efforts of individual national CERTs in each AMS.

At the 14th ASEAN Network Security Action Council meeting in August 2023, Singapore recommended that a single AMS host the ASEAN Regional CERT and proposed to host and fund its physical activities in Singapore.

Image credits: Association of Southeast Asian Nations

The ASEAN Regional CERT will enable stronger regional cybersecurity incident response coordination and critical information infrastructure (CII) protection cooperation, including for cross-border CII such as banking and finance, communications, aviation and maritime.

The 4th ASEAN Digital Ministers Meeting (ADGMIN) convened in Singapore in February to address the multifaceted challenges and opportunities in the digital realm, particularly amid the ongoing COVID-19 pandemic.

The meeting recognised advancements in implementing the ASEAN Digital Masterplan 2025 (ADM 2025) despite the pandemic and stressed the need for a robust and inclusive digital ecosystem. The ADM 2025 Mid-Term Review (MTR) assessed progress in key areas including trusted digital services, consumer protection, and broadband infrastructure.

The meeting highlighted the need to set governance standards for emerging technologies like AI, based on recommendations from the ADM 2025 MTR. It also emphasised the importance of collaborating on digital infrastructure and fostering trust among users for secure data sharing.

The endorsement of the ASEAN Guide on AI Governance and Ethics marked a significant milestone, reflecting the region’s commitment to harnessing AI technologies responsibly. The guide, which includes practical use cases for trustworthy AI deployment, is poised to serve as a valuable tool for promoting the responsible and ethical utilisation of AI solutions across ASEAN.

Additionally, the meeting welcomed initiatives aimed at enhancing regional cybersecurity capabilities, such as the establishment of the ASEAN Regional CERT. This initiative is expected to bolster incident response capabilities and facilitate timely information sharing and best practice exchange among ASEAN member states.

Moreover, the meeting acknowledged the importance of data governance and privacy protection in fostering digital trust. Efforts to promote the adoption of the ASEAN Model Contractual Clauses and facilitate seamless data transfers between ASEAN and the European Union were commended as significant steps towards enhancing regional data governance frameworks.

The meeting also highlighted the significance of digital infrastructure development, including the advancement of 5G networks and the establishment of frameworks to facilitate cross-border data flows, particularly in areas such as disaster management and logistics for rural areas.

In the realm of international cooperation, the meeting affirmed ASEAN’s commitment to deepening collaboration with dialogue and development partners, including China, Japan, the Republic of Korea, India, the United States, the European Union, ITU, and APT. These partnerships are crucial for advancing digital transformation, cybersecurity, and capacity-building efforts across the region.

Overall, the 4th ADGMIN underscored the collective resolve of ASEAN member states to navigate the evolving digital landscape, fostering innovation, inclusivity, and resilience to realise the full potential of the digital economy for the benefit of all stakeholders.

Following the endorsement of the financial model, Singapore will continue to work closely with AMS to operationalise the ASEAN Regional CERT to enhance collective cybersecurity within the region.

Artificial Intelligence (AI) has permeated all aspects of human life, including its crucial role in defence and security, which has become a focal point, particularly in the Asia-Pacific region. The integration of AI in defence has sparked extensive debates on its implications for national security, military strategies, and ethical considerations, indicating the depth of its impact and the need for careful evaluation.

Image credits: brin.go.id

One of the primary concerns revolves around how AI could revolutionise military, security, and defence operations. This revolution introduces concepts like autonomous weapons systems, unmanned vehicles, and cyber warfare capabilities, marking a significant shift in how AI is adopted in these fields.

Moreover, there is a growing interest in understanding how AI will shape defence strategies and operations by 2035, potentially altering the balance of power in the region and leading to new alliances and strategic rivalries.

While AI advancements promise strategic advantages, they also raise ethical dilemmas, especially regarding the use of AI in making life-or-death decisions, highlighting the need for robust ethical frameworks and guidelines. The evolving nature of AI and its rapid advancements necessitate continuous monitoring and evaluation to ensure its responsible and ethical use in defence and security contexts.

Anto Satriyo Nugroho, former Head of the Research Centre for Artificial Intelligence and Cyber Security (PRKAKS) at the Indonesian Agency for Research and Innovation (BRIN), emphasised the pivotal role of various AI technologies in advancing research in defence and security. He highlighted technologies like Computer Vision, Machine Learning (ML), Cyber Security, Natural Language Processing (NLP), and others, underlining their importance in enhancing defence and security systems’ capabilities.

Further, Achmad Farid Wadjdi, an Associate Expert Engineer at PRKAKS-BRIN, discussed the importance of understanding the concept of national defence, particularly in the context of the Internet of Battlefield Things (IoBT) and its applications in modern combat operations and smart warfare. He emphasised the need to ensure security in military operations when deploying IoBT technologies, indicating the complexity and critical nature of AI integration in defence systems.

Meanwhile, Eddy Maruli Tua Sianturi explained the conceptualisation of measuring the State Defence Index (IBN) to better understand citizens’ sense of pride, patriotism, nationalism, and willingness to defend the country. The IBN measurement provides a nuanced approach to grasping current socio-political dynamics, but it also requires addressing challenges such as data bias, privacy concerns, and security issues, highlighting the multidimensional nature of AI’s impact on defence and security.

PRKAKS-BRIN Associate Engineer Jemie Muliadi introduced the Intelligent Control System method for law enforcement and state sovereignty applications in a related context. This method effectively manages complex systems that are challenging to simplify, those with cross-coupling that are difficult to separate, and systems with significant parameter changes over time. Jemie emphasised that this method ensures precise control in fast-moving and uncertain situations, particularly in law enforcement and state sovereignty contexts, showcasing the versatility and potential of AI in enhancing national defence and security operations.

Integrating AI in defence and security represents a significant advancement with far-reaching implications. While AI offers numerous benefits in enhancing defence capabilities, it also poses ethical, legal, and security challenges that must be addressed through collaborative efforts between governments, researchers, and industry stakeholders.

By fostering responsible AI development and deployment practices, the Asia-Pacific region can harness AI’s transformative power while ensuring its citizens’ safety, security, and well-being. This approach involves developing robust AI governance frameworks, ensuring transparency and accountability in AI systems, and promoting international cooperation to address common AI-related challenges.

“In advancing defence and security with AI, Indonesia’s security will benefit from the strategic integration of AI technologies. These advancements can enhance Indonesia’s military capabilities, improve situational awareness, and strengthen its ability to respond to security threats effectively,” Jemie concluded.

The increasing prevalence of internet usage among young people presents a pressing need to protect them from exposure to harmful content, necessitating stronger regulations and heightened parental awareness to ensure their safety online. A recent report from a prominent advocacy organisation in New Zealand, urging more comprehensive and stringent regulations on online content, underscores the pressing need to bolster safeguards ensuring the safety of children’s digital interactions.

With the advent of the internet, young individuals have gained access to an unprecedented array of content, ranging from the educational and informative to the entertaining. However, this digital landscape has also exposed them to graphic imagery, adult material, and other objectionable content, including illegal sexual content, a challenge faced not only by New Zealand but by numerous countries worldwide.

The voluntary system administered by the Department of Internal Affairs in New Zealand currently blocks more than 400 websites depicting child sex abuse. Social Worker Rachel Taane has observed the psychological harm caused by exposure to illegal sexual content, noting that it can normalise harmful behaviours and create significant distress. She emphasises that children often feel embarrassed or afraid to seek help, fearing punishment or having their devices taken away.

Despite efforts by most internet providers to participate in a voluntary digital child exploitation filtering system, there is still much to be done. Internal Affairs Minister Brooke van Velden acknowledges the system’s successes in blocking harmful material but recognises the need for improvement. She emphasises the importance of balancing censorship and protecting children from harmful content while respecting freedom of expression.

The Makes Sense campaign has been actively advocating for better protection for children online, with an online petition signed by 10,000 individuals calling for stronger filters on illegal sexual behaviour. Organisers like Holly Brooker highlight the need for New Zealand to catch up with international standards, citing the UK foundation as an example of effective web-crawling and hashing technology to block child sexual abuse material.

The petition co-founder, Jo Robertson, echoed the concerns raised by the Department of Internal Affairs (DIA) about the alarming increase in this type of content, emphasising the need for immediate action to address the issue. Despite the challenges, there is a collective call for greater protection for children online and a recognition that more can be done to prevent accidental exposure to harmful content. Children are frequent targets of cyber risks, making them especially vulnerable.

OpenGov Asia has reported on New Zealand’s efforts to prevent harm to vulnerable communities. At the start of the 2024 academic year, law enforcement agencies urged parents and caregivers to be cautious when sharing back-to-school photos of their children online. While it is common to celebrate such milestones, authorities stress the importance of taking privacy precautions to shield children from potential risks in the digital realm.

Parents often share images of their children in school uniforms or at educational institutions, unknowingly disclosing identifying details that could be exploited. While such incidents are relatively rare, instances of inappropriate image use, including their inclusion in child exploitation material, underscore the importance of heightened awareness.

In response to these potential dangers, authorities advise parents and caregivers to take proactive measures to ensure their children’s online safety and protect their personal information. Police are recommending some essential tips to enhance online safety.

Similarly, as the government endeavours to enhance filters and upgrade the current system, it is paramount for parents to maintain vigilance over their children’s online activities. Utilising accessible parental control filters can help restrict access to inappropriate content and mitigate potential risks.

Safeguarding children from online harm requires a collaborative effort from all stakeholders, including governments, internet service providers, and families. Together, they can work towards creating a safer online environment for everyone.

In ASEAN, tackling the spread of disinformation has emerged as a significant concern, heightened during pivotal periods such as the COVID-19 pandemic and political cycles. The impact of misinformation on societal perceptions and decision-making underscores the need for concerted efforts to address it, and practitioners in government public relations navigate the complexities of managing information in this evolving landscape daily.

Indonesia previously developed guidelines for countering fake news and disinformation, intended as a reference for ASEAN countries in combating hoaxes and as an adaptive information-management mechanism for emerging issues. The Indonesian government has also launched three initiatives to handle hoaxes:

  1. Upstream, enhancing human resources (HR) capacity through digital literacy, so the public better understands the importance of critically filtering the information it receives.
  2. Collaborating with social media operators to strengthen monitoring and enforcement against perpetrators of false information on these platforms, combating hoaxes more effectively.
  3. Working with law enforcement and other relevant ministries and agencies to handle hoax-related cases directly and technically, quickly addressing the negative impacts of false information.
Image credits: kominfo.go.id

To demonstrate its commitment to this issue, the Director-General of Public Information and Communication of the Ministry of Communication and Information (Kominfo), Usman Kansong, stated that they are preparing strategic communication and crisis communication strategies related to the spread of disinformation in collaboration with the Government Communications Service International (GCSI) of the British government.

“I think this collaboration is very important because, in the digital age, we face what is called information disorder. So, with this workshop, we can formulate strategies, for example, to convey government programmes or how to tackle disinformation,” he explained.

The Government Communications Service International (GCSI) is a crucial part of the British Government Communications Service (GCS), with broad responsibilities for managing public communication and communication strategies for the UK government. GCSI not only works with local government agencies but also with international government agencies and private institutions to develop cooperation and manage effective public communication activities.

Director-General Usman Kansong explained that the three-day workshop was attended by 20 participants from government and institutional public relations. According to him, representatives from Indonesia and the UK discussed experiences and frameworks for dealing with information disorder.

“To formulate strategies to create a disinformation handling programme in the digital era. This can be used as material for Indonesia and possibly the UK to formulate communication strategies more effectively,” he said.

The Director-General of Public Information and Communication of the Ministry of Communication and Information (Kominfo) stated that the workshop was the first step towards long-term cooperation in public and digital communication.

“We can go to the UK (for a study visit). Because the UK has what is called National Security Communication, we can learn from the UK how to mobilise government communication in a security context,” he explained.

Director-General Usman Kansong explained that Kominfo will take concrete steps to follow up on the results of the workshop. One is to broaden participation: this workshop drew only a small group, mainly from Kominfo itself, and although some other ministries and institutions were represented, the ministry plans to involve more stakeholders in follow-up efforts.

Additionally, the Ministry of Communication and Information will conduct further study visits and workshops to improve understanding and capacity in managing crisis communication. Furthermore, the Ministry of Communication and Information will form a crisis communication team to deal with situations that require a quick and appropriate response from the government.

“Deputy Minister of Communication and Information Nezar Patria has instructed us to gather workshop participants, especially those from Kominfo, to implement what they have learned. Because what is most important is the execution,” said Director-General Usman Kansong, emphasising the importance of putting the workshop’s results into practice.

With these steps, the Ministry of Communication and Information hopes to be more prepared and responsive in managing crisis communication in the future.

The Legal Affairs Division has taken a significant step forward in addressing cybercrime with the preparation of a working draft for the Digital Safety Bill 2023, as announced by Minister Datuk Seri Azalina Othman Said in the Prime Minister’s Department (Law and Institutional Reform). This draft, serving as an initial framework, aligns with the vision of Prime Minister Datuk Seri Anwar Ibrahim and aims to keep pace with evolving technological landscapes.

Image credits: Bernama

Azalina revealed these developments during the Working Committee Meeting on the Drafting of New Laws Related to Cybercrime No. 2/2024, co-chaired by her and Communications Minister Fahmi Fadzil at the Parliament building.

Stressing the necessity of specific procedural legislation to tackle existing and potential challenges posed by technological advancements, Azalina highlighted the imperative to prepare for the continuous evolution of artificial intelligence (AI) technology to maintain a proactive stance against cyber threats.

The meeting, attended by Deputy Communications Minister Teo Nie Ching and Deputy Minister in the Prime Minister’s Department (Law and Institutional Reform) M. Kulasegaran, underscores the government’s commitment to enhancing cybersecurity measures and ensuring the safety and integrity of digital spaces in the nation.

On June 15 last year, Prime Minister Datuk Seri Anwar Ibrahim said that the National Cyber Security Committee agreed to expedite the formulation of the Cyber Security Bill to ensure all relevant aspects of the legislation are finalised.

Later in November, the Cabinet tentatively approved the drafting of the Cybersecurity Bill, prioritising regulatory authority and law enforcement, with Prime Minister Anwar highlighting plans to reinforce the National Cyber Security Agency (NACSA) as the primary national cybersecurity entity and implementer of the proposed legislation.

Prime Minister Anwar emphasised the bill’s aim to establish a comprehensive cybersecurity law to complement existing regulations, a sentiment conveyed by Defence Minister Datuk Seri Mohamad Hasan during a session on the Cybersecurity Bill.

The significant number of cyber incidents reported by the National Cyber Coordination and Command Centre (NC4) and NACSA underscores the urgent need for strengthened cybersecurity protocols. Given cyberspace’s growing importance in national security and geopolitics, Prime Minister Anwar highlighted the escalating threat of cyber warfare, citing concerns over vulnerabilities such as information leakage, cybercrime, and the exploitation of technological weaknesses by actors with geopolitical agendas.

The Malaysia Cyber Security Strategy (MCSS) 2020-2024, comprising five core pillars, 12 strategies, and 35 action plans, outlines the nation’s cybersecurity agenda, including legislative initiatives like the Cybersecurity Bill, capacity building for cybersecurity professionals, fostering public-private collaboration, and enhancing international relations.

Minister Azalina Said, in collaboration with Communications Minister Fahmi Fadzil, spearheaded a crucial Working Committee Meeting on Cybercrime Legislation Drafting in Kuala Lumpur in February this year. Attended by representatives from multiple ministries and agencies, this gathering underscored the government’s concerted effort to address cyber threats comprehensively.

During the meeting, Minister Azalina emphasised the imperative need for new legislation to combat cybercrime effectively, aligning with the Madani government’s commitment to bolstering cybersecurity measures nationwide. With the pervasive influence of online services in modern life, she highlighted the escalating threat posed by cybercrime and advocated for proactive strategies to mitigate its impact.

Against the backdrop of Malaysia’s existing legal framework governing cybersecurity, including laws such as the Computer Crimes Act 1997 and the Communications and Multimedia Act 1998, Minister Azalina stressed the necessity of the Cyber Security Bill. This proposed legislation seeks to establish a robust legal framework to safeguard digital infrastructure and protect citizens’ online activities in the face of evolving cyber threats.

The National Security Agency (NSA) and the Cybersecurity and Infrastructure Security Agency (CISA) have teamed up to release a comprehensive guide aimed at bolstering cloud security for organisations. Titled “Top Ten Cloud Security Mitigation Strategies,” the guide equips cloud customers with essential practices to secure their data as they migrate to cloud environments.

In an era where digital transformation is accelerating, the migration of data and operations to cloud platforms has become commonplace. However, this transition brings with it a myriad of security concerns, as evidenced by the increasing frequency of cyberattacks targeting cloud infrastructure. Recognising the critical need to address these challenges, the NSA and CISA have collaborated to compile a set of ten cybersecurity information sheets (CSIs), each focusing on a different aspect of cloud security.

One of the primary themes emphasised in the report is the importance of upholding the cloud-shared responsibility model. This model delineates the responsibilities between cloud service providers and their customers regarding security measures. By understanding and adhering to this model, organisations can ensure that they are taking appropriate steps to safeguard their data within the cloud environment.

Another key area highlighted in the report is the implementation of secure identity and access management practices. Proper management of user identities and access controls is essential for preventing unauthorised access to sensitive data stored in the cloud. Through robust authentication mechanisms and access policies, organisations can fortify their defences against potential security breaches.
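As an illustration only (the CSIs do not prescribe code), the deny-by-default access checks that such identity and access management policies encode can be sketched as follows; the role names and permissions here are invented for the example:

```python
# Minimal sketch of a role-based access check. Role names and
# permissions are illustrative assumptions, not from the NSA/CISA CSIs.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "operator": {"read", "restart"},
    "admin": {"read", "restart", "delete"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles or actions get no access."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("viewer", "delete"))  # False
print(is_allowed("admin", "delete"))   # True
```

The key design choice is that any role or action not explicitly granted is refused, which mirrors the least-privilege principle the report advocates.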

In addition, the report emphasises the critical importance of implementing secure key management practices, robust encryption mechanisms, and effective network segmentation strategies within cloud environments. These measures play a pivotal role in protecting data both when it is stored and when it is being transferred, thereby reducing the likelihood of data breaches and unauthorised interception.
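For instance, encryption in transit can be enforced on the client side. This minimal Python sketch, an illustration of ours rather than anything taken from the report, builds a TLS context that requires certificate validation and refuses protocols older than TLS 1.2:

```python
import ssl

def hardened_tls_context() -> ssl.SSLContext:
    # create_default_context() already enables certificate verification
    # and hostname checking; we additionally pin a protocol floor.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy TLS/SSL
    return ctx

ctx = hardened_tls_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```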

Furthermore, the report highlights the significance of securing data throughout its entire lifecycle in the cloud. This includes implementing stringent security measures for data storage, processing, transmission, and disposal. By doing so, organisations can effectively protect their data against a wide range of evolving threats.

Another critical aspect covered in the report is the defence of continuous integration/continuous delivery (CI/CD) environments. As organisations increasingly adopt DevOps practices and automate their software development processes, securing CI/CD pipelines becomes paramount to prevent the introduction of vulnerabilities and malicious code into production environments.
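One common safeguard in this vein is verifying artifact integrity before deployment. The sketch below is a hypothetical illustration, not a prescription from the report: it compares a build artifact against the SHA-256 digest recorded at build time, so a tampered artifact never reaches production.

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact matches its recorded digest."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

artifact = b"release-v1.2.3"                      # hypothetical artifact
digest = hashlib.sha256(artifact).hexdigest()     # recorded at build time

print(verify_artifact(artifact, digest))      # True
print(verify_artifact(b"tampered", digest))   # False
```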

Moreover, the report emphasises the enforcement of secure automated deployment practices through infrastructure as code (IaC). By treating infrastructure as code and automating deployment processes, organisations can ensure consistency, repeatability, and security in their cloud environments.
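A policy-as-code check of the kind IaC pipelines can run before deployment might look like the following sketch; the resource fields (`public_access`, `encrypted`) are assumptions for illustration, not a real provider schema:

```python
# Illustrative pre-deployment policy check against a declarative
# resource spec. Field names are assumptions made for this sketch.
def violations(resources: list[dict]) -> list[str]:
    problems = []
    for r in resources:
        if r.get("public_access"):
            problems.append(f"{r['name']}: public access enabled")
        if not r.get("encrypted", False):
            problems.append(f"{r['name']}: encryption at rest disabled")
    return problems

spec = [
    {"name": "logs-bucket", "public_access": False, "encrypted": True},
    {"name": "web-bucket", "public_access": True, "encrypted": False},
]
for problem in violations(spec):
    print(problem)
```

Because the infrastructure is declared as data, the same check runs identically on every deployment, which is the consistency and repeatability benefit the report describes.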

The complexities introduced by hybrid cloud and multi-cloud environments are also addressed in the report. As organisations adopt hybrid and multi-cloud strategies to meet their diverse needs, they must navigate the unique security challenges posed by these environments effectively.

Additionally, the report highlights the risks associated with managed service providers (MSPs) in cloud environments. While MSPs offer valuable services and expertise, organisations must be vigilant in vetting and managing their relationships with MSPs to mitigate potential security risks.

The report stresses the importance of managing cloud logs for effective threat hunting. By aggregating and analysing logs generated by cloud services, organisations can proactively identify and respond to security incidents before they escalate.
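As a toy illustration of such log-based threat hunting (the event format here is an assumption, not any cloud provider's schema), aggregated events can be scanned for repeated failed logins from a single source:

```python
from collections import Counter

def suspicious_ips(events: list[dict], threshold: int = 3) -> set[str]:
    """Flag source IPs with at least `threshold` failed logins."""
    failures = Counter(
        e["ip"] for e in events if e["action"] == "login_failed"
    )
    return {ip for ip, n in failures.items() if n >= threshold}

logs = [
    {"ip": "203.0.113.9", "action": "login_failed"},
    {"ip": "203.0.113.9", "action": "login_failed"},
    {"ip": "203.0.113.9", "action": "login_failed"},
    {"ip": "198.51.100.4", "action": "login_ok"},
]
print(suspicious_ips(logs))  # {'203.0.113.9'}
```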

The “Top Ten Cloud Security Mitigation Strategies” initiative by the NSA and CISA provides invaluable guidance to organisations seeking to enhance the security of their data in cloud environments. The two agencies envision these strategies as foundational advice that every cloud customer should follow. By implementing them effectively, organisations can mitigate the risks associated with cloud services and bolster their defences against cyber threats in an increasingly digital landscape.

The Singapore Police Force (SPF) urges the public to safeguard their SingPass credentials. Scammers have been posting fraudulent job offers online, requesting SingPass details under the guise of job screening. Since January 1, 2024, at least 47 individuals have fallen victim to such schemes, often encountering these offers on platforms like Telegram or WhatsApp.

Victims are instructed to change their SingPass email and phone number, provide their password, and share NRIC screenshots. Scammers then exploit this information to register multiple bank accounts or obtain profile data for illicit purposes.

Image credits: Adapted from Annual Scams and Cybercrime Brief 2023

Scam cases rose nearly 50% to 50,376 in 2023 from 33,669 in 2022, yet the Singapore Police Force’s proactive measures against scams and cybercrime are yielding results, and the financial picture is improving. Despite the higher caseload, total losses fell 1.3% to $651.8 million in 2023 from $660.7 million in 2022, the first decline in five years and a sign of progress in scam prevention.

Additionally, the average amount lost per scam case dropped significantly, from $20,824 in 2022 to $13,999 in 2023, a decrease of about 32.8%. Notably, 55.6% of scam cases involved losses of $2,000 or less, suggesting improved resilience against scams among the populace.
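The reported percentages follow directly from the figures above, as a quick calculation confirms:

```python
# Reproducing the year-on-year percentages from the article's figures.
cases_2022, cases_2023 = 33_669, 50_376
loss_2022, loss_2023 = 660.7, 651.8    # total losses, S$ millions
avg_2022, avg_2023 = 20_824, 13_999    # average loss per case, S$

case_rise = (cases_2023 - cases_2022) / cases_2022 * 100
loss_drop = (loss_2022 - loss_2023) / loss_2022 * 100
avg_drop = (avg_2022 - avg_2023) / avg_2022 * 100

print(f"cases up {case_rise:.1f}%")      # ~49.6%, i.e. "nearly 50%"
print(f"losses down {loss_drop:.1f}%")   # ~1.3%
print(f"avg loss down {avg_drop:.1f}%")  # ~32.8%
```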

This positive trajectory can be attributed to the collaborative efforts of various agencies and stakeholders, including the Singapore Police Force (SPF), the Infocomm Media Development Authority (IMDA), Cyber Security Agency of Singapore (CSA), Smart Nation Group (SNG), Monetary Authority of Singapore (MAS), and private sector partners. Their coordinated actions aimed at preventing scams and raising public awareness have contributed significantly to mitigating losses and empowering individuals to protect themselves.

While challenges persist, particularly in scams involving social engineering and deception via social media platforms, individual vigilance remains crucial. By staying informed, exercising caution, and leveraging the resources provided by government agencies and stakeholders, individuals can fortify their defences against evolving cyber threats.

The breakdown of scam types reveals a concerning trend, with job scams, e-commerce scams, fake friend call scams, phishing scams, and investment scams dominating the landscape. However, heightened awareness and concerted efforts are driving progress in scam prevention, offering hope for a safer digital environment for all.

Singapore Police Force has significantly escalated its efforts to counter the rising threat of scams and cybercrime, employing a multifaceted approach encompassing enforcement, engagement, and education. SPF’s strategy relies on strong public-private partnerships, particularly through the Anti-Scam Command (ASCom), collaborating with over 100 institutions like banks, fintech companies, and e-commerce platforms. This facilitates swift freezing of accounts and fund recovery, reducing victim losses. Additionally, SPF conducts targeted enforcement operations against scam tactics, resulting in the termination of thousands of phone lines and the apprehension of fraudsters.

SPF collaborates with foreign law enforcement agencies to dismantle transnational scam syndicates, leading to successful joint operations and arrests of perpetrators. Participation in internationally coordinated operations like INTERPOL’s Operation First Light and Operation HAECHI showcases SPF’s global commitment to combating scams.

Alongside enforcement, SPF proactively prevents scams through initiatives like Project A.S.T.R.O., which sends SMS alerts to potential victims, helping them recognise and avoid scams. Outreach programs target various groups, like migrant workers and the elderly, raising awareness and empowering communities to report scams.

Education is vital in SPF’s anti-scam efforts. The Scam Public Education Office (SPEO) leads public awareness campaigns and shares anti-scam resources. Platforms like the ScamShield app and the Add, Check, Tell framework empower individuals to protect themselves against scams. Additionally, collaborative efforts with content creators and organisations enhance anti-scam messaging, fostering a collective response against scams.

SPF’s holistic strategy underscores its commitment to protecting the community from scams and cybercrime. Through collaborative cybersecurity initiatives, there’s been a decrease in financial losses despite an increase in scams, demonstrating improved resilience and public safety through multifaceted approaches in compliance, caution and awareness.


PARTNER

Qlik’s vision is a data-literate world, where everyone can use data and analytics to improve decision-making and solve their most challenging problems. A private company, Qlik offers real-time data integration and analytics solutions, powered by Qlik Cloud, to close the gaps between data, insights and action. By transforming data into Active Intelligence, businesses can drive better decisions, improve revenue and profitability, and optimize customer relationships. Qlik serves more than 38,000 active customers in over 100 countries.

PARTNER

CTC Global Singapore, a premier end-to-end IT solutions provider, is a fully owned subsidiary of ITOCHU Techno-Solutions Corporation (CTC) and ITOCHU Corporation.

Since 1972, CTC has established itself as one of the country’s top IT solutions providers. With 50 years of experience, headed by an experienced management team and staffed by over 200 qualified IT professionals, we support organizations with integrated IT solutions expertise in Autonomous IT, Cyber Security, Digital Transformation, Enterprise Cloud Infrastructure, Workplace Modernization and Professional Services.

Well-known for our strengths in system integration and consultation, CTC Global proves to be the preferred IT outsourcing destination for organizations all over Singapore today.

PARTNER

Planview has one mission: to build the future of connected work. Our solutions enable organizations to connect the business from ideas to impact, empowering companies to accelerate the achievement of what matters most. Planview’s full spectrum of Portfolio Management and Work Management solutions creates an organizational focus on the strategic outcomes that matter and empowers teams to deliver their best work, no matter how they work. The comprehensive Planview platform and enterprise success model enables customers to deliver innovative, competitive products, services, and customer experiences. Headquartered in Austin, Texas, with locations around the world, Planview has more than 1,300 employees supporting 4,500 customers and 2.6 million users worldwide. For more information, visit www.planview.com.

SUPPORTING ORGANISATION

SIRIM is a premier industrial research and technology organisation in Malaysia, wholly owned by the Minister of Finance Incorporated. With over forty years of experience and expertise, SIRIM is mandated as the machinery for research and technology development, and the national champion of quality. SIRIM has always played a major role in the development of the country’s private sector. By tapping into our expertise and knowledge base, we focus on developing new technologies and improvements in the manufacturing, technology and services sectors. We nurture the growth of small and medium enterprises (SMEs) with solutions for technology penetration and upgrading, making SIRIM an ideal technology partner for SMEs.

PARTNER

HashiCorp provides infrastructure automation software for multi-cloud environments, enabling enterprises to unlock a common cloud operating model to provision, secure, connect, and run any application on any infrastructure. HashiCorp tools allow organizations to deliver applications faster by helping enterprises transition from manual processes and ITIL practices to self-service automation and DevOps practices. 

PARTNER

IBM is a leading global hybrid cloud and AI, and business services provider. We help clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs and gain the competitive edge in their industries. Nearly 3,000 government and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM’s hybrid cloud platform and Red Hat OpenShift to affect their digital transformations quickly, efficiently and securely. IBM’s breakthrough innovations in AI, quantum computing, industry-specific cloud solutions and business services deliver open and flexible options to our clients. All of this is backed by IBM’s legendary commitment to trust, transparency, responsibility, inclusivity and service.