Earlier this year, OpenGov spoke to Adjunct Professor David Watts, Commissioner for Privacy and Data Protection for the State of Victoria in Australia to learn about the Victorian Protective Data Security Framework.
In July, Commissioner Watts was appointed by the United Nations (UN) Special Rapporteur on the Right to Privacy to lead a study on Big Data and Open Data. OpenGov caught up with him to discuss his new role, and the objectives and scope of the study. Commissioner Watts talked about the importance of defining the terms and of studying the risks and mitigation strategies.
Can you tell us about your new role as the lead on the UN global study on Big Data and Open Data?
Let me start with some background. The UN Human Rights Council’s Special Rapporteur on the Right to Privacy, Professor Joe Cannataci, has a mandate to “raise awareness concerning the importance of promoting and protecting the right to privacy, including with a view to particular challenges arising in the digital age, as well as concerning the importance of providing individuals whose right to privacy has been violated with access to an effective remedy, consistent with international human rights obligations”.
In his first report to the Human Rights Council in March 2016, the SRP announced his intention to focus on a number of significant privacy themes, one of them being Big Data and Open Data. The aim of the project I am leading is to produce a report on Big Data and Open Data for the Special Rapporteur for presentation to the UN Human Rights Council and General Assembly in the latter part of 2017.
We have divided the Big Data/Open Data theme into a number of areas of inquiry. The first area of inquiry relates to how best to frame the issues. The starting point is to define both Big Data and Open Data.
Interestingly, there is no agreed definition of Big Data. There are only descriptions. People talk about the 3, 4 or 5 Vs. When last I counted it was 8 Vs. Even the National Institute of Standards and Technology (NIST) in the US, which attempted a definition, only arrived at a description. The lack of a definition is a conceptual issue.
Defining Open Data is easier. The definition that I find most useful is that Open Data is data that can be freely used, reused and redistributed by anyone, subject only at most to a requirement to attribute and to share alike.
Open Data has become a mantra for many governments, including the UK, the US, Australia and Singapore. They believe it will stimulate research and development, drive innovation and generate knowledge to improve society. But there are risks. One of the most significant is the risk of re-identification of de-identified data. For example, Australia’s Department of Health released over a billion lines of “de-identified” personal health data a few weeks ago but had to withdraw it shortly afterwards because Melbourne University researchers re-identified it.
How would success be evaluated?
The extent to which the paper provides the global community with a basis for a sensible and informed debate.
By our ability to produce a report that defines and demystifies the issues, makes sense of them, identifies the risks and opportunities, and points to solutions.
Who are the major stakeholders you would want to involve in this?
The Special Rapporteur has indicated that he wishes to be everybody’s Special Rapporteur. He has indicated he will be inclusive and is interested in all points of view. So, contributions from civil society, the academy, government, the private sector, non-government organisations and special-interest groups like health consumers all have an important role to play.
Broad engagement is needed. Often the debates around privacy become Euro-centric or US-centric. It is very important to understand Asian perspectives.
The Asia-Pacific region is growing very rapidly and is frequently at the forefront of developing new services, new ICT technologies and new approaches to data.
What is the scope of the report?
The report will be divided into parts:
- Framing the issues – defining Big Data, Open Data, structured and unstructured data
- Technologies and processes – analytics; the processes for data management, analysis and interpretation; algorithms, data linkage, distributed ledgers, privacy-enhancing technologies etc.
- Participants – public sector, private sector, Information intermediaries and what each seeks to do with Big Data and Open Data
- Value – public value, private value and individual value
- The ethical, legal and regulatory context
- Addressing the risks
The difficulty with the subject matter is that it moves so fast. This means that our understanding of new technologies must constantly evolve. One issue I have noticed is that much of the discourse about big data is contested. A key task will be to sift what is authoritative from what is not.
What are some of the key benefits and risks in your view?
Although some say that big data’s benefits are unproven, there appears to be a broadening consensus that it can produce insights: for example, evidence relevant to some of our most complex social and economic problems, such as climate change and healthcare, and to addressing serious crime and corruption.
In Africa, mobile telephone records were used to track Ebola. Frequently, the disease vectors could be tracked more effectively this way than by reports from healthcare facilities.
The challenge is how to harness these benefits while protecting privacy. We need to ensure that these techniques do not become so intrusive that we have no personal and private space to ourselves. Drawing global attention to these issues is an important part of the project.
As I mentioned earlier, one of the most controversial issues regarding Big Data and Open Data is whether personal information can ever be truly de-identified.
If you assemble various government data sets, or combine them with other commercially purchased data, there is a significant risk of the data being re-identified. Blanket assertions that released data has been de-identified are problematic. The project’s focus on de-identification will be evidence-based, providing an opportunity to test the claims made for de-identification solutions.
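The linkage attack described here can be illustrated with a minimal sketch. All names, fields and records below are invented for illustration; real attacks join far larger datasets, but the mechanism is the same: quasi-identifiers shared between a “de-identified” release and a public register are enough to re-attach identities.

```python
# Hypothetical "de-identified" health release: names removed, but
# quasi-identifiers (postcode, birth year, sex) left intact.
deidentified_health = [
    {"postcode": "3000", "birth_year": 1980, "sex": "F", "diagnosis": "asthma"},
    {"postcode": "3121", "birth_year": 1975, "sex": "M", "diagnosis": "diabetes"},
]

# Hypothetical public register containing the same quasi-identifiers
# alongside names (e.g. an electoral roll or purchased marketing data).
public_register = [
    {"name": "A. Citizen", "postcode": "3000", "birth_year": 1980, "sex": "F"},
    {"name": "B. Resident", "postcode": "3121", "birth_year": 1975, "sex": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")

def link(release, register):
    """Join two datasets on shared quasi-identifiers, re-identifying records."""
    matches = []
    for record in release:
        key = tuple(record[q] for q in QUASI_IDENTIFIERS)
        for person in register:
            if tuple(person[q] for q in QUASI_IDENTIFIERS) == key:
                matches.append({"name": person["name"],
                                "diagnosis": record["diagnosis"]})
    return matches

print(link(deidentified_health, public_register))
```

With only two records the join is trivial, but the point scales: any combination of attributes rare enough to be unique in both datasets acts as a de facto identifier.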
Can you tell us about some of the mitigation measures you are looking at?
Non-exhaustively, we are looking at Privacy Enhancing Technologies, de-identification, differential privacy, distributed ledger technology, the semantic web [1], and other technological approaches together with security controls, legal controls at the national and international level, business processes, policies and standards, frameworks, governance, structures, capability, transparency and resourcing.
I am sceptical about technological solutions that assert that they are going to be perfect in every circumstance.
For example, at the moment the jury is out over whether differential privacy works. Differential privacy involves the preservation of data sets, sufficient to answer queries accurately, but reducing the likelihood of disclosing personal information. Some say that the noise added to the data makes it useless. Others say that the noise that is added to the data is protective of personal information. Which claims stand up to scrutiny?
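The noise-addition idea at the heart of differential privacy can be sketched with the standard Laplace mechanism. This is a teaching sketch, not a vetted implementation; the epsilon value and data are illustrative only.

```python
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise.

    The difference of two independent exponentials with mean `scale`
    is Laplace-distributed with that scale.
    """
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_count(records, predicate, epsilon=0.5):
    """Answer a counting query with noise calibrated to sensitivity 1.

    Adding or removing one person changes a count by at most 1, so
    Laplace noise with scale 1/epsilon gives epsilon-differential privacy:
    the answer is accurate in aggregate, but no individual's presence
    can be confidently inferred from it.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 35, 41, 52, 67, 71, 29]
noisy = private_count(ages, lambda a: a >= 40)
print(round(noisy))  # close to the true count of 4, but perturbed
```

The tension the Commissioner describes is visible in the `epsilon` parameter: a smaller epsilon adds more noise (stronger privacy, less useful answers), a larger one less.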
The semantic web can be used to attach permissions that accompany information, from which the automation of information transactions can occur. Perhaps personal information could be stored in the equivalent of an electronic envelope, associated with an application that can negotiate how the information can be used through smart contracting.
Distributed ledger technology has been mooted as being able to protect privacy and security. Distributed ledger implementations such as blockchain show good potential to make transactions accountable, transparent and secure to anyone who can read the blocks in the chain. But in many cases you may not want your private personal information to be transparent. Maybe there is an encryption solution for this. These are early days. My feeling is, it is a technology that has a range of potential benefits, probably some of them haven’t been invented yet. Again, we need to examine the claims and test them.
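The tamper-evidence property referred to here can be sketched with a toy hash chain. This is a teaching sketch of the core idea only; there is no network, consensus mechanism or encryption here, which is precisely why, as noted above, transparency of the ledger contents remains a separate privacy question.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    """Append a block committing to the hash of its predecessor."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "data": data})
    return chain

def verify(chain):
    """Check that every block still commits to its predecessor's hash."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, "grant access to dataset A")
append_block(chain, "revoke access to dataset A")
print(verify(chain))           # the intact chain verifies
chain[0]["data"] = "tampered"  # altering any past entry...
print(verify(chain))           # ...breaks every later link
```

Because each block commits to the previous block's hash, rewriting history is detectable by anyone holding the chain, which is what makes such ledgers accountable and transparent to their readers.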
Technology moves quickly. So, to underpin regulatory and oversight efforts on one or a suite of technologies seems perilous to me. I think technologies can be used as ways to implement broader policies and regulatory frameworks. It’s important to have principles, standards, frameworks and structures that are risk-based and technology neutral.
What are your thoughts on data ownership?
When different professions and disciplines communicate, they face terminology problems. As a lawyer I understand confidentiality to mean a certain thing. But security professionals have a slightly different meaning and policy professionals have yet another meaning.
Data ownership is one of those expressions. Legally the term doesn’t make sense. There is no property right in information per se. ICT professionals understand data ownership as a stewardship issue that goes to who has management responsibility and who is accountable for the data.
So I think “data ownership” has to be used with care.
It’s better to manage risk against the indicators of accountability and responsibility, rather than trying to work out who owns data.
[1] The Semantic Web is a set of data standards promoting common data formats and exchange protocols on the web.
Singapore’s Infocomm Media Development Authority (IMDA) has recently updated its platform known as Chief Technology Officer-as-a-Service (CTO-as-a-Service). The platform enables SMEs to self-assess their digital readiness and needs at any time and from any location, as well as access market-proven and cost-effective digital solutions and engage digital consultants for in-depth advisory and project management services.
This is for any business entity that wants to know how to start going digital, understand what type of solutions to adopt for its specific business challenge, or choose the solution that best meets its needs.
An enterprise can benefit from CTO-as-a-Service by being able to:
- Conduct a self-evaluation of its digital readiness and pinpoint its gaps and needs in terms of digitalisation;
- Study other Small and Medium Sized Enterprises (SMEs) that have carried out digitalisation projects successfully;
- Receive digital solution suggestions based on the business’s needs and profile; and
- Evaluate the features and costs of various digital solutions.
There are more than 450 subsidised digital solutions available for selection, including those that address industry-specific or general business needs, as well as those that serve to streamline operations, increase business sales revenue, or ensure business resiliency.
The business can also work with digital consultants from the designated operators through CTO-as-a-Service for digital advisory services that help it:
- Seek a deeper comprehension of its business priorities and needs;
- Create training plans and digital solutions specifically for its businesses;
- Include fundamental data usage, protection, and cybersecurity risks in the digitalisation process.
The business may also ask digital consultants to assist with project managing the rollout of its digitalisation initiatives.
Eligible businesses can use digital advisory and project management services for free for the first time. Should the businesses want to keep using digital consultants, future usage or service enhancement will be based on commercial agreements.
Any company that satisfies the requirements below is qualified to use free project management and digital advisory services for the first time:
- Licensed and active in Singapore;
- A minimum of 30 per cent local shareholding;
- Enterprise’s group employment size is no more than 200 employees, or the group’s annual sales turnover is no more than S$100 million;
- Has never previously used CTO-as-a-Service digital consultants.
Meanwhile, SMEs are the backbone of Singapore’s economy. They employ two-thirds of the country’s workers and contribute almost half of Singapore’s GDP. Since digital technology is changing every part of Singapore’s economy, SMEs need to take advantage of digital technologies to grow and do well.
The SMEs Go Digital programme, which was started by the IMDA in April 2017, is meant to make going digital easy for SMEs. More than 80,000 SMEs have used the programme’s digital solutions.
Enterprises can also use advanced and integrated solutions to improve their capabilities, strengthen business continuity measures, and build longer-term resilience. Solutions that are supported by government agencies solve common problems at the enterprise level on a large scale, help enterprises adopt new technologies, and make it easier for enterprises to do business within or across sectors.
IMDA works with sector-led agencies and industry players to find advanced and integrated digital solutions that can be supported and are relevant to their sectors. Companies that want to use these solutions can check the IMDA website to find out when they can apply for each one.
Costs for hardware, software, infrastructure, connectivity, cybersecurity, integrations, development, improvement, and project management can be covered by funding support. With this, the agency has kept helping businesses, and the list of solutions that are supported will grow, with an emphasis on AI-enabled and cloud-based solutions.
Taiwan’s City Science Lab @ Taipei Tech demonstrated a series of cutting-edge AI applications. The lab exhibited advanced AI applications and their R&D results, such as a mobile robot, an AI robotic fish and the Campus Rover.
The cross-disciplinary R&D and teaching laboratory aims to be a global technology and talent exchange platform. The Massachusetts Institute of Technology (MIT) and Taipei Tech came together to jointly establish City Science Lab @ Taipei Tech.
“Through developing advanced AI technology and big data system, we plan to make Taiwan the island of high-end technology,” said Yao Leehter, Taipei Tech Chair Professor of the Department of Electrical Engineering.
Yao indicated that Taipei Tech alumni strongly support the lab. The lab also collaborates with Kent Larson, the leader of the MIT City Science Lab, as it works towards becoming an international platform for technology and talent exchange.
Taipei Tech has also adopted, and jointly promotes with MIT, the Undergraduate Research Opportunities Programme. Known as UROP, the programme provides sufficient resources for students and cultivates a new generation of scientific researchers. MIT first rolled out the programme in 1969.
For students to learn the most modern and state-of-the-art technology applications, the lab provides advanced equipment for R&D purposes, such as mobile robots. The agile, mobile robot can adapt to complex terrains and is equipped with LIDAR, infrared, and stereo vision sensors, which can draw 3D point cloud maps in real-time and detect and dodge obstacles. The mobile robot is used in decommissioned nuclear power plants, factories, construction sites, and offshore drilling oil platforms. Another mobile robot use case is for patrol, troubleshooting, and leak detection.
In addition, the lab showcased its R&D results, from the AI robotic fish to advanced instrumental equipment. The robotic fish is a streamlined robot designed to resemble a real fish. It comprehends and mimics the motion model of swimming fish through machine learning.
The robot can swim underwater in a lifelike way. To mimic fish movement closely, researchers spent significant time collecting massive amounts of movement data from real fish, documenting and analysing their swimming performance. They then used AI technology and programming to control the motor movement of the robotic fish.
The team then spent a year adjusting the robotic fish to make its swimming movement look like that of a real fish. Propulsion efficiency and swimming performance in robotic fish are considered among the most critical subjects in bionics.
“The robotic fish is useful for biological research and can also be used to carry out underwater operations and examine water quality,” said Yao.
Recently, the fish robot was involved in a movie production. During the design process, the production house team suggested adding a “cloth” of fish skin and scales to make it more lifelike. The company also came up with the idea of using a magnet to attach the scales to the body of the robotic fish. The Taiwan Textile Research Institute and a local design research group joined the brainstorming and production process to finish the golden fish’s final look onscreen.
Moreover, the Campus Rover, developed by Professor Yao’s team in cooperation with the Taipei Tech Department of Industrial Design, demonstrates practical AI applications in real life. For example, campus or express hospital delivery services can use the self-charging robot to ensure delivery safety.
In a process that could be compared to travelling through a wormhole, researchers from the Massachusetts Institute of Technology, California Institute of Technology, Harvard University, and other institutions sent quantum information across a quantum system. The Sycamore quantum processor device was used in this experiment, paving the way for future quantum-computer research into gravitational physics and string theory.
Calculations from the experiment showed that qubits moved from one system of entangled particles to another in a model of gravity, even though the experiment did not produce a disruption of physical space and time in the sense that one might understand the term “wormhole” from science fiction.
A wormhole connects two far-off regions of spacetime. In the general theory of relativity, nothing is allowed to travel through the wormhole. But in 2019, some scientists hypothesised that a wormhole created from entangled black holes might be traversable.
By introducing a direct interaction between the distant spacetime regions and using a straightforward quantum dynamical system of fermions, physicists have discovered a quantum mechanism to make wormholes traversable. This type of “wormhole teleportation” was also created by researchers using entangled quantum systems, and the outcomes were confirmed using classical computers.
In this experiment, researchers used the Sycamore 53-qubit quantum processor to teleport a quantum state from one quantum system to another to send a signal “through the wormhole.” The research team had to find entangled quantum systems that behaved as predicted by quantum gravity while also being small enough to run on current-generation quantum computers.
Finding a simple enough many-body quantum system that maintains gravitational properties was a key challenge for this work. The team gradually reduced the connectivity of highly interacting quantum systems using machine learning (ML) techniques to accomplish this. Each example of a system with behaviour that is consistent with quantum gravity that emerged from this learning process only needed about 10 qubits, making it the ideal size for the Sycamore processor.
It was crucial to find such tiny examples because larger systems with hundreds of qubits would not have been able to run on the quantum platforms currently in use. After inserting a qubit into one system and then sending an energy shockwave across the processor, the team observed the same information emerge on the other 10-qubit quantum system.
Depending on whether a positive or negative shockwave was applied, the team measured how much quantum information was transferred between the two quantum systems. The researchers demonstrated that a causal path between the two quantum systems can be established if the wormhole is kept open for long enough by the negative energy shockwaves: the qubit inserted into one system does indeed appear in the other.
The team then used conventional computer calculations to confirm these and other properties. This is not the same as running the experiment itself: although the system can be simulated on a classical computer by manipulating classical bits, zeros and ones, as was done in this work, such a simulation cannot create the physical system.
Future quantum gravity experiments could be conducted using more advanced entangled systems and larger quantum computers because of this new research. This research does not replace direct observations of quantum gravity, such as those obtained through the Laser Interferometer Gravitational-wave Observatory’s detection of gravitational waves.
The Counter Ransomware Task Force (CRTF), which was formed to bring together Singapore Government agencies from various domains to strengthen Singapore’s counter-ransomware efforts, has issued its report.
Singapore’s efforts to promote a resilient and secure cyber environment, both domestically and internationally, to combat the rising ransomware threat are guided by the recommendations in the CRTF report.
According to David Koh, Commissioner of Cybersecurity, Chief Executive of CSA and Chairman of the CRTF, ransomware poses a threat to both businesses and individuals. Economically, socially, and even in terms of national security, it can be detrimental. Both internationally and across domains, ransomware is a problem.
“It requires us to collaborate and draw on our knowledge in a variety of fields, including cybersecurity, law enforcement, and financial supervision. It also necessitates that we work with like-minded international partners to identify a common problem and develop solutions,” David explains.
He exhorts businesses and individuals to contribute as well, strengthening the nation’s overall defence against the ransomware scourge.
Cybercriminals use malicious software known as ransomware. When ransomware infects a computer or network, it either locks the system or encrypts the data on it. For the release of the data, cybercriminals demand ransom money from their victims.
Vigilance and security software are advised to prevent ransomware infection. Following an infection, victims have three options: pay the ransom, attempt to remove the malware, or restart the device.
Extortion Trojans frequently employ the Remote Desktop Protocol, phishing emails, and software vulnerabilities as their attack vectors. Therefore, a ransomware attack can target both people and businesses.
The ransomware threat has significantly increased in scope and effect, and it is now a pressing issue for nations all over the world, including Singapore.
The fact that attackers operate internationally to elude justice makes it a global issue. Ransomware has created a criminal ecosystem that offers criminal services ranging from unauthorised access to targeted networks to money laundering services, all fed by illicit financial gains.
Singapore must approach the ransomware issue as a cross-border and cross-domain problem if it is to effectively combat the ransomware threat.
Other nations should adopt comparable domestic measures to coordinate their financial regulatory, law enforcement, and cybersecurity agencies to combat the ransomware issue and promote international cooperation.
The CRTF’s work culminated in three significant outcomes. First, it developed a comprehensive understanding of the ransomware kill chain, enabling government agencies to collaborate and create anti-ransomware solutions.
Second, it examined Singapore’s stance on paying ransom to cybercriminals. Third, for the government to effectively combat ransomware, the CRTF suggested the following policies, operational plans, and capabilities under four main headings:
Pillar 1: Enhance the security of potential targets (such as government institutions, critical infrastructure, and commercial organisations, especially small and medium-sized businesses) to make it more difficult for ransomware attackers to carry out successful attacks.
Pillar 2: Disrupt the ransomware business model to lower the reward for ransomware attacks.
Pillar 3: Support recovery so that ransomware attack victims do not feel pressured to pay the ransom, which feeds the ransomware industry.
Pillar 4: Forge a coordinated international strategy to combat ransomware by cooperating with international partners. Singapore should concentrate on and support efforts to promote international cooperation in three areas identified by the CRTF: law enforcement, anti-money laundering measures, and discouraging ransom payments.
The appropriate government agencies will take the recommendations of the CRTF under consideration for additional research and action.
An international team led by The Chinese University of Hong Kong (CUHK)’s Faculty of Medicine (CU Medicine) has successfully developed the world’s first artificial intelligence (AI) model that can detect Alzheimer’s disease solely through fundus photographs or images of the retina. The model is more than 80% accurate after validation.
Fundus photography is widely accessible, non-invasive and cost-effective. This means that an AI model incorporating fundus photography is expected to become an important tool for screening people at high risk of Alzheimer’s disease in the community. Details have been published in The Lancet Digital Health, part of The Lancet family of journals.
Limitations of current Alzheimer’s disease detection methods
In Hong Kong, 1 in 10 people aged 70 or above suffers from dementia, with more than half of those cases attributed to Alzheimer’s disease. This disease is associated with an excessive accumulation of abnormal amyloid plaque and neurofibrillary tangles in the brain, leading to the death of brain cells and resulting in progressive cognitive decline.
The Clinical Professional Consultant of the Division of Neurology in CU Medicine’s Department of Medicine and Therapeutics stated that memory complaints are common among middle-aged and elderly people, and are often considered a sign of Alzheimer’s disease.
It is sometimes difficult to make an accurate diagnosis of Alzheimer’s disease based on cognitive tests and structural brain imaging. However, methods to detect Alzheimer’s pathology, such as an amyloid-PET scan or testing of cerebrospinal fluid collected via lumbar puncture, are invasive and less accessible.
To address the current clinical gap, CU Medicine has led several medical centres and institutions from Singapore, the United Kingdom and the United States to successfully develop an AI model using state-of-the-art technologies which can detect Alzheimer’s disease using fundus photographs alone.
Studying disorders of the central nervous system via the retina
The S.H. Ho Professor of Ophthalmology and Visual Sciences and Chairman of CU Medicine’s Department of Ophthalmology and Visual Sciences explained that the retina is an extension of the brain in terms of embryology, anatomy and physiology. In the entire central nervous system, only the blood vessels and nerves in the retina allow direct visualisation and analysis.
Thus, it is widely considered a window through which disorders in the central nervous system can be studied. Through non-invasive fundus photography, a range of changes in the blood vessels and nerves of the retina that are associated with Alzheimer’s disease can be detected.
The team developed and validated their AI model using nearly 13,000 fundus photographs from 648 Alzheimer’s disease patients (including patients from the Prince of Wales Hospital) and 3,240 cognitively normal subjects. Upon validation, the model showed 84% accuracy, 93% sensitivity and 82% specificity in detecting Alzheimer’s disease. In the multi-ethnic, multi-country datasets, the AI model achieved accuracies ranging from 80% to 92%.
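The reported accuracy, sensitivity and specificity relate to a standard confusion-matrix calculation. The counts below are hypothetical, chosen only to show how such figures are derived; they are not the study’s data.

```python
def screening_metrics(tp, fn, tn, fp):
    """Compute standard screening metrics from confusion-matrix counts.

    tp: diseased subjects correctly flagged (true positives)
    fn: diseased subjects missed (false negatives)
    tn: healthy subjects correctly cleared (true negatives)
    fp: healthy subjects wrongly flagged (false positives)
    """
    sensitivity = tp / (tp + fn)            # true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical validation set: 100 Alzheimer's cases, 100 controls.
sens, spec, acc = screening_metrics(tp=93, fn=7, tn=82, fp=18)
print(f"sensitivity={sens:.0%} specificity={spec:.0%} accuracy={acc:.1%}")
# → sensitivity=93% specificity=82% accuracy=87.5%
```

High sensitivity matters most for a community screening tool, since missing a true case (a false negative) is the costlier error; specificity governs how many healthy people are referred for unnecessary follow-up.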
Accessible, non-invasive and cost-effective: the AI model using fundus photography helps detect Alzheimer’s cases in both the clinic and the community
A Professor of Medicine and Director of the Therese Pei Fong Chow Research Centre for Prevention of Dementia at CU Medicine stated that in addition to its accessibility and non-invasiveness, the accuracy of the new AI model is comparable to imaging tests such as magnetic resonance imaging (MRI).
It shows the potential to become not only a diagnostic test in clinics but also a screening tool for Alzheimer’s disease in community settings. Looking ahead, the team aims to validate its efficacy in identifying high-risk cases of the disease hidden in the community, so that various preventive treatments such as anti-amyloid drugs can be initiated early to slow down cognitive decline and brain damage.
The Associate Professor in the Department of Ophthalmology and Visual Sciences at CU Medicine said that in addition to applying novel AI technologies in the model, the team also tested it in different scenarios. Notably, their AI model retained a robust ability to differentiate between subjects with and without Alzheimer’s disease, even in the presence of concomitant eye diseases like macular degeneration and glaucoma which are common in city-dwellers and the older population.
Their results further support the hypothesis that the team’s AI analysis of fundus photographs is an excellent tool for detecting Alzheimer’s disease. To move this research towards clinical application, the team is developing an integrated, AI-based platform that combines information from both the blood vessels and nerves of the retina, captured by fundus photography and optical coherence tomography, for the detection of Alzheimer’s disease. Their findings should provide more evidence to move AI from code to the real world.
The Ministry of Information and Communications (MIC) announced it would roll out Internet advertising management measures at a conference in Hanoi earlier this week. Participants at the event discussed how advertising in cyberspace has become the norm. Domestic and foreign firms choose it because it is easier to access customers and it offers flexible costs and larger reach. However, the limited management of ads poses potential risks to the safety of brands, the Ministry has said.
According to a press release by MIC, ad agents affirmed that without the cooperation of cross-border platforms in modifying algorithms to filter and censor content, ad violations will remain rampant. The Ministry will penalise agents and brands that cooperate with platforms that do not fall in line with MIC regulations. On the other hand, the Ministry will support ads on domestic and foreign digital platforms that comply with domestic laws, MIC’s Deputy Minister, Nguyen Thanh Lam, noted. This will protect brands and build a healthy, safe, and fair ad business environment.
The Ministry will also increase inspection and clampdown on violations of Internet ads activities, he said. Cross-border ad firms that fail to comply with Vietnam’s laws will not be allowed to operate in the country. MIC has also generated a Whitelist consisting of licensed e-newspapers, magazines, general information websites, and social media. Other websites, registered accounts, and information channels are also in the pipeline for the list, the release said. The list will be publicised on the portals of the Ministry and Authority of Broadcasting and Electronic Information. Ad service providers, agents, and brands were also urged to use the list for their work.
Nearly 80% of the population in Vietnam are digital consumers, as OpenGov Asia reported earlier in October. Over the past year, the average contribution of e-commerce to total retail has continued to grow at 15%, higher than the growth in India (10%) and China (4%), with an online-to-total retail share of 6%. Now that the world is in the post-pandemic stage, regional consumers are prioritising an integrated shopping experience, combining online and in-person services. During the ‘discovery’ phase of their shopping, 84% of Vietnamese shoppers use the Internet to browse and find items. This is a period when they use more platforms than ever before, with e-commerce platforms dominating at 51% of online spending.
At the same time, social networking sites account for nearly half of online discoveries, including images (16%), social media videos (22%), and related tools such as messaging (9%). These tools were paramount channels for 44% of survey respondents. Consumers’ openness to interaction and experimentation has also led to behavioural changes, with 64% of respondents saying they have interacted with a business account in the past year. As customers seek more engagement, the content creation economy is able to grow exponentially.
In the context of digital consumption, Vietnamese users switch brands more often and use more platforms in search of better value, with 22% of online orders made across various e-commerce platforms. The number of online platforms Vietnamese consumers use has doubled from 8 in 2021 to 16 in 2022. As Internet usage grows, it is therefore important to put proper ad regulations in place.
The Indonesian government disclosed four potential uses of Big Data and AI to improve its e-government programmes. These two technologies, it believes, have the potential to support disaster identification and preventive action, prevent illegal activities, thwart cyber-attacks, and increase workforce effectiveness.
The Director General of Informatics Applications, Semuel A. Pangerapan, explained several scenarios for Big Data. According to him, the government can use Big Data to improve critical event management and the quality of the response by identifying problem points through Big Data Analytics. For example, agencies can be better prepared to prevent and mitigate natural disasters such as droughts, epidemics or large-scale accidents.
In addition, Big Data can also enhance the government’s ability to prevent money laundering and fraud through better surveillance to detect such illegal activities.
Furthermore, Big Data can significantly reduce the likelihood of cyber-attacks. Such attacks can originate from external parties, data leaks, or internal actors for a variety of reasons; analysing patterns and unusual activity can help prevent or manage these incidents.
Big Data and analytics can contribute to workforce effectiveness by increasing monitoring. In addition, it can be used for policy design, decision-making and gaining insights.
Semuel stressed the importance of analysing data after it has been collected in the right fashion: data only provides benefits if it is processed correctly. “In its implementation, AI helps analyse existing Big Data, providing data understanding or insight to help make decisions,” he explained.
Another advantage of AI is the ability to speed up new implementation services and corrections in real-time. At the evaluation stage, AI can also provide suggestions for adjustments and improvements to subsequent policies.
Currently, the government encourages improving the quality of Big Data and AI innovation through the development of e-government. The Indonesian government is also open to working with third parties to accelerate Big Data and AI use.
E-government has made progress in recent years and received appreciation from the United Nations in 2020. The UN noted that Indonesia’s e-government development index rose to rank 88, up from rank 107 in 2018. Indonesia’s e-participation index has also climbed from rank 92 in 2018 to 57 in 2022.
“The two rankings show an increase in the quality of Indonesia’s e-government and the level of community activity in using e-government services,” said Semuel.
However, the government has faced challenges in implementing these two technologies. Overlapping and duplicated data are among the main problems. “Regulatory obstacles in the procurement of government Big Data infrastructure also need to be overcome. Then compliance with international standards for the national Big Data ecosystem is also still the government’s homework.”
To optimise AI use, Semuel emphasised the need for a skilled workforce, regulations governing the ethics of using AI, infrastructure, and industrial and public sector adoption of AI innovations.
The government is implementing several solutions to overcome challenges. First, they have provided suitable facilities in the form of National Data Centres (NDCs) in four separate locations. The NDCs will accommodate Government Cloud and contain national data across sectors.
Optimising data centre utilisation needs to be supported by staff with the right expertise. For this reason, the government is providing digital skills training on AI and Big Data through the Digital Talent Scholarship (DTS) and Digital Leadership Academy (DLA) programmes.
Apart from facilities and upskilling, Indonesia is looking to develop a business ecosystem that utilises AI and Big Data. Support for this comes from the National Movement of 1000 Digital Startups, Startup Studio Indonesia (SSI) and HUB.ID.