
EXCLUSIVE – Artificial intelligence in government, education and healthcare – Current landscape and future potential


The field of AI has reached an inflection point today, where it is on the cusp of revolutionising areas as diverse as security, finance, transport, healthcare and government service delivery. Availability of massive volumes of data, relatively inexpensive computational capabilities and improved training techniques, such as deep learning, have led to significant leaps in AI capabilities and will only continue to do so for the foreseeable future.

The pace is accelerating and governments need to figure out how to deal with this era of AI 2.0, where AI is becoming all-pervasive. If they want to unlock the potential of the data being generated at an ever-increasing velocity, government departments need AI at their fingertips, in the here and now.

On September 14, senior executives from a wide range of key public sector agencies in Singapore and institutes of higher learning gathered for a vibrant, insightful discussion on the next stage of artificial intelligence.

Mohit Sagar, Editor-in-Chief of OpenGov Asia, kicked off the discussion by talking about the varying levels of AI adoption across the Asia-Pacific region.

Marc Sultzbaugh (above), Senior VP, Worldwide Sales, Mellanox Technologies spoke about the use of AI in Formula 1, from the design stage to analysing race tactics and strategy, to analysing weather conditions on the day of the race and making in-race decisions. He also spoke about the trend of providing more AI capabilities on-premise, and not just through public cloud infrastructure. He highlighted three factors driving AI development: improved price-performance of processing and storage networking, the availability of more and higher-quality data, and open-source software-driven infrastructure promoted by major hyperscale users.

The AI landscape – past and present

Professor Zhang Chengqi (below, standing), Distinguished Professor of Information Technology at the University of Technology Sydney (UTS), Australia, also talked about big data as one of the primary factors behind the current renaissance in AI. The other two factors he mentioned were cloud computing and deep learning. Data availability and computing speeds lie behind recent headline-grabbing applications, from AlphaGo to facial recognition and language understanding. AI is very much present in the real world today, not just in labs.

Prof. Zhang presented a brief history of AI, demonstrating that AI technologies have not simply come out of the blue in the last few years. The journey started with a grand vision of emulating human intelligence. The Turing Test, proposed by Alan Turing in 1950, sought to test through interviews a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Around 1960, the General Problem Solver was created, which could be applied to "well-defined" problems such as proving theorems in logic or geometry, word puzzles and chess. The journey then passed through expert systems (intelligent computer programs using knowledge and inference procedures to solve problems difficult enough to require significant human expertise within a particular domain) in the 1980s and intelligent agents in the 1990s, through to data mining and machine learning in recent years.

Prof. Zhang presented a couple of case studies of AI application in the arena of public policy. One was finding the optimal setting for budget expenditure on prevention for the Department of Health in Australia. If the government spends more on frequent health testing, it can avoid far bigger expenditure in the future through earlier detection of health issues; beyond a point, however, the extra testing becomes excessive and unnecessary, so there is an optimum to be found. This is an example of AI assisting in a policy decision. AI cannot create a policy framework; that still has to be done by humans.
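To make the screening trade-off concrete, here is a minimal, hypothetical sketch in Python. The cost figures, incidence rate and saturating detection curve are all invented for illustration; the actual model used by the Department of Health was not described at the event.

```python
# Illustrative toy model of the screening-frequency trade-off.
# All numbers below are invented for illustration.

import numpy as np

population = 100_000
cost_per_test = 50.0          # hypothetical cost of one screening test
treatment_cost_late = 40_000  # hypothetical cost of a late-detected case
treatment_cost_early = 8_000  # hypothetical cost if caught by screening
incidence = 0.002             # hypothetical annual incidence rate

def expected_annual_cost(tests_per_year: float) -> float:
    """Total cost = screening spend + expected treatment spend."""
    screening = population * tests_per_year * cost_per_test
    # Probability a case is caught early rises with test frequency,
    # with diminishing returns (an assumption of this sketch).
    p_early = 1 - np.exp(-0.8 * tests_per_year)
    cases = population * incidence
    treatment = cases * (p_early * treatment_cost_early
                         + (1 - p_early) * treatment_cost_late)
    return screening + treatment

# Simple grid search over screening frequencies to find the optimum.
frequencies = np.linspace(0.1, 6, 60)
costs = [expected_annual_cost(f) for f in frequencies]
best = frequencies[int(np.argmin(costs))]
print(f"Cost-minimising frequency: {best:.2f} tests/person/year")
```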

The second case Prof. Zhang talked about was developing predictive capabilities for the prevention of cyberbullying. Families submit complaints, which are investigated by government agencies and acted upon. This data from the government is used to train an AI system (with privacy protection measures in place), which then provides early warnings to families.
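A minimal sketch of this kind of early-warning classifier, assuming (hypothetically) that complaint records carry a free-text description and a label for whether the case later escalated. The example records and model choice are illustrative only.

```python
# Toy early-warning classifier trained on hypothetical complaint records.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical (de-identified) complaint texts and outcomes.
complaints = [
    "repeated threatening messages sent to student after school",
    "one-off insult in a group chat, apology followed",
    "classmates sharing embarrassing photos without consent",
    "disagreement over a game, resolved the same day",
]
escalated = [1, 0, 1, 0]  # 1 = case escalated and required intervention

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(complaints, escalated)

# Score a new complaint; a high probability could trigger an early warning.
new_case = ["anonymous accounts sending daily abusive messages"]
print(model.predict_proba(new_case)[0][1])
```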

Assistant Professor Erik Cambria from the School of Computer Science and Engineering & College of Engineering at Nanyang Technological University (NTU) went back to the basics of what AI is and whether the initial vision of achieving human intelligence has been realised. His answer was no, because we do not yet understand how human intelligence works. Most AI applications today have a veneer of human intelligence, but it is just that, a veneer. He said that most of what has been developed to date are expert systems, rather than intelligent systems. Today's AI still cannot compete with a four-year-old child.

Asst. Prof. Cambria defined AI 1.0 as logic-based, symbolic (human-readable) AI, which involved creating a model of reality and ad-hoc rules in the form of if-then rules, search trees or ontologies (a formal naming and definition of the types, properties and interrelationships of entities, used to limit complexity and to organise information). It didn't really work well because it is almost impossible to anticipate every possibility an environment can present.
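A toy illustration of this rule-based style: hand-written if-then rules applied by forward chaining over a small working memory. The rules and facts are invented for illustration.

```python
# AI 1.0 in miniature: explicit, human-readable if-then rules.
rules = [
    # (condition over the working memory, conclusion to assert)
    (lambda wm: wm["temperature_c"] > 38, "has_fever"),
    (lambda wm: {"has_fever", "cough"} <= wm["known"], "suspect_flu"),
]

def forward_chain(wm: dict) -> dict:
    """Repeatedly apply the rules until no new conclusion fires."""
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if conclusion not in wm["known"] and condition(wm):
                wm["known"].add(conclusion)
                changed = True
    return wm

print(forward_chain({"temperature_c": 39.2, "known": {"cough"}}))
```

The brittleness Asst. Prof. Cambria describes is visible even here: every situation the system should handle needs its own hand-written rule.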

AI 2.0, on the other hand, is mostly based on machine learning. The previous approach was top-down, defining rules and models of reality; knowledge and planning were modelled in data structures that made sense to the programmers who built them. Now the approach is bottom-up and data-driven. You don't need experts to do feature extraction: you can feed in massive volumes of data and the machine automatically learns and does the classification. Knowledge and planning emerge from learning models inspired by neurons. So, the focus has shifted from algorithm availability to data availability.
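By way of contrast, a minimal data-driven sketch: no hand-written rules, just labelled examples fed to a small neural network that learns its own internal features. It uses scikit-learn's bundled digits dataset, so the snippet is self-contained.

```python
# AI 2.0 in miniature: raw data in, learned classification out.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # 8x8 pixel images, digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# No expert feature extraction: raw pixels go in, and the hidden
# layer learns its own internal representation.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```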

In Asst. Prof. Cambria's research area of natural language processing, there are problems which need something more than statistical analysis. He outlined three broad challenges with deep learning techniques in general: dependency on data (they require a lot of training data and are domain-dependent), black-box algorithms (the reasoning process is unintelligible and non-transparent) and consistency (a small change in parameters can change the results significantly).
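The consistency point can be demonstrated in a few lines: retraining the same small network with only the random initialisation changed can flip individual predictions, even when overall accuracy stays similar. The dataset here is synthetic.

```python
# Sketch of the consistency issue: same data, same architecture,
# different random seeds, diverging individual predictions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

preds = []
for seed in (0, 1, 2):
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                        random_state=seed)
    clf.fit(X_train, y_train)
    preds.append(clf.predict(X_test))

# Count test points where the three retrained models disagree.
differs = ((preds[0] != preds[1]) | (preds[0] != preds[2])).sum()
print(f"{differs} of {len(X_test)} test predictions change with the seed")
```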

So, what may be required is a hybrid approach, with symbolic and sub-symbolic AI working together: the top-down, theory-driven approach helping gain transparency, and data-driven deep learning enabling the automatic learning of rules.
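A toy sketch of the hybrid flavour: a data-driven classifier produces a score, and a transparent symbolic layer decides what to do with it. The rule and thresholds are invented for illustration.

```python
# Hybrid sketch: sub-symbolic score, symbolic (human-readable) decision.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

def hybrid_predict(x):
    """Learned probability plus a transparent symbolic layer on top."""
    p = model.predict_proba([x])[0][1]
    # Symbolic rule (invented): near-threshold scores are escalated to
    # a human, giving an auditable decision path for borderline cases.
    if 0.4 < p < 0.6:
        return "refer_to_human", p
    return ("positive" if p >= 0.6 else "negative"), p

print(hybrid_predict(X[0]))
```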

Discussion

Current and planned usage of AI

In response to a question on current use of AI by their organisations, around 33% of delegates responded that they have made their first forays into AI and introduced prototypes; 29% are in an evaluation stage; for 21%, AI is an integral part of their technology landscape; while the remainder are currently not using or evaluating any form of AI.

When asked about timeframes, around 48% responded that they are currently working to intensify the use of AI, while 71% see the primary benefit of AI in the development of new products, services and business models.

Above photo (L-R): Dr. Leong Mun Kew, Deputy Director, Institute of Systems Science at the National University of Singapore (NUS); Assistant Professor Erik Cambria, School of Computer Science and Engineering & College of Engineering, NTU

Dr. Leong Mun Kew, Deputy Director, Institute of Systems Science at the National University of Singapore (NUS), said that universities are obviously conducting research into AI. But now they are also trying to use AI products and services for internal operations. Initial forays have been made into the area, such as machine-to-machine communication and robotic process automation. Now, a chatbot is being built with a bit of deep learning.

Al Davis, Director of Computing Systems at the Computational Resource Centre, Agency for Science, Technology and Research (A*STAR) said that they are evaluating the use of AI. A*STAR is a huge research organisation, and some groups within it are heavily involved in the use of AI technology. Right now, machine learning and deep learning are taking advantage of the computational capabilities of HPC (high-performance computing).

Above photo (L-R): Joshua Au, Head of Data Centre, A*STAR; Al Davis, Director of Computing Systems at the Computational Resource Centre, A*STAR

Mr. Davis added that he agreed with Asst. Prof. Cambria that we still don't really understand how the brain works, and that we don't have a good definition of what AI is. But AI will play an important role in areas such as data science: we are generating so much data that there is no way for humans to process it all. HPC-driven modelling and simulation is now adopting AI techniques.

Dr. Lim Lai Cheng, Executive Director, SMU Academy, Singapore Management University said that they started by commissioning a service chatbot for all their courses. But there is still a gap between expectations and reality. It is not good enough to be deployed yet.

Philip Heah, Senior Director (Technology & Infrastructure Group), Infocomm Media Development Authority (IMDA) talked about exploring AI for data centre cooling. IMDA has been working with the National Supercomputing Centre (NSCC) for this. Data centres form a key part of IT infrastructure and Singapore’s tropical climate poses big efficiency challenges.

Sensors were placed in NSCC's data centres and unsupervised learning techniques (a type of machine learning used to draw inferences or find hidden patterns in datasets consisting of input data without labelled responses) were employed on the data. Though the target was cooling, other inefficiencies were also detected, and actions were taken not just to reduce power consumption but also to improve efficiency in asset deployment.
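A minimal sketch of this style of unsupervised anomaly detection on unlabelled sensor readings. The sensor names, values and the choice of Isolation Forest are assumptions for illustration; the NSCC deployment was not described at this level of detail.

```python
# Unsupervised anomaly detection over hypothetical data-centre telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical readings: [intake temp (C), power draw (kW), fan speed (%)]
normal = rng.normal([24.0, 5.0, 60.0], [1.0, 0.5, 5.0], size=(500, 3))
anomalous = np.array([[31.0, 7.5, 95.0]])  # an overheating rack
readings = np.vstack([normal, anomalous])

# No labels are needed: the model learns what "typical" looks like
# and flags readings that deviate from it.
detector = IsolationForest(contamination=0.002, random_state=0)
labels = detector.fit_predict(readings)  # -1 marks outliers

print("flagged rows:", np.where(labels == -1)[0])
```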

Mr. Heah explained that this is still not full-scale use of AI, but rather batch-based. The ultimate goal would be to use AI on the fly, on a continuous, real-time basis. Baby steps have been taken but they are yielding positive results.

The Ministry of Defence (MINDEF) is a big organisation, a kind of microcosm of society, and there are variations in the level of AI use, said Eugene Chang, Director, Plans & Collaboration. The Ministry was using AI long before it was in vogue, with decision support systems to aid commanders in making decisions. The Risk Assessment and Horizon Scanning (RAHS) system was developed to anticipate and analyse strategic issues with significant possible impact on Singapore.

But now there are many more areas where AI can be used and MINDEF is evaluating the use in those areas.

Paul Gagnon, Director, E-Learning, IT Systems and Services, Nanyang Technological University – Lee Kong Chian School of Medicine, spoke about using a cognitive tutor to support student learning.

Tam Kok Yan, Deputy Director-Enterprise Architecture from MINDEF added that the initiatives have to be explained to the senior management and the ROI has to be taken into consideration. Another important factor to be considered in the development of new technologies, including in AI, is their impact on organisational structure, on processes and policies.

Above photo (L-R): Ni De En, Director for Services & Digital Economy (SDE), National Research Foundation, PMO; Lee Soo Hin, Deputy Director Business Architect, Ministry of Defence; Greg Malewski, Principal Enterprise Architect of the National Architecture Office at IHiS

Greg Malewski, Principal Enterprise Architect of the National Architecture Office at Integrated Health Information Systems Pte Ltd (IHiS) talked about looking at AI from the perspective of being the IT provider for the entire public healthcare system in Singapore. IHiS has to take all health clusters along on the IT journey. And there are numerous logistical, political and regulatory issues to be considered.

Low Lar Wee, Director & Head, IT Application Division, Monetary Authority of Singapore (MAS) said that MAS is exploring the use of AI to detect activities such as insider trading and syndicated trading. But it is not enough to detect a possible violation of regulations based on a model; a case cannot be proven unless they can understand how the system arrived at that conclusion. This poses a big challenge for any subsequent legal action.
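One commonly suggested way to address this explainability concern is to use an inherently interpretable model whose full decision path can be shown to investigators. The sketch below is illustrative only: the features and data are hypothetical, and this is not MAS's actual system.

```python
# Interpretable surveillance sketch: a shallow decision tree whose rules
# can be printed and audited. Features and data are hypothetical.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in for engineered per-trade features, e.g. order size vs. average,
# timing relative to announcements, counterparty overlap.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["size_ratio", "pre_announcement_mins",
                 "counterparty_overlap", "price_move_after"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The full rule set is human-readable, which supports legal follow-up
# in a way a black-box score cannot.
print(export_text(tree, feature_names=feature_names))
```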

Hurdles

Expertise and complexity were identified as the two biggest hurdles in development and adoption of AI solutions, accounting for 36% and 32% of delegate responses respectively.

Mr. Davis from A*STAR's CRC said that the complexity of the problems being addressed now goes beyond the knowledge and expertise of a single person; they usually involve collective groups. Personalised medicine is a great example: many types of scientific, clinical and social data need to be integrated.

Mr. Low from MAS added that it is about the complexity of the whole ecosystem.

Prof. Zhang commented on the expertise aspect that it is important to identify the problems and then find the relevant expertise. Expertise can come from many different disciplines. So, recruiting strategies need to be modified.

Around 14% selected data quality as their most important hurdle (though in a separate question, nearly 63% of delegates at the session answered that they need to improve data quality before they can have a meaningful AI initiative).

Above photo (L-R): Wong Hong Kai, Director, Policy & Governance Directorate, Smart Nation and Digital Government Office; Low Lar Wee, Director & Head IT Application Division, Monetary Authority of Singapore; Charlie Foo, Vice President & GM, Asia Pacific & Japan, Mellanox Technologies

Wong Hong Kai, Director, Policy & Governance Directorate, Smart Nation and Digital Government Office (SNDGO, which sits under the PMO and is responsible for driving Singapore's Smart Nation strategies) said that different agencies have different levels of digitisation. Analytics is the easy bit. It is not just that the digital data is unclean; some data is still in the form of hard copies. So, all the data has to be digitised, cleaned and labelled properly for it to be useful for research and policy analysis. The other issue is designating 'single sources of truth' for certain areas, with other agencies using that agency's data as the reference.

Mr. Heah agreed and said that the effort to clean up years of data is enormous.

Mr. Malewski said that the National Electronic Health Record has had a significant positive impact on workflows and data quality. It wasn’t done as preparation for AI, but it would help.

Dr. Leong pointed out that all five options in the polling question (lack of expertise, complexity, legacy processes and systems, poor data quality and creating a secured transitioning process and environment) are organisational issues, equally applicable to the adoption of cloud, big data or any other new technology trend for that matter.

Translating AI research from lab to market

Tian Wei Qi (below), Associate Director, Investment Group – Technology, Media and Telecom at Temasek, asked about gaps in translating the amazing work happening in the laboratories into commercially viable products.

Asst. Prof. Cambria pointed out a fundamental difference in how labs and companies approach AI. Companies are usually more interested in what AI can do for them and not so much in how it works.

For example, if you are just interested in getting a good tool that does machine translation, you will apply deep learning techniques to map directly from the syntactic structure of a sentence in one language to another. A research lab, however, would be interested in first reaching the understanding level, as a human brain would, and then going from that encoding of the meaning back to a translation in the target language.
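As a sketch of the "tool" approach he describes: a pretrained sequence-to-sequence model maps source text straight to target text, with no explicit intermediate representation of meaning. It assumes the Hugging Face transformers library and the public Helsinki-NLP/opus-mt-en-fr model are available.

```python
# Direct, tool-style machine translation with a pretrained model.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
print(translator("The meeting discussed the future of AI in government."))
```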

He talked about the Singapore government’s S$150 million AI.SG initiative, which he said would help bridge this gap as it is tackling both fundamental and applied research in AI.

The black-box nature of deep learning techniques was then brought up again. The lack of transparency and absence of understanding lead to a trust deficit. The people and organisations who actually consume these technologies need to be able to trust them. This is a challenge which needs to be overcome before we can see increased uptake of AI.
