“I love deadlines. I love the whooshing noise they make as they go by.” – Douglas Adams
Most humans would detect the sarcasm in the quote above, even if it takes them a moment or two. But imagine making a computer understand the sentiment expressed in that sentence.
That is the sort of challenge Dr. Erik Cambria (Assistant Professor at the School of Computer Science and Engineering at Nanyang Technological University) and his team at SenticNet are trying to tackle. They are
dealing with the fundamental problems of natural language processing (NLP) for
sentiment analysis. Natural language, which is the language we use for
communicating with each other, is rather different from the way we communicate
with computers. Natural language is ambiguous, complex, chaotic. Constructed
languages, such as programming languages, adhere to strict rules and logic.
Wikipedia defines sentiment analysis as the use of NLP, text
analysis, computational linguistics, and biometrics to systematically identify,
extract, quantify, and study affective states and subjective information.
Applications include analysing the positive, negative, and neutral sentiments in online customer reviews, surveys, feedback, and social media postings, which has great utility in a range of fields, from marketing to finance and healthcare.
But sentiment analysis is much more complicated than it seems. For instance, if a statement is sarcastic, like the one above, something that looks positive is actually negative (love is hate). Understanding this polarity (whether a sentiment is
positive or negative) is a core aspect of sentiment analysis. It involves the
use of deep learning, psychology, and also linguistics, demonstrating the
multi-disciplinary nature of the field.
Machine learning helps detect some patterns, such as the big shift in polarity that usually occurs in a sarcastic comment (positive followed by negative); linguistics provides insights on sentence structure; and psychology is important because whether a statement is sarcastic or not can depend on the personality of the speaker. As another example, saying “This phone is expensive but nice” is not the same as
saying “This phone is nice but expensive.” In fact, the sentiments expressed
are polar opposites, though the words used are the same. Here, the
understanding of sentence structure based on linguistics is key. When the ‘but’ conjunction is used, positive followed by negative yields negative, but negative followed by positive yields positive: the clause after ‘but’ dominates the overall polarity.
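As a rough illustration, such a ‘but’ rule might be sketched in code as follows; the toy lexicon, scores, and function names are assumptions for the sketch, not SenticNet’s actual sentic patterns.

```python
# A minimal sketch of a sentic-pattern-style rule for the "but" conjunction.
LEXICON = {"nice": 1.0, "expensive": -0.5, "ugly": -1.0, "love": 0.8}

def clause_polarity(clause: str) -> float:
    """Average the lexicon scores of the words in a clause."""
    scores = [LEXICON[w] for w in clause.lower().split() if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

def sentence_polarity(sentence: str) -> float:
    """If 'but' is present, the clause after it dominates the polarity."""
    if " but " in sentence.lower():
        _, after = sentence.lower().split(" but ", 1)
        return clause_polarity(after)
    return clause_polarity(sentence)

print(sentence_polarity("This phone is expensive but nice"))  # positive
print(sentence_polarity("This phone is nice but expensive"))  # negative
```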
To understand the approach of SenticNet to dealing with such challenges and improving sentiment analysis, we need to look at its origins.
Origins from a commonsense knowledge base
SenticNet started as a project at the MIT Media Lab in 2009.
“There was this big knowledge base of commonsense and I thought, why don’t we use it for sentiment analysis,” Dr. Cambria said. “Back then, sentiment analysis was not very popular but in the past few years, its popularity has increased dramatically, because of the research challenges and also because of the business opportunities. For instance, so many companies want to know what their customers like about their products.”
In AI research, commonsense knowledge is the collection of facts and information that an ordinary person is expected to know: facts so obvious, so trivial, that no one would think of mentioning them explicitly, like the fact that a chair is for sitting on, or that we drink water to quench our thirst.
Natural language is typically used only to communicate knowledge that we do not already share through common experience. The challenge is to get this general knowledge that most people possess represented in a way that makes it available to AI programs.
A knowledge base here refers to a semantic network with millions of nodes, connected by links that encode pieces of commonsense information. For example, beer and drink could be two nodes, and the link connecting the two would represent the taken-for-granted information that “beer is a drink”.
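To make the idea concrete, here is a minimal sketch of such a semantic network as (subject, relation, object) edges; the relation names follow ConceptNet’s style, but the class and data are illustrative assumptions rather than SenticNet’s actual schema.

```python
# A toy semantic network: nodes are concept strings, edges carry a relation.
from collections import defaultdict

class SemanticNetwork:
    def __init__(self):
        # adjacency: subject -> list of (relation, object) edges
        self.edges = defaultdict(list)

    def add(self, subject: str, relation: str, obj: str) -> None:
        self.edges[subject].append((relation, obj))

    def related(self, subject: str):
        return self.edges[subject]

net = SemanticNetwork()
net.add("beer", "IsA", "drink")          # "beer is a drink"
net.add("chair", "UsedFor", "sitting")   # "a chair is for sitting"
print(net.related("beer"))  # [('IsA', 'drink')]
```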
The MIT Media Lab has a portal called the Open Mind Common Sense
(OMCS), which collects pieces of knowledge from volunteers on the Internet by
enabling them to enter commonsense into the system with no special training or
knowledge of computer science.
Volunteers on the web would answer questions like “what is a bed used for?”, “what is a beer for?”, “where do you usually find a knife?”. Only those answers which occurred more than a few times would be inserted into the semantic graph.
“If many people said that the bed is for sleeping, you take that as a good piece of commonsense,” Dr. Cambria said.
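The thresholding step described above might be sketched like this, reusing the toy SemanticNetwork from the previous snippet; the cut-off value and the answers are illustrative assumptions.

```python
# Only answers that recur often enough become edges in the graph.
from collections import Counter

MIN_OCCURRENCES = 3  # assumed cut-off for "more than a few times"

# volunteer answers to "what is a bed used for?"
answers = Counter(["sleeping", "sleeping", "sleeping", "jumping"])

for answer, count in answers.items():
    if count >= MIN_OCCURRENCES:
        net.add("bed", "UsedFor", answer)  # `net` from the sketch above

print(net.related("bed"))  # [('UsedFor', 'sleeping')]
```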
ConceptNet is a semantic
network based on the information in the OMCS database. SenticNet was built
based on ConceptNet, focusing on concepts that are either positive or negative,
because the eventual objective of SenticNet is to conduct sentiment analysis.
“We started as just a knowledge base, then from there we went on into the fundamental
problems of natural language processing for sentiment analysis. While
before we were just focusing on knowledge representation, later we got more and
more interested in commonsense reasoning and linguistics. We went from having
just SenticNet to having Sentic patterns and other
reasoning techniques like AffectiveSpace and things that altogether allow us to
do sentiment analysis in a human-like way,” Dr. Cambria said, describing the evolution of SenticNet.
Machine learning is not enough
Dr. Cambria said, “We try to take inspiration from how the human brain actually understands
things, which is a very different approach from pure machine learning.”
The difference between Sentic computing and other techniques is that Sentic
computing is a hybrid approach that uses machine learning alongside
knowledge representation, reasoning and linguistics.
With recent developments in machine learning methods like deep neural networks, most researchers are pinning their hopes on feeding massive volumes of data to algorithms. Dr. Cambria, however, believes that commonsense is key to improving AI: simply relying on statistics, probabilities, and co-occurrence frequencies is not enough.
He went on to highlight three big issues with machine
learning. The first is ‘Dependency’, as machine learning requires a lot of
training data and is domain-dependent.
The second issue is ‘Consistency’, as changes or tweaks in
the learning model may lead to different results. The third is ‘Transparency’,
that is, the way machine learning performs decision-making is a black box. We do not know why the algorithms arrived at the conclusions they did. Ironically, this very opacity is part of what makes machine learning a powerful tool. Researchers do not need to understand the data: they can just feed it to a neural network or whatever learning algorithm they are using, which learns the features automatically and then makes decisions. But we never know why the algorithm makes those decisions. This lack of transparency can be a major problem if we are using AI to perform activities that involve ethics, like, say, selecting candidates for a job opening.
In the context of NLP, Dr. Cambria said that these issues
are crucial because, unlike in other fields, they prevent AI from achieving
human-like performance. AI researchers need to bridge the gap between
statistical NLP and many other disciplines that are necessary for understanding
human language, such as linguistics, commonsense reasoning, and affective
computing (affective computing is the study and development of systems and devices that can recognise, interpret, and process human affects or emotions).
Coupling top-down and bottom-up approaches
For the reasons discussed above, Dr. Cambria advocates a combination of symbolic and sub-symbolic AI. Symbolic models, such as semantic networks, represent a top-down approach to encoding meaning. Sub-symbolic methods, such as neural networks, represent a bottom-up approach to inferring syntactic patterns from data (syntax is the set of rules, principles, and processes that govern word order and sentence structure). The top-down approach helps gain transparency, while data-driven deep learning enables the automatic detection of patterns from data.
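In spirit, such a hybrid might look like the sketch below, where a transparent symbolic lookup is tried first and a statistical model is the fallback; the lexicon and the `ml_model` stub are hypothetical stand-ins, not SenticNet’s actual pipeline.

```python
# Top-down symbolic knowledge first, bottom-up statistical model as fallback.
CONCEPT_POLARITY = {"sad smile": 0.3, "pretty ugly": -0.8}  # symbolic knowledge

def ml_model(text: str) -> float:
    """Stand-in for a trained neural classifier (sub-symbolic fallback)."""
    return 0.0  # a real model would return a learned polarity score

def hybrid_polarity(text: str) -> float:
    # Top-down: if a known concept matches, use the transparent symbolic score.
    for concept, score in CONCEPT_POLARITY.items():
        if concept in text.lower():
            return score
    # Bottom-up: otherwise fall back to the statistical model.
    return ml_model(text)

print(hybrid_polarity("That was a pretty ugly ending"))  # -0.8, symbolic path
```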
In a paper titled “SenticNet 5: Discovering Conceptual
Primitives for Sentiment Analysis by Means of Context Embeddings”, Dr.
Cambria along with his co-authors explores how the two approaches might
complement each other. The paper talks about the use of the bag-of-concepts
model (as opposed to bag-of-words in which a text is represented as a bag or
set of its constituent words) for sentiment analysis. The bag-of-concepts model has the advantage over bag-of-words of being able to deal with multiword expressions like ‘pretty ugly’ or ‘sad smile’, which the latter model would split up into single words, losing their polarity, i.e., their positive or negative meaning (‘pretty’ being treated as a positive adjective rather than an intensifying adverb). It also avoids the blind use of keywords and word co-occurrence counts.
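A toy comparison makes the difference visible; the tiny lexicons and polarity scores below are illustrative assumptions.

```python
# Bag-of-words splits multiword expressions; bag-of-concepts keeps them whole.
WORD_POLARITY = {"pretty": 0.6, "ugly": -0.8, "sad": -0.5, "smile": 0.7}
CONCEPT_POLARITY = {"pretty ugly": -0.9, "sad smile": 0.2}

def bag_of_words(text):
    # Every expression becomes single words: "pretty ugly" looks mixed.
    return [WORD_POLARITY[w] for w in text.lower().split() if w in WORD_POLARITY]

def bag_of_concepts(text):
    # Match whole concepts first, so multiword expressions keep their polarity.
    found = [s for c, s in CONCEPT_POLARITY.items() if c in text.lower()]
    return found or bag_of_words(text)

print(bag_of_words("pretty ugly"))     # [0.6, -0.8] -- contradictory signals
print(bag_of_concepts("pretty ugly"))  # [-0.9]      -- one negative concept
```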
The problem, however, is that the bag-of-concepts model cannot achieve comprehensive coverage of meaningful concepts, i.e., a full list of multiword expressions that actually make sense. Models could be used to extract concepts from raw data, but such approaches are prone to errors due to the richness and ambiguity of natural language. The paper’s proposed solution is based on the idea that there is a finite set of mental primitives for affect-bearing concepts and a finite set of principles of mental combination governing their interaction.
The paper goes on to propose the generalisation of concepts
with related meaning, such as ‘munch toast’ and ‘slurp noodles’, into the
conceptual primitive ‘EAT FOOD’. Sub-symbolic AI could now be used to automatically
discover the conceptual primitives that can better generalise SenticNet’s knowledge base.
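One way to picture the discovery step is to embed concepts in a vector space and assign each to its nearest primitive. The vectors and primitives below are made up for illustration; the paper’s actual method learns context embeddings rather than using fixed toy vectors.

```python
# Group related concepts into a primitive via embedding similarity.
import numpy as np

EMBEDDINGS = {                       # toy 3-d context embeddings
    "munch toast":   np.array([0.9, 0.1, 0.0]),
    "slurp noodles": np.array([0.8, 0.2, 0.1]),
    "watch movie":   np.array([0.0, 0.9, 0.3]),
}
PRIMITIVES = {"EAT_FOOD":   np.array([0.85, 0.15, 0.05]),
              "WATCH_SHOW": np.array([0.05, 0.9, 0.25])}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def to_primitive(concept):
    # Assign each concept to the nearest primitive in embedding space.
    return max(PRIMITIVES, key=lambda p: cosine(EMBEDDINGS[concept], PRIMITIVES[p]))

print(to_primitive("munch toast"))    # EAT_FOOD
print(to_primitive("slurp noodles"))  # EAT_FOOD
```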
This approach would also help in tackling the symbol grounding problem. Our understanding of language is grounded in the physical world, in sensations, in memory. A computer does not learn meaning like that. The meaning of a word on a page or computer screen is ungrounded, and looking it up in a dictionary would not help.
Cognitive scientist Stevan Harnad, who formulated the symbol grounding problem, explains it like this: “If I tried to look up the meaning of a word I did not understand in a (unilingual) dictionary of a language I did not already understand, I would just cycle endlessly from one meaningless definition to another. My search for meaning would be ungrounded. In contrast, the meanings of the words in my head — the ones I do understand — are ‘grounded’ (by a means that cognitive neuroscience will eventually reveal to us). And that grounding of the meanings of the words in my head mediates between the words on any external page I read (and understand) and the external objects to which those words refer.”
In the approach presented in the paper, several adjectives and verbs are defined in terms of only one ‘primitive’ item, thereby grounding those meanings in that one primitive. This does not solve the symbol grounding problem, but it reduces it.
SenticNet’s research is being applied in several projects, spanning from fundamental knowledge representation problems to applications of commonsense reasoning in contexts such as big social data analysis and human-computer interaction. For instance, a project in collaboration with Prof. Roy Welsch from MIT Sloan School of Management focuses on natural language based financial forecasting (NLFF). Markets are driven by sentiments, and understanding those sentiments from data can be used for predicting market movements. The team is also developing tools that allow patients to easily and efficiently measure their health-related quality of life, and is improving human-computer interaction (HCI) by developing dialogue systems. Another project, called PONdER (Public Opinion of Nuclear Energy), aims to
collect, aggregate, and analyse opinions towards nuclear energy in different
languages and across Singapore, Malaysia, Indonesia, Thailand, and Vietnam.
Understanding how the public perceives nuclear energy in the region enables
policymakers to make informed national policies and decisions pertaining to
nuclear energy, as well as shape communication strategies to inform the public
about nuclear energy.
Dr. Cambria said that personally he is more interested in the fundamental problems of
AI and sentiment analysis. For example, solving the symbol grounding
problem or building machines that can really understand language (IQ),
emotions (EQ), and culture (CQ).
“We still don’t have machines that really understand natural language. Siri does not understand natural language; Watson is an amazing answering machine but it does not understand language. At SenticNet, we want to go beyond rule-based and stats-based systems. What we are working on is not really NLP research anymore; it is natural language understanding.”
A multidisciplinary team of Massachusetts Institute of Technology (MIT) researchers led by Iddo Drori, a lecturer in the MIT Department of Electrical Engineering and Computer Science (EECS), has used a neural network model to solve university-level math problems at a human level in a matter of seconds.
“It will help students improve, and it will help teachers create new content, and it could help increase the level of difficulty in some courses. It also allows us to build a graph of questions and courses, which helps us understand the relationship between courses and their pre-requisites, not just by historically contemplating them, but based on data,” explained Iddo, who is also an adjunct associate professor at Columbia University’s Department of Computer Science.
Additionally, the model automatically explains solutions and rapidly generates new math problems for university-level courses. When the researchers presented these machine-generated questions to university students, the students were unable to distinguish whether the questions were created by a human or an algorithm.
This approach might be used to simplify the creation of course content, which would be particularly beneficial for big residential courses and massive open online courses (MOOCs) with thousands of students. The technology might also be used as an automated tutor that demonstrates to students how to solve basic math problems.
In the past, researchers employed neural networks, such as GPT-3, that were pretrained only on text: shown millions of examples of text to learn the patterns of natural language. This time, they employed a neural network that was pretrained on text and then “tuned” on code. This network, known as Codex, effectively gives the model an additional pre-training step that allows it to perform better. The model was exposed to millions of code examples from internet repositories. Because its training data contained millions of natural language words and millions of lines of code, the model learned the relationships between text and code.
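Conceptually, the pipeline resembles the following sketch: a code-tuned model writes a program that solves the problem, and executing that program produces the answer. The `generate_code` stub is a hypothetical stand-in for a call to a Codex-style model, not the MIT team’s actual code.

```python
# Program synthesis for math: generate a solver program, then run it.
def generate_code(problem: str) -> str:
    """Stand-in for a code-generation model; returns Python source."""
    # A real system would prompt the model with `problem` here.
    return ("import sympy as sp\n"
            "x = sp.symbols('x')\n"
            "answer = sp.integrate(x**2, x)")

namespace = {}
exec(generate_code("Integrate x**2 with respect to x"), namespace)
print(namespace["answer"])  # x**3/3
```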
The machine-generated questions were evaluated by showing them to university students. The researchers assigned students 10 problems from each undergraduate math course in random order; five questions were prepared by people and the remaining five were generated by a computer.
Students were unable to discern whether the machine-generated questions were produced by an algorithm or a human, and they scored the difficulty level and course-appropriateness of questions generated by humans and machines similarly.
Researchers emphasised that this effort is not meant to take the place of actual teachers. They note that although automation has reached 80 per cent accuracy, it will never reach 100 per cent: every time someone figures something out, someone else will pose a more challenging problem. Instead, this work opens the door for people to begin using machine learning to answer ever-harder questions, and the researchers are optimistic that it will have a significant impact on higher education.
Given the approach’s effectiveness, the team has expanded the work to handle math proofs, although there are several limitations they intend to address. Due to computational complexity, the model is currently unable to answer questions with a visual component or to resolve computationally intractable problems. Beyond these obstacles, the model is being scaled up to hundreds of courses. Those courses will produce more data, which the team can use to improve automation and offer insights into course design and curricula.
The Science and Technology Academic and Research-Based Openly Operated Kiosks or STARBOOKS of the Department of Science and Technology (DOST) have arrived on the island of San Miguel in Tabaco, Albay, providing easy access to S&T learning.
STARBOOKS is the country’s first digital science library, created by the Science and Technology Information Institute (DOST-STII). It is a stand-alone information source intended for those who have limited or no access to S&T information resources.
The project’s goal is to provide Science, Technology, and Innovation (ST&I) content to geographically isolated schools and communities across the country. STARBOOKS contains many digitised S&T resources in various formats, such as text, video, and audio, organised in specially designed “pods” with an easy-to-use interface.
STARBOOKS, as SMNHS teacher John Darnell Balbastro put it, is “one way of elevating the scientific and technological literacy” of their students. Its wide range of digitised S&T resources in various formats will “intensify the curiosity among our young learners,” and its offline access will address the lack of S&T learning resources in San Miguel.
Through this programme, DOST Region V, in collaboration with its dedicated Provincial S&T Centres and implementers, will continue to promote and empower S&T knowledge and education.
Meanwhile, Jamaica Pangasinan, Senior Science Research Specialist at the Space Mission Control and Operations Division (SMCOD) of the Philippine Space Agency (PhilSA), said that she was impressed by the level of environmental and social awareness of the incoming senior high school students, which was shown in their work at the “LIFT OFF: PhilSA Space Science Camp 2022.”
She said that the mission goals showed how eager the students were to solve the problems and threats facing the environment right now.
Fourteen science high schools from the 16 divisions of Metro Manila, chosen by the Department of Education (DepEd) to attend the camp, presented their space missions. Each team had five minutes to talk about their satellite’s mission, its most important technical features, and why it was important.
The students came up with a wide range of missions, from observing Earth to keeping an eye on space junk to sending probes to other planets.
Two missions stood out from the rest: the Monitoring Illegal Mining Activities in Remote Areas (MIMA) mission by Bianca Louise B. Cruz and Oscar A. Araja II of the City of Mandaluyong Science High School, and the Venus Seismic Activity Monitoring Satellite (V-SAMS) by Peter James Lyon and Ysabela Juliana Bernardo of the Caloocan City Science High School.
The students behind MIMA said that the goal of their satellite mission is to protect the environment and ensure that mining laws and regulations are better enforced in the country. Based on their plan, MIMA would be a Synthetic Aperture Radar (SAR) satellite that could see through clouds to spot changes in areas where mining could be happening, and it would take pictures with the help of optical imagers.
The goal of V-SAMS, on the other hand, would be to learn more about Venus, Earth’s near-twin, and especially about its seismic activity. To do this, V-SAMS would use infrared imaging to track the surface temperature of Venus’s volcanoes, predict which ones might erupt, and find other volcanoes that are still active on the planet.
It would also have an interferometric SAR (InSAR) to look for changes on Venus’s surface and signs of earthquakes. V-SAMS would also have an optical payload that would let it take high-resolution pictures.
The National Environment Agency (NEA) and the Singapore Land Authority (SLA) have signed a Memorandum of Understanding (MOU) to develop the use of Global Navigation Satellite System (GNSS) data from SLA’s Singapore Satellite Reference Network (SiReNT) to help NEA better monitor island-wide atmospheric moisture. The goal of the five-year partnership is to help Singapore with weather monitoring by giving it more data and making it easier to do exploratory studies for weather forecasting.
“The collaboration between NEA and SLA highlights our commitment to achieve synergies and tap on enablers across the public sector. This partnership provides a platform for NEA to utilise SLA’s expertise in GNSS data collection and processing, enabling NEA to explore non-traditional methods to enhance our weather monitoring and forecasting capabilities,” says Luke Goh, CEO, NEA.
On the other hand, Colin Low, CEO of SLA, said that SLA’s partnership with NEA is a part of its ongoing efforts to collaborate with parties from the public and commercial sectors to open up new applications for SiReNT and its other geospatial products.
The SLA believed that combining the knowledge of multiple parties might lead to more innovation and the discovery of workable solutions that could benefit Singapore and its industries.
Colin continued by saying that they are eager to collaborate with NEA to research the unique uses of SiReNT data for improved weather monitoring and research projects on weather forecasting and climate change. The many experiences that were gathered and shared during this partnership will serve as a foundation for upcoming developments in this area.
The production of accurate weather forecasts, climate monitoring, and timely warnings of dangerous weather events all depend on meteorological measurements. The Meteorological Service Singapore (MSS) routinely gathers a variety of observational data from ground-based and aircraft sensors, such as temperature, wind, and moisture.
To measure these weather components at various altitudes of the atmosphere, sensors linked to a weather balloon are routinely launched twice a day at MSS’ Upper Air Observatory (UAO). To enhance the sounding data from the weather balloon, MSS erected a GNSS reference station at UAO in 2019.
This station will provide continuous estimates of the moisture in an atmospheric column, a quantity known as integrated precipitable water vapour.
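As a rough sketch of the underlying idea, which is standard in GNSS meteorology: the “wet” part of the GNSS zenith signal delay is converted into an equivalent depth of water. The constants and inputs below are illustrative assumptions, not MSS’s actual processing.

```python
# Back-of-envelope conversion from GNSS zenith delays to precipitable water.
def precipitable_water_vapour(ztd_m: float, zhd_m: float,
                              pi_factor: float = 0.15) -> float:
    """Estimate integrated precipitable water vapour (in metres of water).

    ztd_m: zenith total delay estimated from GNSS carrier-phase data
    zhd_m: zenith hydrostatic (dry) delay, modelled from surface pressure
    pi_factor: dimensionless conversion factor (~0.15; it varies with the
               mean temperature of the atmospheric column)
    """
    zwd = ztd_m - zhd_m     # zenith wet delay, caused by water vapour
    return pi_factor * zwd  # metres of equivalent liquid water

# e.g. a 2.50 m total delay with a 2.28 m dry component -> ~33 mm of PWV
print(precipitable_water_vapour(2.50, 2.28) * 1000, "mm")
```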
In accordance with the MOU, SiReNT will incorporate MSS’s GNSS station, giving MSS access to continuous, almost real-time atmospheric moisture readings for the entire island. By supplying greater resolution and more frequent observation data, this non-conventional moisture data will complement MSS’s current observation network data and enable research into possible uses for weather forecasting.
The partnership will also help grow SLA’s SiReNT network, which currently consists of nine reference stations dispersed throughout Singapore. With NEA’s GNSS base receiver station at UAO integrated into SiReNT, and with two additional coastal SiReNT reference stations anticipated, the network will grow to 12 stations receiving more data. The SiReNT system can produce precise positioning data with an accuracy of up to 3 cm and correct positional inaccuracies in GNSS signals.
The SiReNT technology fosters innovation across a range of sectors, including autonomous driving, logistics and automation in the building industry, and monitoring of changes in Singapore’s land height and sea level.
The addition of stations by the end of 2022 will further increase the stability of the services and applications SiReNT now supports in several important industries. It can also be used in novel ways for scientific research on climate change.
Several domestic banks in Vietnam have 90% of their transactions conducted on digital platforms, surpassing the target of 70% set for 2025. Half of the country’s banking services are expected to be digitalised and 70% of transactions will be carried out online by 2025.
The Vietnamese Prime Minister, Pham Minh Chinh, recently stated that the banking sector has played a significant role in national digital transformation by deploying products and services for people and businesses. He urged the sector to further reform its management methods towards modernity and transparency, to diversify and improve the quality of its products and services, and to curb money laundering.
Addressing an event called “Digital Transformation Day of the Banking Sector”, Chinh explained that the sector should work to understand more about the demands of people, businesses, and credit institutions to devise suitable legal documents, facilitating the application of digital technologies in banking services.
He asked the State Bank of Vietnam (SBV) to continue its close coordination with ministries and agencies to formulate a decree on cashless payments and submit it to the government. Common infrastructure such as payment and credit information infrastructure should be promoted. He also suggested stronger connectivity between banks and credit organisations.
Chinh also requested the sector ensure cybersecurity and safety in digital transformation, given the rise of high-tech crime. The sector should raise public awareness about the benefits of digital transformation, enhance personnel training capabilities, and boost international cooperation in digital transformation.
The Prime Minister also attended an exhibition showcasing products and services that promote the digital transformation of the banking sector. Chinh had a working session with representatives from the SBV and commercial banks. He congratulated the sector on its effective operations amid a host of difficulties, especially those caused by the COVID-19 pandemic. He suggested the sector further cut interest rates to support businesses and actively engage in the state’s policies, particularly housing credit for workers and low-income earners. Participants attributed the developments of banks to supportive policies adopted by the state, the management of the government, and stability in the country.
Vietnam’s financial technology market could grow to US$ 18 billion by 2024. The country is a leader among ASEAN members in terms of the volume of financing for fintech, second only to Singapore. Over 93% of all venture investments in the country are directed at e-wallets and the e-money segment. The total number of fintech companies has grown to 97 since 2016, an 84.5% increase. However, the number of newly-launched start-ups each year decreased from 11 to 2.
As OpenGov Asia reported, the market features high competitiveness and a high entry bar. Transaction volume has seen a 152.8% growth since 2016, with 29.5 million new fintech users. As a result, every second Vietnamese citizen uses at least one fintech service. Demand for digital services (transactions, payments, and wallets) in the country is high. According to industry analysts, Vietnam’s fintech sector is young and promising. The market valuation has increased from US$ 0.7 billion to US$ 4.5 billion since 2016.
Michael G. Regino, President and CEO of SSS, announced that self-employed members, voluntary members, non-working spouses, and land-based Overseas Filipino Workers can pay their contributions through the online method of their choice. This was made possible in cooperation with various financial institutions and private-sector partners.
“We encourage our members and employers to pay their contributions using our online channels as, through these payment facilities, they no longer have to go to our branches. These can be accessed from the safety and convenience of their homes or offices,” says Michael.
Individual members may also use the websites and mobile apps of other SSS-accredited collecting partners, such as most banks in the public and commercial sectors of the nation. Both business and household employers likewise have access to online payment methods.
SSS is a publicly funded social insurance programme mandated by the Philippine government to provide coverage to all wage earners in the private, public, and unorganised sectors. The agency is tasked to set up, develop, promote, and perfect a sound, tax-free social security system that fits the needs of everyone in the Philippines: a system that encourages social justice through savings and protects members and their beneficiaries from the risks of disability, illness, maternity, old age, death, and other contingencies that could cause a loss of income or a financial burden.
OpenGov Asia earlier reported that digitalising SSS pension fund services remains one of the top priorities in the Philippines and that more online services will be added to its digital channels.
More than 30 member services and more than 20 employer services are currently easily accessible on the SSS website. Transactions for membership, contributions, loan granting and repayment, and benefit distributions are only a few examples of the services offered. Other SSS internet platforms also extend some of these features.
Further, almost all new online services are made available via the agency’s website, which serves as its main online platform. However, more work is being done to make the services on this portal accessible to smartphone users via the SSS Mobile App.
The agency is gradually making online transactions mandatory for its programmes. Members who do not have their own means of transacting online can use the e-Centres in its branches.
In the meantime, the Department of Education (DepEd) has worked with alumni of the Young Southeast Asian Leaders Initiative (YSEALI) exchange to improve education about climate change through an online programme called Climate Changemakers.
The National Educators Academy of the Philippines (NEAP) has recognised Climate Changemakers as the first climate change training course as part of the Department’s Professional Development Priorities.
Through online training and other digital education initiatives, the programme aims to better equip teachers to teach climate change competencies, integrate them into their lessons, and act on climate change in the country.
The ten-week online course, which used synchronous and asynchronous modalities to address common misconceptions about climate change, was successfully completed by 400 instructors. It also gave teachers a place to reflect on their own learning and to exchange challenges and effective methods.
The Young Southeast Asian Leaders Initiative Professional Fellows Program (YSEALI PFP) is a two-way exchange programme run by the U.S. Department of State. Its goal is to help young leaders from different countries in Asia and the United States to get to know each other better and strengthen economic relationships.
Data is information that has been organised in a way that makes it simple to move or process. It is a piece of information that has been converted into binary digital form for computers and modern methods of information transmission.
Connected data, on the other hand, is a method of displaying, using, and preserving relationships between data elements. Graph technology aids in uncovering links in data that conventional approaches are unable to uncover or analyse.
Different sectors have invested in big data technologies because of the promise of valuable business insights. As a result, various industries express a need for connected data, particularly when it comes to connecting people, such as employees or customers, to products, business processes, and Internet of Things (IoT) devices.
In an exclusive interview with Mohit Sagar, CEO and Editor-in-Chief of OpenGov Asia, Chandra Rangan, Chief Marketing Officer of Neo4j, shared his insights on how a connected data strategy is of paramount importance in building a smart nation.
Connected data enables businesses
A great example of the power of graph technology, and a very common use case for Neo4j, is its use in the financial sector to uncover fraud. Finding fraud is all about trying to make connections and understand relationships, Chandra elaborates. A graph-based system could detect if fraud is taking place in one location and determine if the same scenario has occurred in other locations.
“How does one make sense of this? Essentially, you are traversing a network of interconnected data using the relationships between that data. Then you begin to see patterns develop and these patterns provide you with answers so that you can conclude whether there is fraud.”
What is of great concern is that fraud is occurring with much greater frequency and with a higher success rate nowadays. The key to stopping it and mitigating its impact is time: instead of detecting a fraud that occurred hours or days ago, “what if the organisation could detect it almost immediately and in real-time as it occurs?” asks Chandra. “Graph offers this kind of response and is why it’s a great example of value!”
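As a hedged illustration of what such a relationship query might look like with the official Neo4j Python driver: the connection details, node labels, and relationship types below are assumptions for the sketch, not a real deployment’s schema.

```python
# Traverse shared devices/cards to flag accounts linked to known fraud.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687",
                              auth=("neo4j", "password"))

# Flag accounts that reach a known-fraudulent account within three hops.
QUERY = """
MATCH (a:Account)-[:USED_DEVICE|USED_CARD*1..3]-(f:Account {flagged: true})
WHERE a <> f
RETURN DISTINCT a.id AS suspicious_account
"""

with driver.session() as session:
    for record in session.run(QUERY):
        print(record["suspicious_account"])
driver.close()
```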
Supply chain management is another excellent example of RoI. One of Neo4j’s clients, which operates arguably the largest rail network in the United States and North America, created a digital twin of its entire rail network and all the goods moving across it. With graph technology across their network, they can now do all kinds of interesting optimisation much faster, leading to better, more efficient outcomes for their entire system.
The pandemic has taught the world about the value and fragility of supply chains. Systems across the globe are being reimagined as the world’s economies realise the need to become more digital and strategic. More supply sources, data, data sharing, customer demands, and increased complexity necessitate modern, purpose-built solutions.
Apart from all the new expectations and requirements for modern supply chains, systems need to become, and are becoming, more interconnected because of new technologies.
Maintaining consistent profitability is difficult for asset-heavy firms. Executives must oversee intricate worldwide supply chains, extensive asset inventories, and field operations that dispatch workers to dangerous or inaccessible places.
With this, organisations need a platform that connects their workforces and makes them more capable, productive and efficient. A platform that provides enterprises with real-time visibility and connectivity, while also assuring efficiency, safety, and compliance.
Modern technologies are required to improve interconnectivity, maximise the value of data, automate essential procedures, and optimise the organisation’s most vital workflows.
Modern data applications require a connected platform
“When we programme, when we create applications, we think in what we are calling a graph. This is the most intuitive approach that you can have,” says Chandra.
Any application development begins with understanding the types of questions people want to solve and then mapping it to a wide range of outcomes that they want to achieve. These are typically mapped in what is known as an entity relationship diagram.
Individuals increasingly rely on systems that work in ways that make sense to them and support them, which makes those systems ever more critical. And frequently, where these systems fall short, Neo4j makes sense of the complexity and simplifies what needs to be done, resulting in a significant acceleration.
As the world becomes more collaborative, integrated, and networked, nations must respond more quickly to changes in their business environment brought on by the digital era; otherwise, they risk falling behind or entering survival mode.
The proliferation of new technologies, platforms, and devices, as well as the evolving nature of work, are compelling businesses to recognise the significance of leveraging the most recent technology to achieve greater operational efficiencies and business agility.
A graph platform connects individuals to what they require, when and where they require it. It augments their existing process by facilitating the effective recording and management of personnel data. Neo4j Graph Data Science assists data scientists in finding connections in huge datasets to resolve important business issues and enhance predictions.
Businesses employ insights from graph data science to discover activities that point to fraud, find entities or people who are similar, enhance customer satisfaction through improved recommendations, and streamline supply chains. Its dedicated workspace combines data ingestion, analysis, and model management, allowing models to be improved without reconstructing workflows.
As a result, people are more engaged, productive, and efficient with connected data. Nations can bridge information and communication gaps between executive teams, field technicians, plant operators, warehouse operators and maintenance engineers. Increasing agility and productivity offers obvious commercial benefits.
In short, organisations can easily integrate their whole industrial workforce to increase operational excellence and decrease plant downtime, hence maximising revenues. This methodology is based on a collaborative platform approach.
Contextualising data increases its value
According to Chandra, data is a representation of the world in which people live, and people use data to represent this world. That world is becoming more and more connected: people no longer live in silos but remain associated with one another in society.
“If you think about data as the representation of the world that we live in, it is connected data and we can deal with all the complexities that we need to deal with when we try to make sense out of it,” explains Chandra.
Closer to home, connected data is crucial to Singapore’s development as a smart nation. “Connected data is at the centre of each of those conversations around developing the nation, when you think of Singapore as a connected ecosystem and when you think about citizens, services, logistics, contact tracing, and supply chain.”
Chandra believes that attributes alone do not capture the connections between data and people, which is why those connections matter. Once people understand those connections, it becomes much easier and much faster to derive the insights that are required.
Without connected data, organisations lack key information needed to gain a deeper understanding of their customers, build a complete network topology, deliver relevant recommendations in real-time, or gain the visibility needed to prevent fraud.
Thus, “knowing your customer is understanding connected data.” With the right tools, data may be a real-time, demand-driven asset that a financial institution can utilise to reinvent ineffective processes and procedures and change how it interacts with and comprehends its consumers.
“Me as a person – who I am, my name, where I live – these are all properties of who I am. But what really makes me me are the relationships I have built over time. And so, the notion that almost every problem has data that you can really make sense of with graphs is the larger ‘Aha’ moment,” Chandra concludes.
Legacy systems are pieces of hardware or software that are out of date but still in use. These systems frequently have problems and are incompatible with more modern ones. Although they can be used in the manner intended by their creators, they cannot be improved.
Such systems are nonetheless the backbone of many excellent organisations, which rely on software, apps, and IT solutions that are crucial to the general operation of the business but are obsolete and, in some cases, no longer supported by the original software vendor or developer.
While running legacy systems may not appear to be a big deal, they do present a unique set of challenges and potential issues that organisations would be remiss to ignore.
Thus, obsolete legacy systems are at best a nuisance and, at worst, can undermine an organisation’s entire IT security strategy, severely impeding productivity. Furthermore, the longer a company waits to modernise a legacy system, the more difficult the transition becomes.
Moreover, system modernisation is almost always a prerequisite for digital transformation: most firms will be unable to fully grasp the benefits of new technologies and solutions without it.
Due to the rapid development of technology, businesses must maintain compatibility with legacy systems that impede the implementation of contemporary technologies.
Against this backdrop, the Centre for Strategic Infocomm Technologies (CSIT) employs technology to facilitate and advance Singapore’s national security. Due to the highly classified nature of its work, its environment must be air-gapped. This means that development and deployment are conducted in networks that are not connected to the internet; consequently, all platforms had to be installed on-premises.
Despite not being able to utilise internet-connected services, CSIT has a Cloud Infrastructure and Services section that offers developers the necessary infrastructure to concentrate on software development.
Further, a monolithic system is a large application consisting of code built by several developers over many years. Frequently, the code is inadequately maintained, and some of these developers may have left the development team or the organisation, leaving knowledge gaps.
Due to a lack of expertise and the difficulty of modifying a system that is constantly in use in production, refactoring the code is comparable to replacing the tyres on a moving car.
Having a legacy system results in greater maintenance and support costs and decreased efficiency. Since the monolith system was still essential, CSIT opted for a more manageable strategy: decomposing it into smaller services using the microservices methodology.
Microservices, on the other hand, are software programmes that execute a business function as part of a larger system yet are separate services. These services are intended to be lightweight and straightforward to implement.
Microservices have the following advantages: each service is independently scalable; services have smaller code bases that make them easier to maintain and test; and problems are isolated to a single service, allowing for faster troubleshooting.
In addition, there are two main microservice architectures to consider when implementing the microservices approach, each with advantages and disadvantages that correspond to specific use cases. Orchestration, as the name suggests, necessitates an orchestrator actively controlling the work of each service, whereas choreography takes a less stringent approach, allowing each service to carry out its work independently; a minimal sketch of the two styles follows.
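The toy sketch below contrasts the two styles for an order workflow; the service names and events are illustrative assumptions, not CSIT’s actual system.

```python
# Orchestration vs choreography for a simple order workflow.

def charge(order):  print(f"charging order {order}")
def ship(order):    print(f"shipping order {order}")
def notify(order):  print(f"notifying customer for order {order}")

# --- Orchestration: a central coordinator invokes each service in turn. ---
def orchestrator(order):
    charge(order)   # the orchestrator owns the sequence and error handling
    ship(order)
    notify(order)

# --- Choreography: services subscribe to events; no central coordinator. ---
SUBSCRIBERS = {
    "order_placed": [lambda o: (charge(o), emit("payment_done", o))],
    "payment_done": [lambda o: (ship(o), emit("shipped", o))],
    "shipped":      [lambda o: notify(o)],
}

def emit(event, order):
    for handler in SUBSCRIBERS.get(event, []):
        handler(order)

orchestrator(1)          # explicit, centrally controlled flow
emit("order_placed", 2)  # event-driven flow emerges from subscriptions
```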
Microservices architecture may not be appropriate for all projects, and the choice of architecture should be based on the needs of the project; CSIT therefore advises teams to expect new problems to arise and to be prepared to adapt to them.