U.S. researchers combined large sets of real-world solar data and advanced machine learning to study the impacts of severe weather on solar farms, and sort out what factors affect energy generation. Their results were published earlier this month in the scientific journal Applied Energy. This research was supported by the Department of Energy’s (DOE) Solar Energy Technologies Office and was conducted in partnership with the National Renewable Energy Laboratory.
Hurricanes, blizzards, hailstorms and wildfires all pose risks to solar farms both directly in the form of costly damage and indirectly in the form of blocked sunlight and reduced electricity output. Two U.S. researchers scoured maintenance tickets from more than 800 solar farms in 24 states and combined that information with electricity generation data and weather records to assess the effects of severe weather on the facilities. By identifying the factors that contribute to low performance, they hope to increase the resiliency of solar farms to extreme weather.
According to the researchers, trying to understand how future climate conditions could impact national energy infrastructure is exactly what is needed if the renewable energy sector is to be resilient under a changing climate. For now, the team is focused on extreme weather events, but it eventually plans to extend the work to chronic exposure events like consistent extreme heat.
The research team first used natural-language processing, a type of machine learning used by smart assistants, to analyse six years of solar maintenance records for key weather-related words. The analysis method they used for this study has since been published and is freely available to other photovoltaic researchers and operators.
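The published method itself is more sophisticated, but the core idea can be sketched as a keyword search over ticket text. The ticket strings and term list below are illustrative, not taken from the study:

```python
from collections import Counter

# Hypothetical maintenance-ticket texts; the real study parsed six years of records.
tickets = [
    "Inverter offline after hurricane winds damaged racking",
    "Snow accumulation on panels, output reduced",
    "Lightning strike tripped breaker at combiner box",
    "Routine inspection, no issues found",
]

WEATHER_TERMS = {"hurricane", "snow", "storm", "lightning", "wind", "hail"}

def weather_mentions(texts):
    """Count weather-related keywords across ticket texts (simple token match)."""
    counts = Counter()
    for text in texts:
        for token in text.lower().replace(",", " ").split():
            for term in WEATHER_TERMS:
                if token.startswith(term):  # catches plurals like "winds"
                    counts[term] += 1
    return counts

print(weather_mentions(tickets))
```

Real ticket text is far messier, so a production system would add stemming, spelling tolerance and context filtering, but the counting idea is the same.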
While hailstorms tend to be very costly, they did not appear in solar farm maintenance records, likely because operators tend to document hail damage in insurance claims instead. The lead author found that hurricanes were mentioned in almost 15% of weather-related maintenance records, followed by other weather terms such as snow, storm, lightning and wind.
The lead author on the paper stated that some hurricanes damage racking — the structure that holds up the panels — due to high winds. The other major issue seen in the maintenance records, and in conversations with industry partners, is flooding blocking access to the site, which delays the process of turning the plant back on.
The researchers combined more than two years of real-world electricity production data from more than 100 solar farms in 16 states with historical weather data to assess the effects of severe weather on solar farms. They used statistics to find that snowstorms had the highest effect on electricity production, followed by hurricanes and a general group of other storms. Then they used a machine learning algorithm to uncover the hidden factors that contributed to low performance from these severe weather events.
The lead author said that statistics gives part of the picture, but machine learning was really helpful in clarifying the most important variables. The researchers ended up with a suite of variables, and machine learning was used to home in on the most important ones. The team found that, across the board, older solar farms were affected the most by severe weather. One possibility is that older solar farms had more wear and tear from longer exposure to the elements.
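As an illustration of ranking variables by importance, the sketch below scores hypothetical farm attributes by their absolute correlation with storm-related performance loss. The study's actual model and variables are not reproduced here; all names and data below are made up:

```python
import statistics

# Hypothetical per-farm records: (age_years, capacity_mw, snowfall_cm, performance_loss_pct)
farms = [
    (2, 5.0, 10, 4.0),
    (8, 5.0, 12, 11.0),
    (12, 3.0, 30, 18.0),
    (1, 10.0, 5, 2.0),
    (15, 2.0, 25, 20.0),
]

def correlation(xs, ys):
    """Pearson correlation coefficient, stdlib only."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

names = ["age_years", "capacity_mw", "snowfall_cm"]
losses = [f[3] for f in farms]
ranked = sorted(
    ((name, abs(correlation([f[i] for f in farms], losses))) for i, name in enumerate(names)),
    key=lambda t: -t[1],
)
for name, score in ranked:
    print(f"{name}: {score:.2f}")
```

With these invented numbers, farm age comes out on top, echoing the article's finding; a real analysis would use a proper ML model (which can capture non-linear and interaction effects that plain correlation misses) over hundreds of variables.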
The researchers are currently expanding this work to look at the effects of severe weather on the entire electrical grid, add in more production data, and answer even more questions to help the grid adapt to the changing climate and evolving technologies.
Confident that a child’s handwriting can be the basis of future success in learning, the Philippines has prioritised looking into the details while also developing a health database for it. Specifically, the Philippine Council for Health Research and Development (PCHRD) has allocated over PHP 3.2 million (about US$61,320) in funding for a project that aims to develop a tool to assess the handwriting of children.
The project called i-SULAT (Intelligent Stroke Utilisation, Learning, Assessment, and Testing) aims to create a system and unified handwriting tool that could help solve the problems of inter-tool scoring variations, inconsistency, incongruence, and assessment time, according to de la Peña. The project is headed by Edison Roxas of the Electronics Engineering Department of the University of Santo Tomas (UST) and will run from April 2022 to June 2024.
“There is much information that can be gathered from the simple pencil grasp, speed and legibility of handwriting; and even different stroke patterns. The proposed solution is to gather these available data from handwriting stroke patterns and (get the) distinct features using a specialised smart pen,” said Fortunato de la Peña, Secretary, Department of Science and Technology. He added that the proposed iSULAT system is capable of getting distinct features through different handwriting stroke patterns for continuous analysis, and evaluation of children’s handwriting.
Roxas’ project will define a reference normative database of Filipino school-aged children’s handwriting using traditional tools:
- Test of Visual-Motor Skills (TVMS)
- Minnesota Handwriting Assessment (MHA)
- Evaluation Tool of Children’s Handwriting (ETCH)
ETCH can be used to assess and evaluate handwriting with or without impairments. The project will also determine significant handwriting parameters for a quantitative assessment of children’s handwriting and develop a smartpen equipped with a software-based iSULAT system.
While handwriting may not warrant much attention from many parents, it is actually a process that reveals a lot about the child. There are two key aspects that educators look into:
- Product: How does the final written outcome look? Do the letters follow accepted guidelines on how to write a particular letter or number?
- Performance: How did the process go? Did the student hold the pen correctly? How long did it take to finish?
De la Peña noted that the children’s handwriting database from the project can be used as a reference for future studies and further analysis involving handwriting assessment of individuals having different medical, neurological, and psychological conditions such as stroke, Parkinson’s disease, Attention-Deficit Hyperactivity Disorder (ADHD), Dyslexia, and even early onset of depression. The DOST chief said the failure to attain handwriting competency during the school-age years results in negative effects on both academic success and self-esteem.
Indeed, digitisation can spell a lot of benefits for everyone. With its digital adoption, the Philippines is in a better position to raise education to greater levels in the country. With digital tools, assessing a seemingly simple task such as a student’s handwriting becomes a lot more manageable.
In the wake of the pandemic, people across the world moved comprehensively online: for work, education, entertainment, shopping and financial transactions. This has dramatically increased the surface area for attacks and created unprecedented opportunities for bad cyber actors.
The simple answer to the challenges faced by the financial services industry and other agencies is to make better use of all available data and advanced analytics to detect and prevent fraud.
Of course, this may be easier said than done. In fact, the plethora of tools, solutions and platforms available may make the task more complicated. The following provides some starting points.
Understand the Categories of Fraud-Detection Tools
The ‘market’ is flooded with potential solutions, all offering to address fraud. The utility of each toolset relies on the business context and available data. All need to be integrated with business processes and supported by policy settings.
Here is a short overview that can assist in mapping such tools in terms of their function(s).
Detect Known Knowns
A watch list that holds information about known criminal entities (people, organisations, addresses, events, etc) is a good, universal start-point.
The challenge is matching the known entity against a new transaction. Simple name-matching systems tend to be quickly overwhelmed with irrelevant matches (imagine searching for Mr Jones on Google – around 5,070,000,000 results!).
Data science can assist here by establishing a probabilistic matching system with variable threshold settings. An organisation can then tune the thresholds to its risk tolerance.
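A minimal sketch of such threshold-based probabilistic matching, using Python's standard-library string similarity (the watch-list names are invented; real systems use phonetic and entity-resolution techniques well beyond this):

```python
from difflib import SequenceMatcher

WATCH_LIST = ["Jonathan Smith", "Maria Garcia", "Wei Chen"]  # illustrative entries

def screen(name, threshold=0.85):
    """Return watch-list entries whose similarity to `name` meets the threshold.

    The threshold is the tunable knob: lower it to catch more variants
    (more false positives), raise it for higher precision.
    """
    name = name.lower()
    return [
        entry for entry in WATCH_LIST
        if SequenceMatcher(None, name, entry.lower()).ratio() >= threshold
    ]

print(screen("Jonathon Smith"))        # near-miss spelling still matches
print(screen("Jonathon Smith", 0.99))  # stricter threshold filters it out
```

The key design point is that the match is a score, not a yes/no, so policy (the threshold) can be adjusted without rebuilding the matcher.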
The next level of detecting suspicious entities is to see the connection between a current transaction and previously identified fraud. It could be as simple as ‘this person lives at the same address’ to ‘the phone number used has been used to commit fraud before’ and countless variations on this theme.
Some of the most effective network analytics systems used for fraud detection use non-obvious data. For example, the links may well be established by connecting IP addresses, MAC codes etc. Some of the best data may well reside in system logs!
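One simple way to surface such non-obvious links is to index every attribute value a transaction presents and flag transactions that share any value with a known-fraud case. The records below are hypothetical:

```python
from collections import defaultdict

# Hypothetical transactions: an id plus the attributes each one presented.
transactions = [
    {"id": "T1", "phone": "555-0101", "ip": "203.0.113.9"},
    {"id": "T2", "phone": "555-0202", "ip": "203.0.113.9"},   # shares an IP with T1
    {"id": "T3", "phone": "555-0303", "ip": "198.51.100.4"},
]
KNOWN_FRAUD = {"T1"}

def linked_to_fraud(transactions, known_fraud):
    """Flag transactions that share any attribute value with a known-fraud case."""
    index = defaultdict(set)  # (field, value) -> ids that used it
    for t in transactions:
        for field, value in t.items():
            if field != "id":
                index[(field, value)].add(t["id"])
    flagged = set()
    for ids in index.values():
        if ids & known_fraud:
            flagged |= ids - known_fraud
    return flagged

print(linked_to_fraud(transactions, KNOWN_FRAUD))  # T2 is linked via the shared IP
```

Production network analytics extends this to multi-hop links and weighted edges, but the same attribute index (often fed from system logs, as noted above) is the starting point.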
Predictive models examine available data against known patterns associated with fraud. At a basic level, the technique can utilise simple attribute matching (eg gender, age, nationality, etc) but more sophisticated tools can substantially increase the accuracy and consume hundreds of variables.
Predictive models are usually based on data analytics but it is also possible to build intelligence-based models when current data holdings do not support sufficient accuracy. The range of processes that can fall into this category is only limited by data availability, the skills of the data science team and the capacity to integrate such systems.
A rich source of data is frequently-ignored metadata. For example, systems that monitor mouse movements and keystrokes and identify potential deceit based on the way a client completes an online form are available now.
Trend monitoring is an often-overlooked tool that can provide early warning if there is a variation in normal trends. For example, a sudden, non-seasonal surge in refund claims from a particular region may indicate the emergence of fraudulent behaviour.
Tools that can automatically monitor trend data at global and more granular levels are readily available and generate alerts when tolerances are breached. While some tools visualise the trend variation on a dashboard, the best tools also generate alerts automatically and do not rely on someone spotting a problem manually or even loading a dashboard.
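A bare-bones version of such an automated monitor compares each new data point against a trailing baseline and raises an alert when a tolerance is breached. The data and thresholds below are illustrative:

```python
import statistics

def trend_alerts(series, window=4, sigmas=3.0):
    """Flag indices that deviate from the trailing-window baseline by > sigmas std-devs."""
    alerts = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mean = statistics.mean(baseline)
        sd = statistics.stdev(baseline)
        if sd and abs(series[i] - mean) > sigmas * sd:
            alerts.append(i)
    return alerts

# Weekly refund-claim counts for one region (made up): a sudden surge at index 6.
claims = [100, 104, 98, 102, 101, 99, 180, 103]
print(trend_alerts(claims))  # the surge week is flagged
```

Commercial tools layer seasonality adjustment and per-segment baselines on top, but the core mechanism is the same: a tolerance band around expected behaviour, with alerts generated automatically rather than waiting for someone to open a dashboard.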
An integrated, end-to-end, fraud detection and mitigation system may well consist of all or a number of these solutions and usually requires a level of integration with processing platforms. Fortunately, current solutions (eg containers) simplify the challenge.
Fraud Mitigation Framework
Most government agencies and financial institutions collect and maintain large volumes of data in support of their operations. Making optimal use of these data collections underpins the ability to identify and prevent fraud.
Data-driven decision-making relies on:
- being able to collect and see information (data);
- understanding the information and data;
- responding with appropriate counter-measures;
- monitoring/evaluating the effectiveness of these measures; and
- adjusting the system based on the continuous analysis.
Seeing Information/data – if it’s invisible, it is difficult to defeat
The ability to collect and store information and data for downstream processing within required timeframes is a fundamental building block to any fraud-mitigation process. Most organisations collect process data such as applications and claims. Most would also store the results of such processes (eg refused application/claim, approved application/claim).
An organisation that records incidents of identified malpractice in such applications and claims creates a powerful anti-fraud dataset.
Most data systems collect vast volumes of metadata, such as system logs. Much of this resource is generally stored but not effectively used to detect fraud. Tools exist that collect transaction metadata (eg mouse movements, keystrokes) and feed artificial intelligence models that can accurately predict potentially fraudulent intent.
Capturing contextual information for analysis provides additional attributes that enhance the analysis of identified fraud and may also provide valuable intelligence around existing but undetected fraud.
Understanding – ‘why’, ‘how’, ‘when’, ‘where’ and ‘what’ happened
Analysis of data and intelligence can reveal how the various fraudulent techniques work. Generally, this relies on a team of subject matter experts working with data science teams to develop deep insights.
Responding – see when suspicious things are happening and stop them
Once the fraudulent techniques are understood, a data science team can build predictive analytics models that detect the adverse patterns in the data and flag similar patterns in current (live) transactions. Such models can manage hundreds of variables in close to real-time and identify problematic behaviour with a known level of accuracy.
There are many ways of using this process to respond to potential malpractice. One simple example is:
- Applications/claims that are identified as low risk by risk systems can be expedited. This reduces the cost of processing and increases client satisfaction.
- Applications/claims that are identified as high-risk could be diverted to a process that enables more data collection and/or greater scrutiny.
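The triage above can be sketched as a simple policy layer over a model's risk score (the thresholds here are hypothetical policy settings, not recommendations):

```python
# Hypothetical risk-based triage: a model score in [0, 1] routes each claim.
LOW, HIGH = 0.2, 0.8  # tunable policy thresholds

def route(score):
    """Map a model risk score to a processing path."""
    if score < LOW:
        return "expedite"       # low risk: fast-track, lower cost, happier clients
    if score >= HIGH:
        return "manual_review"  # high risk: more data collection / greater scrutiny
    return "standard"

print([route(s) for s in (0.05, 0.5, 0.93)])  # → ['expedite', 'standard', 'manual_review']
```

Separating the thresholds from the model is what lets an organisation adjust its risk tolerance (the monitoring and adjusting steps below) without retraining anything.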
Monitoring – are countermeasures working?
Once a fraud detection system has been deployed, the world will have changed. Eventually, criminals will adjust their approaches and possibly develop new methodologies.
Automated monitoring of an analytics-based system is always desirable, as it can detect when expected accuracy or other performance targets are no longer being achieved. There are many reasons why this can occur; one of them is that criminals have developed new techniques and workarounds.
Monitoring the performance of the analytics-based system and, importantly, collecting and analysing intelligence can close much of this gap.
Adjusting – respond quickly to changed circumstances
The final part of the process closes the loop: lessons learnt through the monitoring processes are fed back into the next version of the system to refresh predictive models and other components.
Why this process?
This process leverages data and intelligence, supports continuous improvement and a capacity to respond to changed circumstances. Importantly, the process maximises the capacity to apply the most appropriate measures to mitigate fraud. In many cases, a response is based only on the detection of a problem. The analysis of the problem provides insights into the method of operation in this case. Once this is understood, an analysis of current data may indicate if this is an isolated case or if more such cases have remained hidden.
Moreover, it ensures that any countermeasures target the real problem. If the problem is potentially widespread, then the effort to build a data-driven model to detect other such cases and a predictive model to identify similar cases in future transactions is warranted. Automated monitoring and feedback loops provide a level of assurance that our solution is still doing what is expected.
With the aid of technology, law enforcement officials from all over the world came together to join the fight against cybercrime. The interaction on combating digital crime was organised by Taiwan, Australia, Japan, the U.S. and Slovakia under the Global Cooperation and Training Framework (GCTF) from Taipei City.
Established in 2015, the GCTF provides a platform through which Taiwan can contribute to global problem solving and share its expertise with partners across the region. The initiative has taken up various topics over the years, from the rights of Persons with Disabilities (PWDs) to energy efficiency.
This time around, the GCTF dealt with a pressing issue affecting economies across the world: cybercrime. Jointly organised by the Ministry of Foreign Affairs (MOFA), the Ministry of Justice Investigation Bureau (MJIB), the American Institute in Taiwan, the Australian Office Taipei, the Japan-Taiwan Exchange Association and the Slovak Economic and Cultural Office in Taipei (SECOT), the workshop focused on cyber-enabled and crypto threats.
Nearly 300 law enforcement officials from 32 countries took part in the workshop. The talks focused on best practices to counter cyber financial crimes, sharing experience and building capacity. In keeping with the virtual format, MOJ Minister Tsai Ching-hsiang, MJIB Director-General Wang Chun-li and SECOT Representative Martin Podstavek all delivered pre-recorded opening speeches.
#Taiwan, #US, #Japan, #Australia & #Slovakia are cooperating on combating digital crime. The forces for good staged a #GCTF aimed at beefing up the response to cyber-enabled & #Crypto threats. Our thanks to the near 300 officials from 32 countries for the superb event!
– Ministry of Foreign Affairs official tweet on the GCTF event
Over the years, online-related crimes have grown. The internet, which allows people to broadcast from just about anywhere, has been taken advantage of by unscrupulous individuals with malicious intent to harm others or divest them of their finances. Some experts cast a gloom-and-doom scenario caused by digital malfeasance.
The good news is emerging technologies in ICT can be utilised in the fight against cybercrime. Some of the advantages that technology can bring to organisations all over the planet are:
- With Artificial Intelligence (AI), systems can be developed to search for security flaws and deploy solutions in real-time.
- AI can help cybersecurity analysts to detect and analyse high-risk incidents, and to investigate threats.
- Machine learning and artificial intelligence can segregate networks to isolate assets or redirect attackers away from vulnerabilities or valuable data.
Taiwan is definitely taking the lead when it comes to digital transformation. It is leaving no stone unturned, a phrase its leader used to show how dedicated the country is to achieving net-zero emissions by 2050. Truly, the island nation is approaching digitisation in big strides.
Just recently, its Southern push has reaffirmed Taiwan’s desire for digitisation. It’s all looking good. Not only is it pushing for next-gen computing in quantum but also it’s advocating green technology.
With the growing demand for digital public services during the COVID-19 pandemic, customers’ expectations have escalated. Businesses must rethink and devise an effective strategy to provide their products and services through customer-centric digital offerings.
OpenGov Asia had the opportunity to speak with Monica Hovsepian, Head of Financial Services Industry at OpenText, to gain her insights on how businesses live up to increasing customers’ expectations and deliver customer-centric services.
In her role, Monica is responsible for the financial services industry strategy and marketing globally at OpenText across all business units. Monica has over 25 years of financial industry experience. She is a trusted subject matter expert in the Financial Services Industry, having worked with numerous large and international banks in North America, Europe, and Asia.
As an Information Management company, OpenText provides software and services that empower digital businesses of all sizes to become more intelligent, secure and connected.
Higher Expectations Require a New Approach
Undoubtedly, culture and paradigms have shifted; people do things dramatically differently and have higher expectations. Customers want services to be predictive instead of having to chase them. The question is: what are the industry’s requirements, and how will businesses meet these higher customer expectations?
Monica starts by looking back at the moment when the pandemic erupted, and the governments around the world were announcing lockdowns or heavy movement restrictions. The old way of living life had changed forever at that instant and will likely never go fully back to what it was. People had to engage almost entirely digitally. Homes, kitchens, dining and living rooms became workspaces and schools, and a screen doubled as a whiteboard, office and grocery store.
In the past, many people did not use applications such as video calls, but these have become mainstream now. People have also started engaging with certain brands and now expect seamless transactions every time. The constant digital engagement has created new vocabulary that did not exist or was uncommon before. This is one manifestation of how digitalisation has created a massive culture change.
In this global realignment, Monica agrees that Singapore is an emerging market that is more digitally advanced than most countries, including many places in North America.
Monica emphasises that there is a digital gap in certain industries – there are the digital doers and the digital fakers. Some of the digital fakers dissipate and are not around anymore. However, the ones that are still in business are those who realise they need to get their business in gear. There is no more room for those digital fakers as today’s customers are demanding fast, intuitive, seamless and personalised high-quality services.
Customers learn what is possible from other applications and engagements, and compare these ideals with services that do not live up to them. They are willing to pay more to companies that provide faster service but, as citizens, they are unwilling to pay for faster government services.
Customer Experience is Priority
Top companies can deliver seamless services because they have the money and resources. However, the public sector and other mid-size organisations have finite resources with finite team sizes, yet they have to innovate and become more creative. There is also a massive gap between North America and South-East Asia. Given the new challenges, how do organisations build their digital strategy?
Monica firmly believes the reason why top companies are doing great is that they put customers in the centre.
They build their strategy with that ideology and constantly tweak their plans. They consistently monitor and make changes to their front end. The changes might be minor, but they are continual. They make the user interface and engagement easier and better and keep testing it. In the final assessment, it is not only about the product but also about engagement.
In other industries, by contrast, it is still all about the products; if the methodology changed, things would move forward better. In the past, financial services was also about the products. Nowadays, successful companies are focusing on the customers and putting them at the centre. However, while technology provides organisations with agility, they must ensure a customer-centric mind shift.
How OpenText Helps Customers to Create Seamless Experience
Creating a seamless experience comes with the challenge of integrating existing platforms as well as legacy systems and legacy tech. There is no one-size-fits-all solution, Monica warns, saying that no two customers are going to do things the same way when it comes to a transformational project. Every customer is on a different journey path and has different requirements.
OpenText begins by understanding where customers are on their path, what their key objectives are and what challenges they face. OpenText helps customers run a discovery process through which they determine and execute the transformation and modernisation.
Customers create zettabytes of information, and organisations want to be able to consume this information to serve customers properly. As an information management company, OpenText has various methods to integrate existing platforms. It has several ways of consuming the information, ensuring that none is ever lost and archiving what needs to be retained.
Monica offers the analogy of two people with the same characteristics and traits who are actually at opposite ends of the spectrum as individuals. Organisations need far more data and information to cater to people with similar characteristics but opposite personalities, likes and traits. Such people are out there by the millions, so organisations need to understand customers better and differentiate services accordingly.
When it comes to talent, Monica explains that there is a war for professionals. This is because, from a young age, people do not see governments or financial services as offering interesting projects to be involved in.
Currently, a lot of government agencies and financial services are investing heavily in fascinating projects from Artificial Intelligence (AI) to cybersecurity. Moreover, increased digitalisation has given birth to a rise in cyberattacks across all industries. In the current situation, there is a paucity of talent in both aspects – creativity and security.
One of OpenText’s success stories is MSIG Asia – an international insurance company that has built omnichannel self-service capabilities and grown its business with OpenText’s information management platform.
Like other insurance providers, MSIG Asia faced shifting challenges, often steeped in massive amounts of data and hyper-digital expectations from customers. The pandemic added complexity as a more distributed workforce strained to keep pace with highly regulated and collaborative processes as well as a growing necessity for digitisation and self-service, internally and externally.
Pandemic conditions aside, insurance professionals need accessible insights and customers need convenient tools. To this end, digital insurance policy documents provide information via eco-friendly alternatives to paper documents that simultaneously support MSIG Asia’s biodiversity goals. The accessible information underpins increasing demand for self-service opportunities and a meaningful online presence, both objectives of the insurance provider’s digital transformation strategy.
To support its efforts on omnichannel customer acquisition and retention, MSIG Asia implemented an information management solution from OpenText. This included OpenText Extended ECM, OpenText AppWorks and OpenText Exstream. Together these solutions help form a ‘single source of truth’ and communication platform for MSIG.
“This customer-centric integration will help maximise operational efficiencies across different lines of businesses and locations, which in turn helps lower expenses and strengthens the business infrastructure. Quite essentially, this has empowered our business operations with a single system that can better enable growth and support the innovations and adaptability required to meet the fast-changing business demands for the long haul,” said Joseph Yew, CIO, MSIG Asia.
Another OpenText customer is the Ministry of Finance Singapore. Vital, the Singapore Government’s centre for shared services, appointed OpenText – through a public tender process – to digitise its back-office corporate services, comprising over two million records per year for over 100 Government agencies.
In its effort to aggregate common administrative services and benefit from economies of scale, the Electronic Document and Knowledge Management System (eDKMS) will enable Vital to integrate daily HR, payroll and finance workflows for higher productivity, as well as foster greater knowledge management and a social collaboration platform within Vital. The system reduces paperwork and manages the flow of information from capture through to archiving and disposal.
As a part of Singapore’s drive to build a Smart Nation and a digital government, Vital is taking its steps to reduce paper-intensive workflows in back-office operations, improve records and case management, and enhance business information analysis and decision-making.
OpenText Content Suite enabled Vital to remain compliant with government-mandated document management policies, improved records and case management accountability, and enhanced business information analysis and decision-making. Using OpenText, the eDKMS provided Vital with the tools to reduce paper-intensive workflows and deliver timely information sharing for improved collaboration.
The Future of Customer-Centric Operations
Monica states that human beings are creatures of habit. People have gotten used to working from home and have shown their employers that they can be trusted. In fact, people ended up working much longer hours; as a result, employers benefited and revenue actually went up.
From her own experience, Monica has observed that the trust question about working from home has gone away. On the other hand, organisations need to ensure employees want to stay otherwise they will leave. In fact, the Great Resignation is happening right now. As the pandemic has been a huge stress for people, employees may well leave if they are mandated to work fully at the office again.
Employees are looking for a better employee experience and deeper engagement. Organisations cannot have a good customer experience if they do not deliver a good employee experience as both go hand in hand. Organisations need to deliver the same kind of customer experiences to their employees so they can deliver the organisations’ vision and mission. Organisations must provide the tools for employees to be able to serve customers and give employees the same digital experience as well.
“We should be digitising the humans and humanising the digital. It has to go hand in hand now,” Monica emphasises.
In Monica’s opinion, much investment will go towards platforms in the future. Organisations are looking for a single platform. Having realised the issues they suffered in maintaining all the investments they had made in niche technology, they are now considering moving to single platforms. Simplifying the architecture onto a single platform is the way forward.
Cloud technology is being embraced wholeheartedly as well. A lot of investment went into cloud technology when the pandemic first started, and it continues.
However, Monica reiterates the importance of putting customers at the centre, as they will quickly leave any business that does not live up to their expectations. Customers across generations are looking for organisations that can cater to their very specific wants, needs and beliefs.
Why Partner with OpenText
OpenText is an Information Company that enables organisations to gain insight through market-leading information management solutions, on-premises or in the cloud. OpenText believes that information and knowledge make business and people better. Its mission is to deliver compelling innovation that provides customers with a competitive advantage. Its strategy is to deliver information management in the cloud at scale to power digital businesses of all sizes.
Manage information end-to-end:
- Master Modern Work: Master collaboration while reducing security and governance risk by enriching business processes with content, insights and automation.
- Digitise the Supply Chain: Integrate systems, people and things, enabling businesses to seamlessly exchange information with their trading partners to accelerate productivity.
- Power Modern Experiences: Power customer interactions with engaging experiences across the entire customer journey, from acquisition to retention.
- Be Cyber Resilient: Protect and secure data and mitigate risk with best-in-class technologies and personnel to grow securely.
- Build the API Economy: Build, extend, and customise applications faster and smarter using a collection of Information Management API services in the cloud.
Monica feels that the beauty of OpenText is that it provides everything for the entire customer lifecycle. OpenText can help customers with all they need, including marketing, services, customer onboarding, cybersecurity, risk and compliance. It is one organisation, offering integrated products and services, in the cloud or on-premises, and committed to its customers’ success. OpenText is the Information Management company, and the future is ensuring a single-pane view of the customer’s and the business’s information.
OpenText wants to partner with customers so they can grow together. She is firmly convinced that customers can truly flourish with OpenText.
In the new normal, people are no longer merely transacting but are looking for a better, smoother, more consistent experience. With the vast array of options available, retaining a client or keeping a citizen happy increasingly depends on the experience: the quality of their overall interaction.
Strong data governance practices and technologies enable organisations to collect, use, share and protect data conscientiously and intentionally, limiting potential risks and bolstering trust. The Adobe Experience Platform ensures that data, content and insights are accessible to experience-delivery systems to act upon in real-time, producing captivating interactions at all times.
The platform’s governance capabilities provide a structure that controls and regulates data as organisations endeavour to produce ideal experiences through its open and extensible platform.
Read more if you are looking to have a powerful, proactive engagement with your clients and gain and maintain their trust.
The COVID-19 pandemic has brought unprecedented changes and challenges for financial services. More than ever, people rely on digital payments and new payment technologies for transactions. Keeping pace with this trend, fraudsters are continuously finding creative ways to perpetrate their scams. Financial services must adopt more effective fraud detection and prevention strategies.
OpenGov Asia had the opportunity to speak with Ian Holmes, Global Lead for Enterprise Fraud Solutions, Director, SAS to gain his insights on fraud detection and prevention strategies in financial services.
Having joined SAS in 2011, Ian’s global role is to provide fraud expertise to drive product enhancement, pre-sales and business implementation of the banking fraud solution worldwide.
Ian’s fraud expertise is pivotal in maintaining SAS’ recognition and reputation among customers and analysts in the industry, supporting marketing initiatives and strategic collaboration with key third-party vendors. His career developed from the ground up to embrace a customer-focussed approach alongside analytics and strategy.
Why is There a Rise in Scams?
As people stay and work at home, fraudsters have a prime opportunity to infiltrate their lives remotely. Free time and the promise of additional income are a great way to gain the attention of a new victim. Scams encompass perhaps the most diverse range of modus operandi, but in some way they all involve a deceived individual.
From a payments fraud perspective, such scams fall within the area of banking fraud. Most typically in today’s world, this is achieved by the fraudster through the remote activities that digital devices offer. The spike in malware and phishing initiated through email, phone and even SMS also exposes us to the risks of fraud.
As recently as 2017, UK Finance, the industry body which reports on all banking fraud types, introduced a new measure covering Authorised Push Payments. This category specifically covers cases where the genuine account holder is involved in the payment, and both its volume and value are increasing dramatically.
Acknowledging that genuine customers are involved, banks are having to invest more in preventing this type of fraud, if only because of the reputational risk. In the UK, the Payment Systems Regulator introduced a voluntary code, the Contingent Reimbursement Model, which has now been in effect for a year and makes Payment Service Providers (PSPs) liable under certain circumstances. This has increased the cost of doing business for a bank – meaning a capable fraud detection solution is going to be key.
What Banks Need to Do to Combat Fraud
Ian explains that, whatever the reasons, the fraudsters are certainly on the winning team and losses are escalating. Organisations need to manage risks during the movement from traditional payment methods to the new digital options. The additional channel data that today’s devices offer, along with advanced analytics, can be very beneficial in overcoming digital fraud and financial crime.
Financial institutions (FIs) need to understand all payment entry points. Protecting these entry points from digital fraud can be quite complicated and tedious. The critical first steps are to start processing all data streams in real-time and to combine identity and transaction monitoring to not only identify fraud as it occurs but to prevent it even before it takes place.
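Combining identity and transaction monitoring into a single real-time score might look like the following minimal sketch. The signals, weights and thresholds are illustrative assumptions for this article, not SAS’s actual model:

```python
from dataclasses import dataclass

@dataclass
class Event:
    customer_id: str
    amount: float
    device_id: str
    new_payee: bool

# Illustrative reference data: devices seen before and a rolling
# average transaction amount per customer (all values invented).
KNOWN_DEVICES = {"cust-1": {"dev-a"}}
AVG_AMOUNT = {"cust-1": 120.0}

def fraud_score(ev: Event) -> int:
    """Blend identity monitoring (device) with transaction monitoring
    (amount, beneficiary) into one real-time risk score, 0-100."""
    score = 0
    if ev.device_id not in KNOWN_DEVICES.get(ev.customer_id, set()):
        score += 40   # unfamiliar device: identity risk
    if ev.new_payee:
        score += 30   # first payment to this beneficiary
    avg = AVG_AMOUNT.get(ev.customer_id, 0.0)
    if avg and ev.amount > 5 * avg:
        score += 30   # amount far outside the customer's norm
    return score      # above a threshold: step-up authentication or block

print(fraud_score(Event("cust-1", 900.0, "dev-x", True)))  # 100
```

Scoring every event as it arrives, rather than in batch, is what allows intervention before the payment completes.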
The financial services industry as a whole needs to boost the utilisation of available Artificial Intelligence and machine learning technologies. Since the start of the pandemic, financial institutions have tirelessly innovated to meet customers’ needs for flexibility and immediacy. Now, with SAS’ proven technology and expertise, they must redefine how they protect themselves and their customers from the associated risks.
Ian believes that in the digital environment we live in today, the use of channel data will be pivotal for real-time analytics and automated activities within businesses. The level of needs may vary from industry to industry but they will be important to all businesses to counter fraud and make insightful strategic decisions.
Customers now expect digital services with seamless interaction. Financial services need to be meticulous in their fraud prevention while simultaneously reducing false positives to improve customer experience. The range of scams is extensive; examples include romance and investment scams. The typical processes which protect against other fraud types are undermined by the fact that, in a scam, authentication and verification of the payment will be validated by the deceived customer when prompted. This is very different from the immediate feedback that comes when the customer has no awareness of the activity. Several countermeasures can help:
- Anomaly detection and beneficiary profiling are key real-time capabilities. For example, advanced analytical scoring and link analysis of the beneficiary can be key indicators – mule-account propensity models have been used, and there is no reason why this principle cannot be expanded to identify accounts used by fraudsters.
- Industry initiatives such as beneficiary validation services can help. Currently offered in the UK, these allow account details to be checked and validated, giving confidence from a consortium perspective.
- When confirming suspicious payments with customers, alternative validation is required.
- Transaction-level authentication will help to exclude specific third-party fraud scenarios – this is especially true for commercial payments, where multiple levels of authorisation should be included.
Improving Customer Experience
For an overall better and more secure customer experience, digital fraud management requires an approach with a faster response to new threats to reduce false positives. Using this approach, businesses would be making faster, better informed risk-based decisions across the entire organisation. Moreover, an end-to-end fraud detection and prevention solution supports multiple channels and lines of business, enabling enterprise-wide monitoring from a single platform.
Such a solution simplifies data integration and enables FIs to combine all internal, external and third-party data to create a better predictive model tuned to the organisation’s needs. Bringing together this data on a single technology platform gives the flexibility to scale up or out as the business changes, and respond faster to new threats as they arise.
Data analytics and machine learning solutions can enable the monitoring of payments as well as non-monetary transactions and also events, thus enabling businesses to identify and respond to unwanted and suspicious behaviour in real-time. Embedded machine learning methods detect and adapt to changing behaviour patterns, resulting in more effective, robust models.
Key technology components let banks easily spot anomalies for each customer. In-memory processing delivers high-throughput, low-latency response times (even in high-volume environments) – enabling FIs to score 100% of transactions in real-time. Data without analytics is intelligence not realised and monetised, which means businesses are unable to operate at their optimum capacity.
Thus, organisations must understand the value and significance of data analytics in this fast-paced digital world. Organisations that want to survive in today’s competitive market need to build the right infrastructure and adopt the right practices across their infrastructure.
Key Trends in Financial Services
Payments fraud has historically been a third-party fraud issue. Although the use of machine learning, rules and operational processes is still valid, scams require a different view of payment account activity than is typical within a bank’s processes. Because of this, a complete review is often required of the risk management framework in place at banks and how it is applied to fraud detection.
Without technological and operational improvements, the global rise of digital fraud will surpass the losses associated with counterfeiting magnetic stripe payment cards, and SAS suggests this digital shift is also fuelling a multibillion-dollar fraud surge worldwide.
These are key trends that Ian is seeing in the market:
- Digital payments present an escalating global risk.
  - Though prevalent payment technologies vary by region, fraud trends have significant commonalities across geographies.
  - This indicates that criminals coordinate and share information more openly than FIs do, giving them a significant advantage in thwarting fraud controls.
  - Cross-border fraud is increasingly common.
- Digital fraud is increasing in frequency and sophistication.
  - Fraudsters’ and criminal networks’ arsenal of tricks is becoming as advanced as the technologies used to detect their activities.
  - Social engineering, phishing and identity schemes, and the breadth of digital payment methods are shifting the odds in the bad guys’ favour.
- Layered technology and analytic capabilities are needed to identify overlapping threats in real-time.
  - The complexity of criminals’ attack vectors demands a layered approach to preventing and detecting fraud, while also providing a means to orchestrate strategies and investigation activities.
  - Automated actions and predictive case management powered by AI and machine learning can help reduce reliance on human resources.
- Data is critical.
  - Using data for real-time analytics and automated actions will be crucial to thriving in this new digital normal.
Big Data Analytics in Financial Industry
Organisations should be aware that new payment mechanisms are especially targeted, as risk mitigation controls are often ineffective at launch. Layered technology and analytic capabilities are needed to identify these overlapping threats in real-time.
Capabilities will vary based on technological maturity, but organisations at all stages have a common need for as much real-time data as possible to make effective decisions. Importantly, deploying cloud infrastructure for fraud management systems boosts data ingestion capabilities.
How can Banks Manage Fraud in CryptoCurrency?
Crypto is currently ungoverned and uncontrolled, making it ripe for abuse, and fraud and financial crime will always gravitate towards such exposure. In this region, Ian has seen the proceeds of fraud and money laundering converted into cryptocurrencies, offering an alternative to the placement, layering and integration that traditionally underpin this criminal activity.
Crypto, and the underlying distributed ledger technology, undoubtedly brings a great deal of additional security in principle, but it is always the endpoints and the consumer that are the weakest link. From a fraud perspective, Ian does not think fraudsters currently need to be especially innovative; they are focussed on committing Account Takeover (ATO) and hacking to steal.
Ian believes that SAS is creating the future of our interactions with data, analytics and AI – delivering an innovative system for a more intelligent, responsible and safer world.
To transform a world of data into a world of intelligence, organisations need to empower and inspire everyone with the most trusted analytics. SAS does this by bringing capability to curiosity, so people and organisations can drive progress – making better decisions and improving lives.
- Better Together – SAS develops strategic partnerships and drives open integration and collaboration in communities and industry. By developing leading innovations together, SAS creates positive change.
- For Everyone – SAS brings together creators and consumers and gives access to data and insights through analytics that adapt to people. SAS does this by simplifying model development, providing end-to-end analytics capabilities and driving responsible and transparent AI.
- Everywhere – SAS’s cloud-native platform drives digital transformation, unleashing value from data analytics wherever it resides, from the cloud, on-premise or coming off the edge in sensors and devices.
How SAS Determines the Best Strategy for Effective Fraud Detection
- Security by design
This is not a bolt-on option to a financial instrument; it is the core battleground for consumer adoption and recommendation – 83% of millennials would change bank accounts for a better customer experience. Security is not about locking the doors, nor about raising the bar so high that you inflict undue pressure on your good clients and operational staff. It is about using all the intelligence available to correctly identify risk and seamlessly move to the correct challenge response – visiting the branch with physical ID is not an option in COVID-19 times, and even before the pandemic it was likely to cost you a customer.
While SAS controls the use of personal data, FIs should all be very aware of the provisions for fraud detection and maximise their use of analytics on relevant data. SAS can ingest and process both internal and external data in real-time, all of which can add to understanding the traits of normality or otherwise. Establish clear fraud definitions and continuously question the value of data and the quality of AI decisions.
You need to orchestrate and adapt using external providers that add value to the ability to trust an interaction. An analytically led examination, weighing both positive and negative signals, allows for better decisions for your business and your customers; the ability to leverage various data sources, change the waterfall of events and enrich specific decision points is the right way to make those decisions.
- Consortium Intelligence
Use the intelligence from wider networks; share and syndicate fraud reporting. Experience is a competitive issue, but fraud demands concerted, real-time intelligence sharing. Use systems that enable sharing with minimal personal data, facilitating a common language for the exchange of fraud data.
To handle digital fraud, businesses need more than just standard analytics. They need to implement adaptive techniques including AI and machine learning – both supervised and unsupervised – as well as network analysis and text analysis. Together, these technologies form a powerful force for improving both the accuracy and efficiency of fraud detection. It only makes sense to bring fraud, anti-money laundering (AML) and cyber functions together.
Here are key considerations of a strategy for an effective defence using analytics:
- Converge fraud and AML programmes. Centralise insights from multiple sources, including cyber-event data, for more complete customer risk assessments in a broader context.
- Establish consistent business processes. Intuitive workflow and case management support more efficient investigations, faster resolutions, fewer false positives and higher productivity.
- Reduce false positives. Advanced analytics and machine learning can reduce such anomalies so investigative analyses can focus on the cases that pose the most risk to the organisation.
- Intelligently prioritise alerts for triage, investigation and disposition. Advanced analytics can let defenders quickly see areas of interest and where to focus first.
- Leverage interactive visualisations. Investigations can be more targeted and conducted more efficiently through the use of interactive graphics and tables. Import, search, filter and visualise the results in different ways to reveal patterns, people and events hidden in complex data.
- Generate reports easily. Findings can be documented with screen captures, analyst notes, images and advanced reporting that presents data to stakeholders and decision-makers in a much more impactful way.
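The triage and false-positive points above can be illustrated with a rough sketch: suppress low-score alerts so they never reach an analyst, then rank the remainder by expected loss. The alert fields, scores and threshold are invented for the example:

```python
# Hypothetical alerts with a model risk score and an exposure amount.
alerts = [
    {"id": "A1", "score": 0.35, "amount": 10_000},
    {"id": "A2", "score": 0.90, "amount": 2_000},
    {"id": "A3", "score": 0.10, "amount": 500},
]

def triage(alerts, suppress_below=0.2):
    """Suppress low-score alerts (fewer false positives reach analysts)
    and rank the rest by expected loss: score * amount."""
    kept = [a for a in alerts if a["score"] >= suppress_below]
    return sorted(kept, key=lambda a: a["score"] * a["amount"], reverse=True)

print([a["id"] for a in triage(alerts)])  # ['A1', 'A2']
```

Ranking by expected loss rather than raw score ensures investigators focus first on the cases that pose the most financial risk to the organisation.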
The Future of Fraud Prevention and Management
The pandemic has opened a Pandora’s box of global fraud. Based on survey responses from nearly 900 members of the Association of Certified Fraud Examiners (ACFE) worldwide, the 2022 Anti-Fraud Technology Benchmarking Report, produced by the ACFE and SAS, illuminates how organisations across sectors are using technology to fight fraud.
More than 40% of respondents reported accelerating their use of data analytics significantly (14%) or slightly (29%) amid the pandemic. The majority (60%) expect their anti-fraud tech budgets to grow over the next two years. Advanced analytics topped the investment list, particularly artificial intelligence (AI) and machine learning (cited by 26% of respondents) followed by predictive analytics/modelling (22%).
Analytics is an indispensable fraud detection tool. When asked about their use of analytics, nearly all survey participants indicated their organisation’s use of data analytics was beneficial in helping them:
- Boost the volume of transactions reviewed or suspected fraud cases identified (99%);
- Improve the timeliness of their anomaly detection (98%);
- Increase efficiency by automating time-consuming tasks (98%); and
- Improve overall accuracy by reducing false-positive rates (97%).
Data-sharing consortiums are gaining momentum. Internal structured data sources remain the crux of most organisations’ anti-fraud analytics initiatives (cited by 80% of respondents), but many are also tapping a variety of external data sources, including:
- Public records (41%),
- Law enforcement or government watch lists (31%),
- Social media (29%),
- Other third-party data (25%), and
- Data from connected devices (25%).
Organisations are using a variety of emerging technologies to fight fraud. The report highlights the growing use of technologies like physical and behavioural biometrics, computer vision analysis, robotic process automation (RPA), blockchain, and virtual and augmented reality.
Current use of these technologies ranges from 7% (virtual/augmented reality) to 34% (physical biometrics) of surveyed organisations. Among respondents from organisations not using a particular emerging technology, between 13% (virtual/augmented reality) and 19% (RPA) expect to deploy it within the next one to two years.
Technology, and SAS Cloud/SaaS in particular, is making deployments quicker and easier, with cloud capability making updates and new functions readily available. Digital data such as identity, biometrics and user behaviours are becoming key. Although the volume of data is massive, it is all for the greater good and is helping to make our lives safer.
Data is increasingly at the core of any business or organisation and is a critical raw material for intelligent analytics and the driving force behind digital transformation. Most organisations are dealing with massive amounts of data in various formats, types and across numerous systems. Their challenge is to turn that data into insights that are useful for complex decision making.
Graph technology is increasingly seen as foundational to data management and analytics, empowering user collaboration and fostering data democratisation. With a vast amount of data, organisations need a deeper layer to give them a competitive edge, insights and knowledge. In the new normal, exploiting data and using it confidently for complex, intelligent decision-making is vital. Neo4j knowledge graphs can help by providing an insight layer over available data, enriched with semantics and revealing its complex interconnectedness.
OpenGov Asia had the opportunity to speak with Dr Maya Natarajan, Senior Director, Knowledge Graphs, Neo4j to gain her insights on how organisations should utilise knowledge graphs for complex decision-making.
Maya is responsible for the go-to-market strategy for knowledge graphs at Neo4j. She is passionate about bringing different technologies together to solve complex problems and is championing the use of knowledge graphs to bring context to various systems.
She has positioned technologies from blockchain to predictive and user-based analytics, machine learning, deep learning, search, BPM and beyond in a myriad of industries, including life sciences, financial services, supply chain and manufacturing, at companies small and large. Maya started her career in biotechnology, where she worked in R&D focusing on cardiovascular drugs; she has five patents to her name.
Why Knowledge Graphs are Better than Traditional Data Tools
The obvious place to start would be why should organisations move from traditional data representation and tools to knowledge graphs for complex decision-making.
Data volumes are consistently increasing – from about 40-50 zettabytes in 2019 to around 60 zettabytes in 2020 and approximately 75 zettabytes in 2021. Maya explained that as data volumes grow, organisations need to find new ways to use the massive amount of information to drive business value. Traditional analytics are no longer suitable for complex business operations and analysis.
Tools based on relational databases have existed for over 40 years, and relational databases remain among the most popular query tools across businesses. Traditional analytics are suitable for transactional and straightforward data that fit easily into a relational database’s format of tables and columns.
On the other hand, graph technology focuses on the relationships between data and considers those relationships to be just as significant as the data itself. In an earlier OpenGov Asia article, Nik Vora, Vice President, Asia-Pacific, Neo4j, explains that graph technology is important because it can extract the inherent value in the data itself. The purpose of the technology is to store information without restricting it to a pre-defined model.
Graph technology is the ‘most obvious approach’ for looking at connections, as the value of relationships itself is the underlying driver of the technology. Maya emphasises that relationships among data can be harnessed to find known and unknown patterns that are not identified or analysed through traditional means. What relationships bring to the table is dynamic context for data.
It is important, Maya says, to define a knowledge graph before discussing it. A knowledge graph is an insight layer of interconnected data enriched with semantics, and it gets richer as new data is added. Through the combination of data, graphs and semantics (meaning), organisations get a knowledge graph with deep and dynamic context.
Maya gave an example of the pharmaceutical industry to illustrate how knowledge graphs work. A pharmaceutical company will know how to get drugs in a particular therapeutic area to market – the domain knowledge that the particular pharmaceutical company has in this area is very specific and proprietary.
Knowledge graphs have three components: data, graph and semantics. Relationships are stored along with the data in a graph database, and they are important as they provide the first level of context to data. In a knowledge graph, the pharmaceutical company’s domain knowledge can be viewed as its semantics and is key as it adds a second layer of context to data. Deep dynamic context makes knowledge graphs the top choice of use for cases that require complex decision-making as context is the prerequisite to complex decisions.
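A toy sketch can make the three components concrete: facts stored as triples supply the data and the graph, while a small inference rule stands in for the semantic layer. The drug and disease names, and the single ontology rule, are invented for illustration:

```python
# Data + graph layer: facts as (subject, predicate, object) triples.
facts = [
    ("drug-X", "TREATS", "hypertension"),
    ("hypertension", "IS_A", "cardiovascular disease"),
]

def infer(facts):
    """Semantic layer (toy version): if something TREATS a condition,
    it also TREATS whatever that condition IS_A. Apply the rule until
    no new facts are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for s, p, o in list(derived):
            if p == "TREATS":
                for s2, p2, o2 in list(derived):
                    if s2 == o and p2 == "IS_A" and (s, "TREATS", o2) not in derived:
                        derived.add((s, "TREATS", o2))
                        changed = True
    return derived

print(("drug-X", "TREATS", "cardiovascular disease") in infer(facts))  # True
```

The derived fact was never stated explicitly; it comes from combining the stored relationships with domain semantics, which is exactly the extra layer of context the text describes.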
Industries from supply chain to financial services to life sciences and beyond currently require complex decision-making. Hence, knowledge graphs have become the most popular choice for diverse cases.
Unique Benefits of Knowledge Graphs
Maya believes that knowledge graphs are immensely useful for organisations to solve their business challenges. Specifically, organisations should enhance their toolkit and adopt a Neo4j knowledge graph as it has two distinct benefits that other tools do not possess.
First, Maya reiterates, that semantics is one of the key components and advantages of knowledge graphs. Semantics are encoded alongside the data in the graph itself. This is how knowledge graphs drive intelligence into data and significantly enhance its value. Essentially, knowledge graphs increase the value of data through semantics by adding more context.
The second benefit is that knowledge graphs can make incumbent technologies better by providing better data management, better predictions and better innovations. This is partly because knowledge graphs fuel machine learning and can be adapted well to a variety of use cases.
How Neo4j Tailors Specific Solutions to Different Business Challenges
Knowledge graphs ease the complex process because they add or imbue intelligence to every stage of the data. However, each organisation has different business challenges and context – including its digital strategies, clients and outcomes. This begs the question: how does Neo4j tailor its solution to generate value for each organisation’s unique circumstances?
Maya explains that every organisation is identified by its domain knowledge. Knowledge graphs explicitly take domain information into account in the form of semantics. By utilising knowledge graphs, Neo4j tailors the solution for each organisation according to its domain knowledge.
She illustrates this point with the example of a large global pharmaceutical company – one of Neo4j’s clients – which uses knowledge graphs to analyse patient journeys. A patient journey is the patient’s experience throughout an entire episode of care, from admission to discharge.
The company recognises that no two patient journeys are exactly the same, but it wants to find places where it could improve patient outcomes. Complex diseases develop over years, so the company would like to intervene faster and earlier in the patient journey; it feels it could do this by finding similarities between patients.
Using a combination of a Neo4j knowledge graph, graph algorithms and machine learning, this large pharmaceutical company identified journey archetypes and journey patterns and used those as influential touchpoints to intervene at the earliest moment in a patient journey to make the most impact. In this case, it is a knowledge graph that allowed them to customise this solution.
Below is the visualisation of a single patient and journey through their disease progression.
Every blue dot represents a medical claim, every red dot represents a diagnosis, and every green dot represents a prescription. When laying out this data from left to right, it became real; this data became humanised and patterns emerged. In this example, the green (prescription) dot is followed by another condition or diagnosis, after which the physician pivoted to a new prescription in response to the diagnosis that happened after the first prescription.
These kinds of patterns were exactly what this large pharmaceutical company was trying to understand within the patients, how physicians treated patients and whether their products would help these patients. In many cases, it would yield a better patient outcome. This individual visualisation became an anchoring point; it became a very different way to analyse data. The Neo4j knowledge graph helped facilitate these analyses rather rapidly.
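The prescription-diagnosis-prescription pivot described above can be sketched as a simple scan over an ordered event sequence. The encoding of the journey (C for claim, D for diagnosis, P for prescription) is a hypothetical simplification of what a graph query would operate on:

```python
# A hypothetical single patient journey, ordered in time:
# claims (C), diagnoses (D) and prescriptions (P).
journey = ["C", "D", "P", "D", "P", "C"]

def prescription_pivots(journey):
    """Count the pattern the analysts looked for: a prescription,
    followed by a new diagnosis, followed by a changed prescription."""
    count = 0
    for i in range(len(journey) - 2):
        if journey[i:i + 3] == ["P", "D", "P"]:
            count += 1
    return count

print(prescription_pivots(journey))  # 1
```

In a real knowledge graph this would be a path query over connected claim, diagnosis and prescription nodes rather than a list scan, but the pattern being matched is the same.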
Combining Knowledge Graphs and Artificial Intelligence
Maya agrees that the combination of knowledge graphs and Artificial Intelligence (AI) is a platform on steroids. Companies are increasingly using AI applications for decision-making. Due to a lack of contextual information, AI systems have not been able to achieve their full potential as reliable solutions for complex problems.
This is where knowledge graphs come in. They offer a logical way to capture data relationships and convey their meaning. Knowledge graphs embed intelligence into the data itself and give AI the tools to make sense of it all – to be more explainable, accurate and repeatable. The publication The Future of AI: Machine Learning and Knowledge Graphs is suited to forward-thinking organisations that are keenly aware of the power their data represents and understand that its proper use empowers intelligent decision-making.
Recently, both knowledge graphs and AI have joined forces. The powerful combination of the two has spurred the interest in using both technologies. AI/machine learning benefits from knowledge graphs as knowledge graphs provide context in two different ways: First, knowledge graphs give data context by the addition of semantics. Second, relationships between data provide another level of context.
With knowledge graphs, data scientists get to more data in the form of relationships – by double-dipping on the data they already have and taking advantage of relationship data that they previously tossed out because it was too hard to process. Because it is built on graph technology, a knowledge graph captures relationships for analysis, so not only do data scientists have more data, but they also have more data variety.
“In machine learning, the more data you have, the higher the data quality; the more data variety, the higher the accuracy,” Maya emphasises.
Versatile Use Case applications of Neo4j Knowledge Graphs
NASA uses Neo4j knowledge graphs to solve issues in future space missions. While working on a mission to send Orion, a crewed spacecraft, into space, engineers found that its uprighting system was not working correctly. Knowing that Apollo used a similar uprighting system, they were confident they could use knowledge from the Apollo missions to correct the issue before Orion’s launch.
NASA deployed a knowledge graph to comb through millions of documents, reports, project data, lessons learned, scientific research, medical analysis, geospatial data and much more across departments. By using a Neo4j knowledge graph, they found a way to correct the uprighting system in Orion. Without the knowledge graph, the team would have spent years testing different designs. They saved two years of work and one million dollars of taxpayers’ money.
Standard Chartered Bank in Singapore utilises Neo4j knowledge graphs for risk management to proactively identify cybersecurity risks to protect the bank from cyber threats. As cyberattacks are on the rise, this is an important use case for the bank. Other financial services customers are also utilising knowledge graphs for the same reason.
These are very different projects that utilise Neo4j knowledge graphs. The beauty of knowledge graphs is that they lend themselves well to a range of areas across the data spectrum, from data management to data analytics. Hence, organisations from various industries can adopt Neo4j knowledge graphs to derive actionable insights for complex decision-making.