When hiring, many organisations use artificial intelligence tools to scan resumes and predict job-relevant skills. Colleges and universities use AI to automatically score essays, process transcripts and review extracurricular activities to predict who is likely to be a good student. In response to claims of unfairness and bias in tools used in hiring, college admissions, predictive policing, health interventions, and more, the University of Minnesota recently developed a new set of auditing guidelines for AI tools.
The auditing guidelines, published in the American Psychologist, were developed by Richard Landers, associate professor of psychology at the University of Minnesota, and Tara Behrend from Purdue University. They apply a century’s worth of research and professional standards on measuring personal characteristics, developed by psychology and education researchers, to ensuring the fairness of AI.
The researchers developed guidelines for AI auditing by first considering the ideas of fairness and bias through three major lenses:
- How individuals decide if a decision was fair and unbiased
- How societal, legal, ethical and moral standards present fairness and bias
- How individual technical domains—like computer science, statistics and psychology—define fairness and bias internally
Using these lenses, the researchers presented psychological audits as a standardised approach for evaluating the fairness and bias of AI systems that make predictions about humans across high-stakes application areas, such as hiring and college admissions.
The auditing framework has twelve components across three categories:
- Components related to the creation of, processing done by and predictions created by the AI
- Components related to how the AI is used, who its decisions affect and why
- Components related to overarching challenges: the cultural context in which the AI is used, respect for the people affected by it and the scientific integrity of the research used by AI purveyors to support their claims
The researchers recommend that the standards they developed be followed both by internal auditors during the development of high-stakes predictive AI technologies, and afterwards by independent external auditors. Any system that claims to make meaningful recommendations about how people should be treated should be evaluated within this framework.
Industrial psychologists have unique expertise in the evaluation of high-stakes assessments. The researchers’ goal was to educate the developers and users of AI-based assessments about existing requirements for fairness and effectiveness, and to guide the development of future policies that will protect workers and applicants.
AI models are developing so rapidly that it can be difficult to keep up with the most appropriate way to audit a particular kind of AI system. The researchers hope to develop more precise standards for specific use cases, partner with other organisations globally that are interested in establishing auditing as a default approach in these situations, and work toward a better future with AI more broadly.
As reported by OpenGov Asia, creating smarter, more accurate systems requires a hybrid human-machine approach, according to researchers at the University of California, Irvine. In a study published this month in Proceedings of the National Academy of Sciences, they present a new mathematical model that can improve performance by combining human and algorithmic predictions and confidence scores.
To test the framework, researchers conducted an image classification experiment in which human participants and computer algorithms worked separately to correctly identify distorted pictures of animals and everyday items—chairs, bottles, bicycles, trucks. The human participants ranked their confidence in the accuracy of each image identification as low, medium or high, while the machine classifier generated a continuous score. The results showed large differences in confidence between humans and AI algorithms across images.
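One way such a human–machine combination can work is sketched below. This is an illustrative toy fusion, not the paper’s actual model: the mapping from the human’s coarse confidence rating to a probability mass, and the class scores, are assumed values for demonstration.

```python
# Toy sketch of fusing a human's discrete pick (with a coarse confidence
# rating) and a classifier's continuous scores. All weights are assumed
# illustration values, not the published model.

def combine_predictions(human_label, human_confidence, machine_scores):
    """Blend a human's chosen class with a classifier's score vector.

    human_label: class name the human chose
    human_confidence: 'low' | 'medium' | 'high'
    machine_scores: dict mapping class name -> score in [0, 1]
    """
    # Map the coarse rating to a probability mass (assumed values).
    human_weight = {'low': 0.55, 'medium': 0.75, 'high': 0.95}[human_confidence]
    # Spread the human's remaining mass over the other classes.
    other = (1.0 - human_weight) / (len(machine_scores) - 1)
    combined = {}
    for c in machine_scores:
        human_p = human_weight if c == human_label else other
        # Product-of-experts style fusion, renormalised below.
        combined[c] = human_p * machine_scores[c]
    total = sum(combined.values())
    return {c: v / total for c, v in combined.items()}

scores = {'chair': 0.2, 'bottle': 0.5, 'bicycle': 0.3}
fused = combine_predictions('bicycle', 'high', scores)
```

Under this scheme, a highly confident human can override a classifier that mildly prefers another class, while a low-confidence human mostly defers to the machine’s scores.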
This interdisciplinary project was facilitated by the Irvine Initiative in AI, Law, and Society. The convergence of cognitive sciences—which are focused on understanding how humans think and behave—with computer science—in which technologies are produced—will provide further insight into how humans and machines can collaborate to build more accurate artificially intelligent systems.
The Counter Ransomware Task Force (CRTF), which was formed to bring together Singapore Government agencies from various domains to strengthen Singapore’s counter-ransomware efforts, has issued its report.
Singapore’s efforts to promote a resilient and secure cyber environment, both domestically and internationally, to combat the rising ransomware threat are guided by the recommendations in the CRTF report.
According to David Koh, Commissioner of Cybersecurity, Chief Executive of CSA and Chairman of the CRTF, ransomware poses a threat to both businesses and individuals. It can be detrimental economically, socially and even in terms of national security. Ransomware is a problem that cuts across borders and domains.
“It requires us to collaborate and draw on our knowledge in a variety of fields, including cybersecurity, law enforcement, and financial supervision. It also necessitates that we work with like-minded international partners to identify a common problem and develop solutions,” David explains.
He exhorts businesses and individuals to contribute as well, strengthening the nation’s overall defence against the ransomware scourge.
Cybercriminals use malicious software known as ransomware. When ransomware infects a computer or network, it either locks the system or encrypts the data on it. For the release of the data, cybercriminals demand ransom money from their victims.
A vigilant eye and security software are advised to prevent ransomware infection. Following an infection, malware victims have three options: pay the ransom, attempt to remove the malware, or restart the device.
Extortion Trojans frequently employ the Remote Desktop Protocol, phishing emails, and software vulnerabilities as their attack vectors. Therefore, a ransomware attack can target both people and businesses.
The ransomware threat has significantly increased in scope and effect, and it is now a pressing issue for nations all over the world, including Singapore.
The fact that attackers operate internationally to elude justice makes it a global issue. Ransomware has created a criminal ecosystem that offers criminal services ranging from unauthorised access to targeted networks to money laundering services, all fed by illicit financial gains.
Singapore must approach the ransomware issue as a cross-border and cross-domain problem if it is to effectively combat the ransomware threat.
Other nations should adopt comparable domestic measures to coordinate their financial regulatory, law enforcement, and cybersecurity agencies to combat the ransomware issue and promote international cooperation.
The CRTF’s work culminated in three significant results. First, it developed a comprehensive understanding of the ransomware kill chain so that government agencies can collaborate to create anti-ransomware solutions.
Second, it examined Singapore’s stance on paying ransom to cybercriminals. Third, the CRTF suggested policies, operational plans, and capabilities for the government to effectively combat ransomware, under four main pillars:
Pillar 1: Enhance the security of potential targets (such as government institutions, critical infrastructure, and commercial organisations, especially small and medium-sized businesses) to make it more difficult for ransomware attackers to carry out successful attacks.
Pillar 2: Disrupt the ransomware business model to lower the reward for ransomware attacks.
Pillar 3: Support recovery so that victims of ransomware attacks do not feel pressured to pay the ransom, which feeds the ransomware industry.
Pillar 4: Assemble a coordinated international strategy to combat ransomware by cooperating with international partners. Singapore should concentrate on and support efforts to promote international cooperation in three areas that have been identified by the CRTF: law enforcement, anti-money laundering measures, and discouraging ransom payments.
The appropriate government agencies will take the recommendations of the CRTF under consideration for additional research and action.
An international team led by The Chinese University of Hong Kong (CUHK)’s Faculty of Medicine (CU Medicine) has successfully developed the world’s first artificial intelligence (AI) model that can detect Alzheimer’s disease solely through fundus photographs or images of the retina. The model is more than 80% accurate after validation.
Fundus photography is widely accessible, non-invasive and cost-effective. This means that the AI model incorporated with fundus photography is expected to become an important tool for screening people at high risk of Alzheimer’s disease in the community. Details have been published in The Lancet Digital Health, a journal in The Lancet family.
Limitations of current Alzheimer’s disease detection methods
In Hong Kong, 1 in 10 people aged 70 or above suffers from dementia, with more than half of those cases attributed to Alzheimer’s disease. This disease is associated with an excessive accumulation of abnormal amyloid plaque and neurofibrillary tangles in the brain, leading to the death of brain cells and resulting in progressive cognitive decline.
The Clinical Professional Consultant of the Division of Neurology in CU Medicine’s Department of Medicine and Therapeutics stated that memory complaints are common among middle-aged and elderly people, and are often considered a sign of Alzheimer’s disease.
It is sometimes difficult to make an accurate diagnosis of Alzheimer’s disease based on cognitive tests and structural brain imaging. However, methods to detect Alzheimer’s pathology, such as an amyloid-PET scan or testing of cerebrospinal fluid collected via lumbar puncture, are invasive and less accessible.
To address the current clinical gap, CU Medicine has led several medical centres and institutions from Singapore, the United Kingdom and the United States to successfully develop an AI model using state-of-the-art technologies which can detect Alzheimer’s disease using fundus photographs alone.
Studying disorders of the central nervous system via the retina
The S.H. Ho Professor of Ophthalmology and Visual Sciences and Chairman of CU Medicine’s Department of Ophthalmology and Visual Sciences explained that the retina is an extension of the brain in terms of embryology, anatomy and physiology. In the entire central nervous system, only the blood vessels and nerves in the retina allow direct visualisation and analysis.
Thus, it is widely considered a window through which disorders in the central nervous system can be studied. Through non-invasive fundus photography, a range of changes in the blood vessels and nerves of the retina that are associated with Alzheimer’s disease can be detected.
The team developed and validated their AI model using nearly 13,000 fundus photographs from 648 Alzheimer’s disease patients (including patients from the Prince of Wales Hospital) and 3,240 cognitively normal subjects. Upon validation, the model showed 84% accuracy, 93% sensitivity and 82% specificity in detecting Alzheimer’s disease. In the multi-ethnic, multi-country datasets, the AI model achieved accuracies ranging from 80% to 92%.
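The reported figures can be related to a standard confusion matrix as follows. The counts in this sketch are hypothetical, chosen only to mirror the published sensitivity and specificity; they are not the study’s data.

```python
# How accuracy, sensitivity and specificity derive from a confusion matrix.
# The counts below are illustrative, not the study's actual data.

def classification_metrics(tp, fn, tn, fp):
    """Compute standard binary-classification metrics.

    tp/fn: Alzheimer's cases correctly / incorrectly classified
    tn/fp: cognitively normal subjects correctly / incorrectly classified
    """
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    sensitivity = tp / (tp + fn)   # true positive rate: cases caught
    specificity = tn / (tn + fp)   # true negative rate: healthy cleared
    return accuracy, sensitivity, specificity

# Hypothetical counts chosen to echo the reported 93% sensitivity
# and 82% specificity:
acc, sens, spec = classification_metrics(tp=93, fn=7, tn=82, fp=18)
```

High sensitivity matters most for a screening tool: it is the fraction of true Alzheimer’s cases the model catches, while specificity governs how many healthy people are needlessly referred for follow-up testing.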
The accessibility, non-invasiveness and high cost-effectiveness of the AI model using fundus photography help detect Alzheimer’s cases both in the clinic and the community
A Professor of Medicine and Director of the Therese Pei Fong Chow Research Centre for Prevention of Dementia at CU Medicine stated that in addition to its accessibility and non-invasiveness, the accuracy of the new AI model is comparable to imaging tests such as magnetic resonance imaging (MRI).
It shows the potential to become not only a diagnostic test in clinics but also a screening tool for Alzheimer’s disease in community settings. Looking ahead, the team aims to validate its efficacy in identifying high-risk cases of the disease hidden in the community, so that various preventive treatments such as anti-amyloid drugs can be initiated early to slow down cognitive decline and brain damage.
The Associate Professor in the Department of Ophthalmology and Visual Sciences at CU Medicine said that in addition to applying novel AI technologies in the model, the team also tested it in different scenarios. Notably, their AI model retained a robust ability to differentiate between subjects with and without Alzheimer’s disease, even in the presence of concomitant eye diseases like macular degeneration and glaucoma which are common in city-dwellers and the older population.
Their results further support the hypothesis that the team’s AI analysis of fundus photographs is an excellent tool for the detection of memory-depriving Alzheimer’s disease. To move this research towards clinical application, the team is developing an integrated, AI-based platform to combine information from both blood vessels and nerves of the retina captured by fundus photography and optical coherence tomography for the detection of Alzheimer’s disease. Their findings should provide more evidence to move AI from code to the real world.
The Ministry of Information and Communications (MIC) announced it would roll out Internet advertising management measures at a conference in Hanoi earlier this week. Participants at the event discussed how advertising in cyberspace has become the norm. Domestic and foreign firms choose it because it is easier to access customers and it offers flexible costs and larger reach. However, the limited management of ads poses potential risks to the safety of brands, the Ministry has said.
According to a press release by MIC, ad agents affirmed that without the cooperation of cross-border platforms in modifying algorithms to filter and censor content, ad violations will remain rampant. The Ministry will penalise agents and brands that cooperate with platforms that do not fall in line with MIC regulations. On the other hand, the Ministry will support ads on domestic and foreign digital platforms that comply with domestic laws, MIC’s Deputy Minister, Nguyen Thanh Lam, noted. This will protect brands and build a healthy, safe, and fair ad business environment.
The Ministry will also increase inspection and clampdown on violations of Internet ads activities, he said. Cross-border ad firms that fail to comply with Vietnam’s laws will not be allowed to operate in the country. MIC has also generated a Whitelist consisting of licensed e-newspapers, magazines, general information websites, and social media. Other websites, registered accounts, and information channels are also in the pipeline for the list, the release said. The list will be publicised on the portals of the Ministry and Authority of Broadcasting and Electronic Information. Ad service providers, agents, and brands were also urged to use the list for their work.
Nearly 80% of the population in Vietnam are digital consumers, as OpenGov Asia reported earlier in October. Over the past year, the average contribution of e-commerce to total retail has continued to grow at 15%, higher than growth in India (10%) and China (4%), with an online-to-total retail share of 6%. Now that the world is in the post-pandemic stage, regional consumers are prioritising an integrated shopping experience, combining online and in-person services. During the ‘discovery’ phase of their shopping, 84% of Vietnamese shoppers use the Internet to browse and find items. This is a period when they use more platforms than ever before, with the e-commerce market dominating at 51% of online spending.
At the same time, social networking sites account for nearly half of online discoveries, including images (16%), social media videos (22%), and related tools such as messaging (9%). These tools were paramount channels for 44% of survey respondents. Consumers’ openness to interaction and experimentation has also led to behavioural changes, with 64% of respondents saying they have interacted with a business account in the past year. As customers seek more engagement, the content creation economy is able to grow exponentially.
In the context of digital consumption, Vietnamese users switch brands more often and increase the number of platforms they use to find a better value, with 22% of online orders made on various e-commerce platforms. The number of online platforms Vietnamese consumers use has doubled from 8 in 2021 to 16 in 2022. Therefore, it is important to put in place proper ad regulations as Internet usage grows.
The Indonesian government disclosed four potential uses of Big Data and AI to improve its e-government programmes. These two technologies, it believes, have the potential to support disaster identification and preventive action, prevent illegal activities and cyber-attacks, and increase workforce effectiveness.
The Director General of Informatics Applications, Semuel A. Pangerapan, explained several scenarios for Big Data. According to him, the government can use Big Data to improve critical event management and the quality of the response by identifying problem points through Big Data Analytics. For example, agencies can be better prepared to prevent and mitigate natural disasters such as droughts, epidemics or massive accidents.
In addition, Big Data can also enhance the government’s ability to prevent money laundering and fraud through better surveillance to detect such illegal activities.
Furthermore, Big Data significantly reduces the possibility of cyber-attacks. Such attacks can originate from external parties or from within an organisation, and data can leak for a variety of reasons. Analysing patterns and unusual activities can help prevent or manage such cyber issues.
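A minimal example of the kind of pattern analysis described above is to flag activity counts that deviate sharply from a historical baseline using a z-score threshold. The data, metric (logins per hour) and threshold here are illustrative assumptions, not a government system’s actual rules.

```python
# Simple z-score anomaly detection over an activity metric.
# Data and threshold are made-up illustration values.

from statistics import mean, stdev

def flag_anomalies(history, recent, threshold=3.0):
    """Return indices in `recent` whose values lie more than `threshold`
    sample standard deviations from the historical mean."""
    mu = mean(history)
    sigma = stdev(history)
    return [i for i, x in enumerate(recent)
            if sigma > 0 and abs(x - mu) / sigma > threshold]

# Hypothetical baseline of hourly login counts, then a window with a spike:
logins_per_hour = [102, 98, 110, 95, 105, 99, 101, 97]
today = [100, 103, 480, 99]   # the spike at index 2 should be flagged
```

Production systems use far richer models, but the principle is the same: learn what normal activity looks like, then surface deviations for human review.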
Big Data and analytics can contribute to workforce effectiveness by increasing monitoring. In addition, it can be used for policy design, decision-making and gaining insights.
Semuel stressed the importance of analysing data after collecting it in the right fashion: data will only provide benefits if it is collected correctly and processed in the right way. “In its implementation, AI helps analyse existing Big Data, providing data understanding or insight to help make decisions,” he explained.
Another advantage of AI is the ability to speed up new implementation services and corrections in real-time. At the evaluation stage, AI can also provide suggestions for adjustments and improvements to subsequent policies.
Currently, the government encourages the improvement of the quality of Big Data and AI innovation through the development of e-government. The Indonesian government is also open to third parties to accelerate Big Data and AI use.
E-government has made progress in recent years and received appreciation from the United Nations in 2020. The UN said that Indonesia’s e-government development index rose to rank 88, from 107 in 2018. Indonesia’s e-participation index has also risen, from rank 92 in 2018 to 57 in 2022.
“The two rankings show an increase in the quality of Indonesia’s e-government and the level of community activity in using e-government services,” said Semuel.
However, the government faces challenges in implementing these two technologies. Overlapping and duplicated data are among the main problems. “Regulatory obstacles in the procurement of government Big Data infrastructure also need to be overcome. Then compliance with international standards for the national Big Data ecosystem is also still the government’s homework.”
To optimise AI use, Semuel emphasised the need for a skilled workforce, regulations governing the ethics of using AI, infrastructure, and industrial and public sector adoption of AI innovations.
The government is implementing several solutions to overcome challenges. First, they have provided suitable facilities in the form of National Data Centres (NDCs) in four separate locations. The NDCs will accommodate Government Cloud and contain national data across sectors.
Optimisation of data centre utilisation needs to be supported by staff with qualified expertise. For this reason, the government is holding digital skills training on AI and Big Data through the Digital Talent Scholarship (DTS) and Digital Leadership Academy (DLA) programs.
Apart from facilities and upskilling, Indonesia is looking to develop a business ecosystem that utilises AI and Big Data. Support for this comes from the National Movement of 1000 Digital Startups, Startup Studio Indonesia (SSI) and HUB.ID.
Thailand’s Electronic Transactions Development Agency (ETDA) has recently launched the AI Governance Clinic (AIGC) which will serve as a source of Thai and overseas knowledge and expertise on governance related to artificial intelligence (AI) and its adoption.
ETDA is joining forces with the nation’s National Electronics and Computer Technology Centre (NECTEC), the Ministry of Public Health’s Department of Medical Services, and the Department of Health Service Support. A memorandum of understanding (MoU) between ETDA and the three partners was signed during the nation’s “Building Trust and Partnership in AI Governance” event.
AI is currently having a significant impact on almost every aspect of people’s lives, including work, business, education, finance, health, and electronic transactions, according to ETDA Executive Director Dr Chaichana Mitrpant. “These issues all involve the application of AI.”
A six-year national AI implementation plan for national development between 2022 and 2027 was recently approved by the Cabinet. The adoption of AI with governance along with pertinent laws and regulations is one strategy outlined in the plan for ensuring that users understand social responsibility.
Thailand is getting ready to adopt AI, another cutting-edge technology that is gaining popularity and relevance. ETDA is an organisation that supports a secure and reliable ecosystem for electronic transactions.
To achieve the objectives outlined in the implementation plan, the agency is collaborating with NECTEC. Their important joint projects include a study of Thailand’s AI standard landscape to develop AI adoption measures, and a study of measures to assess AI-based computer programmes, aimed at increasing the capacity of Thai entrepreneurs in all industries in accordance with international standards.
To create a framework for AI governance regarding electronic transactions that are in line with Thailand’s context and international standards, ETDA and its partners – both in Thailand and abroad – established the Clinic.
The Clinic is collaborating with the Academy of Digital Transformation by ETDA to provide resources for capacity development at all levels. Additionally, the AIGC has a substantial library of knowledge sources on pertinent topics, as well as experts from numerous nations who are prepared to provide guidance on AI policies and governance.
An additional Memorandum of Understanding (MoU) was signed by ETDA and its partners NECTEC, the Department of Medical Services, and the Department of Health Service Support for the joint development of an AI governance framework that is appropriate for the Thai context for the country’s healthcare industry.
The collaboration aims to advance the sharing of innovation and AI technology knowledge among the participating agencies and to inform pertinent agencies about AI governance. Thailand’s AI strategy was inspired by a desire to boost the nation’s economy and the quality of life for its people as well as a competitive spirit.
Thailand strives to develop the human capacity and skills required for an AI ecosystem despite the difficulties it faces in developing AI capabilities. They created a formal network and consortium as a result. Thailand will train future AI professionals through structured academic programmes in Thai universities, in addition to bridging the gap between existing academic and industrial experts.
ETDA is the primary agency responsible for developing, promoting, and supporting electronic transactions, and it is part of the Ministry of Information and Communication Technology. Its primary responsibility is to research, study, and support the operation of the Electronic Transaction Committee and other related agencies; hence, it contributes to the development and promotion of Thailand’s electronic transactions.
Researchers at the University of South Australia are trialling a simple finger prick technology that could soon be all it takes to save the lives of pregnant mothers and their babies who are at risk of a dangerous pregnancy complication known as preeclampsia.
Preeclampsia affects four per cent of all pregnancies in Australia and can cause organ failure, blood clotting, restricted foetal growth and be life-threatening for the mum and baby. However, current diagnosis methods are complex and can take up to 24 hours in rural areas – time that is critical when dealing with the health of an unborn baby.
In a move which could revolutionise the diagnosis and care of the condition, scientists from UniSA’s Future Industries Institute have developed new technology which requires only a few drops of blood to test for preeclampsia – and the result returned within 30 minutes. This means the test can be done quickly and accurately in a rural setting by a primary healthcare team, without the need to send it to an advanced laboratory.
The Hospital Research Foundation Group is now funding the testing stage of their device, in the hope that earlier and more accurate diagnosis can improve prenatal care and save lives.
Chief investigators Dr Duy Phu Tran and Professor Benjamin Thierry said the device would be most critical in regional settings where emergency care is limited, with preeclampsia one of the main reasons for emergency retrievals by the Royal Flying Doctors Service.
The technology is designed to enable rapid and accurate point-of-care testing for preeclampsia biomarkers, allowing for quicker interventions and likely improved pregnancy outcomes for women living in rural Australia.
The current tests for preeclampsia in primary care involve a combination of blood pressure measurements, urinalysis and/or biochemical and haematological testing. Blood biomarkers have recently been identified but testing can only be carried out in large laboratories, for example, at the Women’s & Children’s Hospital.
Meanwhile, preeclampsia can progress very quickly – in some cases within hours – and have catastrophic consequences. The researchers hope the device can bridge this gap in care for rural women.
An AU$132,000 grant from The Hospital Research Foundation Group will help accelerate the testing and commercialisation of the device through a trial of at-risk pregnant women in hospitals. If validated, the trial will then extend to mothers seen by the Royal Flying Doctors Service retrieval team, with the hope to then expand even wider with more funding. The 3D-printed device is being manufactured locally by the South Australian node of the Australian National Fabrication Facility at UniSA’s Mawson Lakes campus.
The Director of the National Emergency Response and the Public Health Research Unit at the Royal Flying Doctors Service said if the trial was successful, the technology would have a significant impact on prenatal care in regional areas. He noted that preeclampsia is a substantial risk during pregnancy, which is exacerbated in remote communities, where diagnosis and subsequent treatment can take significantly longer than in major city areas.
The service conducts roughly 750 retrievals associated with pregnancy per year, with pre-term labour, premature rupture of membranes and preeclampsia being some of the leading transfer reasons.
The Royal Flying Doctor Service has seen, first-hand, the impacts of poorly managed preeclampsia and this technology is an exciting and much-needed step forward in improving rural and remote patient outcomes and in closing the gap, he added.
Meanwhile, the CEO of The Hospital Research Foundation Group, said the organisation was pleased to be advancing testing of the device. He noted that women’s health and bridging the gap between country and city care are important healthcare needs. The revolutionary technology, if validated, will be an exciting development to give all mums and bubs the very best start in life, he added.
The Cyberspace Administration of China (CAC) announced the implementation of a new certification for personal information protection. The office decided to implement the certification to enhance its information protection capabilities and to promote the rational processing of personal information.
The certification implementation follows the Personal Information Protection Certification Implementation Rules. The implementation rules clarify that personal information processors must comply with the requirements of GB/T 35273 Information Security Technology Personal Information Security Specifications. The rules outline requirements for on-site audits, the evaluation and approval of certification results, post-certification supervision and certification time limits.
Organisations engaged in personal information protection certification work need approvals to carry out activities. The regulation applies to every personal information processor that carries out private information collection, storage, use, processing, transmission, provision, disclosure, deletion and cross-border processing activities.
The State Administration for Market Regulation and the State Internet Information Office decided to implement personal information protection certification. The step follows provisions of the Personal Information Protection Law of the People’s Republic of China (‘PIPL’). For cross-border personal information processing, the body requires compliance with the Specifications for Security Certification of Cross-Border Processing of Personal Information.
The latest versions of the standards include technical verification, on-site audit, and post-certification supervision. In addition, the certification body shall clarify the requirements for certification entrustment materials, including but not limited to the basic materials of the certification client, the certification power of attorney, and relevant certification documents.
To get certified, an organisation must submit certification entrustment materials according to the certification body’s requirements and the certification body shall give timely feedback on whether it is accepted after reviewing the materials.
The materials are then used for determining the certification plan, including the type and quantity of personal information, the scope of personal information processing activities, information on technical verification institutions, etc., before notifying the organisation seeking certification.
The CAC stated the certification is valid for three years. An organisation must submit a new certification entrustment within six months before the expiration of the validity period. The certification body shall adopt the method of post-certification supervision and reissue new certificates to those that meet the certification requirements.
If the certification client or personal information processor engages in violations, cheating or other behaviour that seriously affects the implementation of the certification, the certificate will be cancelled. Certification bodies shall therefore adopt appropriate methods of post-certification supervision to ensure that certified personal information processors continue to meet certification requirements. The certification body comprehensively evaluates the post-certification surveillance conclusions and other relevant information; if the evaluation is passed, the certification certificate can be maintained.
The organisation shall actively cooperate with the certification activities. During the validity period of the certificate, if the name and registered address of the certified personal information processor, or the certification requirements, certification scope, etc., change, the certification principal shall submit a change entrustment to the certification body.
When changes happen, the certification body must evaluate the change in entrustment materials. The result will determine whether the body can approve the change. If technical verification or on-site audit is required, the body shall conduct technical and on-site audits before the change is approved.
When a certified personal information processor no longer meets the certification requirements, the certification body will promptly suspend or revoke the certification certificate. The certification principal can apply for the suspension or cancellation of the certification certificate within its validity period.