When individuals engage in social interactions with others, they experience a range of emotions, and they consciously try to anticipate, or avoid provoking, the emotional responses of others based on the words spoken or actions taken. Referred to as theory of mind, this ability enables people to infer the thoughts, wishes, objectives, and feelings of those around them.
MIT neuroscientists have developed a computational model that forecasts a range of emotions in individuals, including joy, gratitude, confusion, regret, and embarrassment. The model closely mimics the social intelligence exhibited by human observers.
It was specifically designed to anticipate the emotional responses of people playing the prisoner's dilemma, a classic game theory scenario in which two players must decide whether to cooperate with their partner or betray them.
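The payoff structure of the prisoner's dilemma can be sketched in a few lines. The numeric payoffs below are the conventional illustrative values from game theory, not figures from the MIT study:

```python
# Conventional prisoner's dilemma payoffs (illustrative values):
# mutual cooperation beats mutual betrayal, but unilateral betrayal
# pays best of all, which is what creates the dilemma.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # both share the reward
    ("cooperate", "defect"):    (0, 5),  # the defector exploits the cooperator
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # mutual betrayal
}

def outcome(action_a, action_b):
    """Return (payoff_a, payoff_b) for one round."""
    return PAYOFFS[(action_a, action_b)]

print(outcome("cooperate", "defect"))  # (0, 5)
```

The gap between what a player hoped for and what this table actually delivers is exactly the kind of signal the researchers' model uses to predict emotions such as regret or relief.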
The construction of the model involved integrating various factors that are believed to impact an individual’s emotional responses. These factors encompassed the person’s desires, expectations in each situation, and whether their actions were being observed. By considering these elements, the researchers aimed to create a comprehensive framework that could capture the complexities of human emotional reactions.
By incorporating these factors, the computational model developed by the researchers aimed to approximate how individuals might express emotions in different contexts. This computational modelling advancement brings humanity closer to unravelling the mysteries of human emotions and enhances the understanding of how individuals perceive and respond to various situations.
Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT's McGovern Institute for Brain Research, and the study's senior author, noted that although extensive research has focused on training computer models to infer a person's emotional state from facial expressions, that is not the most crucial element of human emotional intelligence. Far more important is the ability to anticipate someone's emotional reaction to events before they occur.
To simulate the prediction-making process of human observers, the researchers utilised scenarios taken from a British game show named “Golden Balls.” Depending on the game’s outcome, contestants may experience various emotional states, such as joy and relief when both contestants choose to share the winnings, surprise and anger if one contestant steals the pot, or a mix of guilt and excitement when successfully stealing the winnings.
The researchers devised three distinct modules to develop a computational model capable of predicting these emotions. The first module was trained to infer a person’s preferences and beliefs by analysing their actions, employing a technique known as inverse planning.
The second module compares the game's outcome against each player's desired and anticipated outcomes. The third module then combines this information with the contestants' expectations to forecast the emotions they might be experiencing.
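The three-stage structure described above can be illustrated with a toy pipeline. This is a hypothetical sketch of the architecture only; the function bodies are placeholder heuristics, not the study's actual inverse-planning or appraisal models:

```python
def infer_preferences(observed_actions):
    """Module 1 (inverse planning, toy version): infer preferences from
    behaviour. Here, frequent cooperation implies a prosocial preference
    and an expectation of sharing."""
    coop_rate = observed_actions.count("cooperate") / len(observed_actions)
    return {"prosocial": coop_rate, "expected_share": coop_rate}

def compare_outcome(outcome_value, preferences):
    """Module 2: compare the actual outcome with desired and expected
    outcomes (outcome_value in [0, 1], where 1.0 is the full pot)."""
    return {
        "vs_desired": outcome_value - 1.0,  # everyone desires the full share
        "vs_expected": outcome_value - preferences["expected_share"],
    }

def predict_emotions(appraisal):
    """Module 3: map appraisals to emotion intensities (placeholder rules)."""
    return {
        "joy": max(0.0, appraisal["vs_expected"]),
        "disappointment": max(0.0, -appraisal["vs_expected"]),
        "regret": max(0.0, -appraisal["vs_desired"]),
    }

prefs = infer_preferences(["cooperate", "cooperate", "defect"])
appraisal = compare_outcome(0.5, prefs)  # player ends up with half the pot
print(predict_emotions(appraisal))
```

The point of the decomposition is that emotions are predicted not from the outcome alone but from its relation to inferred desires and expectations, which is why module 1 must run first.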
Once the three modules were in place, the researchers ran them on a new dataset from the game show to evaluate how accurately the model's emotion predictions matched those made by human observers. The model significantly outperformed every previous model designed for emotion prediction.
In future work, the researchers plan to extend the model's predictive performance to a wider range of scenarios.
The prospective economic, social, and technological benefits of transforming Singapore into an open and trustworthy global artificial intelligence (AI) hub are substantial. It can place the nation at the vanguard of AI innovation and enable it to shape the future of this transformative technology.
The Ministry of Communications and Information (MCI) and a major technology firm announced their intention to work together to strengthen Singapore’s AI national vision and strategy. This strategic partnership may support the adoption and development of innovative, responsible, and inclusive AI technologies to maximise opportunities arising in Singapore and the region.
Director of the Digital Economy Office at MCI, Andrea Phua, stated that they welcome the opportunity to collaborate with the tech giant as they develop their plans to support the growth of the digital economy and realise the benefits that AI brings to individuals and businesses in a safe and responsible manner.
Through the partnership, Singapore's technology ecosystem gains access to next-generation AI infrastructure, industry-leading GPU hardware, the Vertex AI platform, and AI-managed services and tools to implement AI at scale.
The partnership will seek to:
- Accelerate the development of home-grown AI technologies: A marketplace for developers and businesses to access the best of AI solutions and foundation models, allowing them to build conversational AI, enterprise search, and other capabilities;
- Build a sustainable pipeline of talent for the future AI economy: Skill-building initiatives to strengthen AI capabilities and competencies, including possible assistance for eligible startups to leverage an open AI ecosystem;
- Supercharge the adoption of cloud AI technologies in Singapore: Development of incubators and accelerators that encourage developers, entrepreneurs, and companies to innovate with generative AI (Gen AI) technologies; and
- Root Singapore’s AI progress in Responsible AI: Possible collaboration in AI governance and Responsible AI principles implementation.
By becoming a global AI centre, Singapore can attract world-class talent, researchers, and businesses. This promotes collaboration and the exchange of knowledge, resulting in innovation and the creation of cutting-edge AI technologies.
Several industries, including healthcare, finance, transportation, and manufacturing, will be transformed by AI. By positioning itself as a global AI hub, Singapore can attract investments, foster local startups, and generate high-paying employment, thereby fostering economic growth and prosperity.
Singapore has the potential to become a centre for AI education and talent development. By providing high-quality training programmes, seminars, and research opportunities, the nation can produce a workforce with AI expertise. This can satisfy the increasing demand for AI professionals and alleviate the talent shortage in this field.
Singapore, as a global AI centre, can serve as a testing ground for AI-based solutions and applications. The nation’s well-developed infrastructure, supportive regulatory environment, and diverse population make it an ideal location for the deployment and development of AI technologies. This enables businesses to validate their products, gain real-world insights, and iterate their solutions.
Through initiatives such as the Model AI Governance Framework, Singapore has demonstrated a commitment to ethics and trust in AI. Singapore can influence and define international standards for responsible AI development and deployment if it continues to develop as a global AI hub. This contributes to the development of AI technologies that respect privacy, impartiality, and transparency.
Singapore, as an open and trusted global AI centre, has the potential to become a regional leader in AI. This can entice regional enterprises and organisations to cooperate with Singaporean partners, resulting in a thriving Southeast Asian AI ecosystem. Singapore's AI leadership may also help drive regional initiatives, boost information sharing, and improve the region's overall capabilities.
Machine-learning models are utilised in the real world to assist radiologists in identifying potential diseases in X-rays; however, these models are intricate and their prediction process remains elusive even to their creators. To address this, researchers employ saliency methods, techniques that seek to offer insights into the model’s behaviour and elucidate its decision-making procedure.
Researchers from the Massachusetts Institute of Technology (MIT) and a multinational technology company have collaboratively developed a tool with a new method to assist users in selecting the most suitable saliency method for their specific requirements. They introduced saliency cards: standardised documentation summarising how a particular saliency method operates, including its strengths, weaknesses, and guidance to help users correctly interpret its outputs.
The Co-lead Author, Angie Boggust, a graduate student in electrical engineering and computer science at MIT and a member of the Visualization Group of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), expresses the team’s aspiration that users equipped with this knowledge will be able to consciously select a suitable saliency method based on the specific machine-learning model being employed and the task it aims to accomplish.
Boggust explains that saliency cards are purposefully crafted to provide a concise and easily understandable overview of a saliency method while highlighting the essential attributes most relevant to human users. These cards are intended to be accessible to a wide range of individuals, including machine-learning researchers and even those unfamiliar with the field and seeking guidance in selecting a saliency method for the first time.
Choosing the "wrong" saliency method can have serious consequences. For instance, one saliency method, known as integrated gradients, compares the importance of features in an image against an uninformative reference point, or baseline. Features with the largest attributions relative to this baseline are considered the most meaningful for the model's prediction. If an unsuitable saliency method is chosen, it can lead to incorrect or misleading interpretations of the model's behaviour and predictions, so selecting a method appropriate for the specific task requirements is crucial.
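To make the baseline idea concrete, here is a minimal sketch of the general integrated gradients technique on a toy differentiable model (this illustrates the published method, not the MIT tool; the model and input values are invented for the example):

```python
import numpy as np

def model(x):
    # Toy differentiable model: a weighted quadratic score.
    w = np.array([1.0, 2.0, 0.5])
    return float(np.sum(w * x ** 2))

def numerical_grad(f, x, eps=1e-5):
    # Central finite differences, so the sketch works for any scalar f.
    grad = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        grad[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return grad

def integrated_gradients(f, x, baseline, steps=50):
    # Average the gradient along the straight-line path from the baseline
    # to the input, then scale by (x - baseline): one attribution per feature.
    alphas = np.linspace(0.0, 1.0, steps)
    avg_grad = np.mean(
        [numerical_grad(f, baseline + a * (x - baseline)) for a in alphas],
        axis=0,
    )
    return (x - baseline) * avg_grad

x = np.array([1.0, 1.0, 2.0])
attributions = integrated_gradients(model, x, baseline=np.zeros(3))
print(attributions)  # larger attribution = more important to the prediction
```

A useful sanity check is the completeness property: the attributions sum (up to numerical error) to the difference between the model's output at the input and at the baseline, which is precisely why the choice of baseline matters so much.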
Saliency cards can help users avoid choosing "the wrong method" by distilling the operational details of a saliency method into ten user-centric attributes. These attributes encompass how saliency is calculated, the relationship between the saliency method and the model, and how users should interpret the outputs the method generates.
The saliency cards can also serve as a valuable resource for scientists by revealing areas where further research is needed. For instance, the researchers from MIT encountered a challenge in finding a saliency method that was both computationally efficient and applicable to any machine-learning model. This highlights a gap in the research space that warrants further exploration and development.
In the future, the researchers aim to delve into the less-explored attributes of saliency methods and potentially create task-specific saliency techniques. They also seek to enhance their understanding of how individuals perceive saliency method outputs, with the potential for developing improved visualisations. Furthermore, they have made their work accessible through a public repository, inviting feedback from others that will contribute to future advancements.
Boggust is optimistic, envisioning these saliency cards as dynamic documents that will evolve as new saliency methods and evaluations emerge. Ultimately, this marks just the beginning of a broader discussion regarding the attributes of saliency methods and their relevance to different tasks. Boggust believes that in the future, there will be other researchers who will further develop this discovery.
The Smart Nation and Digital Government Office (SNDGO) and a major cloud computing company have announced the launch of the Artificial Intelligence Government Cloud Cluster (AGCC), a comprehensive platform designed to accelerate AI adoption in Singapore’s public sector, advance local applied AI research efforts and support the growth of the local AI startup ecosystem.
The AGCC has been implemented by SNDGO and the cloud tech company for use by Singapore's government agencies and the research, innovation, and enterprise (RIE) ecosystem. The AGCC is hosted in Singapore in a specialised cloud computing environment.
Agencies can use the AGCC to build and deploy scalable and impactful AI applications rapidly, safely, ethically, and cost-effectively by leveraging an AI technology stack and a vast partner ecosystem of software-as-a-service firms, consultancies, and AI startups. AI technology stack capabilities include:
First, an AI-optimised infrastructure. High-performance A2 supercomputers powered by NVIDIA’s A100 GPUs and hosted in an open, scalable, secure, and energy-efficient infrastructure. This enables cloud developers to train computationally complex AI models at fast speeds while minimising costs and environmental impact.
Second, customisable first-party, third-party, and open-source AI models. A central repository enables AI practitioners to access pre-trained generative AI models, with built-in features to help users customise these models for specific requirements.
The repository contains a wide range of first-party, third-party, and open-source models designed for particular needs. These include models for summarising and translating text in different languages, sustaining an ongoing conversation, converting audio to text, producing and modifying software code, and generating and repairing written descriptions.
International AI businesses interested in making their foundation models available to Singapore government departments can collaborate with the Cloud computing company to store these models in the repository.
Third, no-code AI development tools. A Generative AI App Builder enables developers, especially those with limited technical expertise, to swiftly construct and seamlessly embed chatbots and enterprise search experiences driven by the cloud company's generative AI models.
Finally, explainable AI and data governance toolkits. This is a set of built-in technologies that help government agencies use AI securely and responsibly. It includes features for access control and content moderation, novel mechanisms for incorporating human feedback to improve model performance, and the ability to audit the sources of AI model outputs to detect and resolve potential bias and ensure that model behaviour complies with regulations.
The Government Technology Agency (GovTech) is Singapore's first public-sector organisation to use the AGCC. Its Open Government Products (OGP) team has integrated with Vertex AI and is investigating the use of its models in Pair, a suite of large language model-powered assistants that civil servants can use to boost productivity while maintaining the confidentiality of government information.
To help government agencies deploy AI applications as effectively and responsibly as possible, the Cloud tech company will collaborate with GovTech to design and run whole-of-government Digital Academy programmes that will assist agencies in developing in-house data science and AI expertise, developing AI innovation strategies, and implementing data governance best practices.
The programmes will be delivered in a variety of specialised formats to 150,000 public servants from 16 ministries and over 50 statutory boards.
Government agencies in Singapore will be able to use the AGCC and other authorised services through the Government on Commercial Cloud (GCC) 2.0 platform beginning in June 2023. The GCC platform, developed by GovTech, offers agencies a standardised and regulated means to implement commercial cloud solutions.
GCC 2.0, the platform's second generation, is integrated with cloud-native capabilities and cloud security practices, enabling agencies to tap into a larger ecosystem of services and people to accelerate the development of new digital applications.
State-run Osmania University in Hyderabad has embraced artificial intelligence and machine learning (AI/ML) to revolutionise its attendance marking process. Now, the tedious waiting to sign attendance registers or use biometric systems is no longer required. Employees simply need to enter the building, and their attendance will be automatically recorded. This is made possible through CCTV cameras equipped with AI and ML technologies, which accurately mark employees’ attendance, log-in and log-out times, as well as their entry and exit from the building.
The functionality of these cameras relies on an integrated facial recognition system. Leveraging cognitive AI capabilities, they identify facial biometrics and synchronise them with the current database to accurately document employee attendance. This innovative solution has been implemented as a pilot project in the main administrative building of the university for its employees. In the future, the university plans to expand the deployment of these cameras to other campus facilities, including offices, classrooms, and hostels.
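The matching step in a system like this can be sketched as follows. This is a hypothetical illustration of the general face-embedding approach, not Osmania University's actual implementation; the employee IDs, embedding values, and similarity threshold are all invented for the example:

```python
import numpy as np
from datetime import datetime

# Hypothetical enrolled-employee database: each employee maps to a face
# embedding produced at enrolment time (real systems use vectors with
# hundreds of dimensions; three suffice for illustration).
ENROLLED = {
    "emp_001": np.array([0.9, 0.1, 0.3]),
    "emp_002": np.array([0.2, 0.8, 0.5]),
}

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mark_attendance(face_embedding, log, threshold=0.9):
    """Compare a camera-produced embedding against every enrolled record;
    log a timestamp for the best match if it clears the threshold,
    otherwise return None so the face can be recorded as a visitor."""
    best_id, best_sim = None, 0.0
    for emp_id, ref in ENROLLED.items():
        sim = cosine_similarity(face_embedding, ref)
        if sim > best_sim:
            best_id, best_sim = emp_id, sim
    if best_id is not None and best_sim >= threshold:
        log.setdefault(best_id, []).append(datetime.now())
        return best_id
    return None  # unknown face: handled by the visitor log instead

log = {}
print(mark_attendance(np.array([0.88, 0.12, 0.31]), log))  # "emp_001"
```

Because the cameras log every match with a timestamp, both the log-in/log-out records and the visitor entries described above fall out of the same comparison loop.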
According to an official from the university, under the manual system, employees often mark their attendance in the morning, leave the office, and return later to record their log-out time. However, with the implementation of CCTV camera-based attendance, this process will undergo a significant change. The cameras continuously capture movement and simultaneously store employees’ log details, eliminating the need for manual recording. To achieve this, AI and ML technologies have been integrated into two cameras.
Additionally, the official noted that the CCTV cameras also capture the log-in information of visitors entering the administrative building. This means that the log details of all visitors are systematically recorded in the database, providing a comprehensive attendance tracking system.
The introduction of CCTV-based attendance eliminates the need for manual attendance registers or time-consuming biometric systems, streamlining the entire process. Students simply need to enter the designated areas covered by CCTV cameras, and their attendance is promptly recorded. This not only saves time but also reduces the chances of errors or misinterpretations.
Moreover, these AI-powered cameras not only capture the presence of students but also provide additional functionalities. Some universities have integrated facial recognition systems to ensure authentication and prevent proxy attendance. The cameras analyse facial biometrics, matching them against the existing database to ensure accurate identification. It can also enable the university to track attendance patterns, identify areas that require improvement, and take proactive measures to enhance student engagement and performance.
Most educational institutions across the country are embracing the advancements brought by AI. Numerous schools and colleges have incorporated AI-based learning techniques to simplify the process of education and effectively teach intricate subjects to students. Additionally, AI’s adaptable learning methods assist teachers in providing personalised attention to each student.
Last year, the Indian Institute of Technology Madras (IIT-Madras) and the Tamil Nadu State Department of School Education announced they would collaborate to improve and update the digital learning platform for school students to an assessment-focused Learning Management System. It was deployed in high-tech labs in 6,000 government schools, as OpenGov Asia reported. It aimed to improve the quality of learning for around nine million students.
Education in Tamil Nadu’s schools was previously supplemented through a digital learning platform called the Education Management Information System. Researchers from the Indian Institute of Technology Madras used their AI and data science expertise to come up with ways to improve the way assessments are conducted and develop a framework to disseminate educational material.
The University of Sydney recently entered into a memorandum of understanding (MoU) with the Australian subsidiary of a pharmaceutical company based in South Korea. The partnership aims to leverage the power of artificial intelligence in identifying potential compounds for accelerated development into treatments for cancers and rare diseases.
Under the MoU, the University’s Drug Discovery Initiative will gain access to the pharmaceutical company’s advanced AI drug development platform, known as Chemiverse. This collaboration will enable the University to harness the capabilities of AI in identifying promising compounds for drug development. Additionally, the company will benefit from collaborating with the University’s esteemed team of researchers and using their cutting-edge drug discovery infrastructure.
The Director of the Drug Discovery Initiative expressed enthusiasm about the collaboration with the company. He highlighted the complexity involved in developing drugs for treating diseases and emphasised the significance of working with Pharos and their advanced artificial intelligence platform, Chemiverse.
The use of Chemiverse in this partnership is expected to greatly enhance the University’s capacity to develop innovative treatments for unmet medical needs. Moreover, the synergies between the platform and the Drug Discovery Initiative will foster innovation and facilitate the establishment of new drug discovery pipelines.
The Drug Discovery Initiative, situated within the School of Chemistry, serves as an interdisciplinary academic network that aims to expedite the early-stage development of drugs by leveraging top-tier individuals, technologies, and tools.
The Pro-Vice-Chancellor (Research Enterprise) emphasised the University’s dedication to translating fundamental research into practical solutions. The partnership with the company is viewed as an opportunity to capitalise on the expertise housed within the Drug Discovery Initiative. Together, they strive to advance the development of potentially life-saving targets for cancer and rare diseases.
The co-CEO of the company's Australia branch expressed excitement about collaborating with the University and the Drug Discovery Initiative, welcoming the use of the University's state-of-the-art infrastructure to accelerate drug discovery efforts.
The firm’s Chemiverse platform is a versatile tool that can be employed across the entire spectrum of new drug development, encompassing target discovery to lead compound generation. This advanced platform incorporates a vast amount of big data, approximately 230 million data points, and uses advanced algorithms to facilitate the drug development process.
The company is actively engaged in ongoing research and development as well as commercialisation efforts using the Chemiverse platform. They are currently working on approximately 10 pipeline projects, which include the development of a treatment called “PHI-101” for acute myeloid leukaemia. Notably, PHI-101 is currently undergoing phase 1b clinical trials.
On the other hand, the Drug Discovery Initiative plays a prominent role in the development of new compounds and the identification of collaborative pipelines. They are highly active in their pursuit of advancing drug discovery and forging partnerships in this field.
In March, the NSW Government provided funding for the establishment of the NSW Organoid Innovation Centre. This state-of-the-art facility is a collaborative initiative involving multiple institutions. It focuses on using cutting-edge stem-cell techniques to expedite the process of drug discovery and design.
The pharmaceutical company, earlier this year, became a part of the Sydney Knowledge Hub, which serves as a startup incubator and coworking space located at the University of Sydney. This strategic collaboration aims to foster partnerships and facilitate seamless collaboration between industry and the research community in Sydney.
With each passing day, technology continues to evolve and flourish in our society. Its rapid advancement encompasses and profoundly impacts numerous aspects and industries, compelling professionals to adopt artificial intelligence-driven (AI-driven) solutions to augment their productivity and efficiency. This technology has inevitably penetrated the realm of education, enabling teachers to utilise generative AI in assessing pupils’ work.
In light of generative AI's subjectivity and limited capability in assessing complex academic content, the Ministry of Education in New Zealand recommends that teachers exercise caution when using the technology to mark pupils' work.
The ministry strives to uphold fair and accurate evaluation methods that align with educational standards and objectives, warning that without the ability to see inside the algorithm and understand the basis of its judgments, its use can lead to discrimination and unfairness.
Furthermore, the technology can be fallible in the absence of human intervention. Generative AI systems trained solely on internet data may lack exposure to the specific work produced by children and young people, resulting in a limited understanding of what is appropriate and expected from this demographic.
Victoria University Senior Lecturer in software engineering, Simon McCallum, said that he agreed teachers should be wary of using generative AI for marking pupils’ work. However, McCallum believed generative AI tools would eventually be very good for grading students’ work.
In the educational setting, generative AI should be employed purposefully and judiciously. Teachers can leverage its potential to teach students critical literacy skills, specifically empowering learners to question the accuracy of the information they encounter and to identify bias. With generative AI as a valuable resource, the educational experience becomes a dynamic and engaging journey of exploration and critical thinking.
Technological evolution brings both advantages and disadvantages, and humans cannot rely on it entirely, especially while the technology has yet to reach its full potential. Vaughan Couillault, president of the Secondary Principals Association, says, "There are many advantages to having machines automate certain tasks, but the quality is not yet at the desired level."
As advancements in generative AI persist, its reach extends beyond its initial domains, finding applications in diverse fields and sectors. Several countries are now witnessing firsthand the transformative impact of generative AI on traditional business, even government policy models.
In New Zealand alone, there is a strong emphasis on promoting technology integration in education, with initiatives designed to support teachers adapting to technological advancements. One such initiative is the tech programme for teachers, which aims to equip educators with the necessary knowledge and skills to incorporate technology into their teaching practices effectively.
These initiatives aim to empower teachers to pass their newly acquired knowledge on to students, especially those who are digitally inclined. By doing so, the programmes foster a culture of technological fluency and inspire the next generation to embrace the digital world.
The curriculum emphasises the significance of fostering critical literacy, including digital literacy. Teachers can leverage generative AI by creating texts and incorporating them into lessons to develop students’ critical literacy abilities. Additionally, teachers can utilise a series of texts to enhance students’ understanding of the effective use of prompts.
Given the recent rapid development of artificial intelligence (AI), the Ministry of Digital Affairs (moda) announced that it has become an official partner of an international non-governmental organisation to ensure the alignment of AI applications with the interests of the public and to develop the necessary application services for society.
As a member of the “Alignment Assemblies” project, moda’s global and public objectives are to assist Taiwan in building public consensus regarding the needs and risks of AI and to collectively address the “Alignment Problem” of AI.
Beginning in July of this year, the moda intends to influence the direction of AI development through Ideathons, employing a citizen participation and deliberation model and using Taiwan as a test bed.
The moda emphasised that the international non-governmental organisation champions technology that supports social development, industrial advancement, and public confidence, and believes the development of AI should prioritise ethics and the public interest.
During the Democratic Summit in March of this year, the moda, led by Minister Audrey Tang, launched this initiative to create a global consensus among people and ensure the alignment of AI with human values. By participating in this initiative, the moda hopes to promote digital democracy and global partnerships while fostering a diverse and inclusive digital culture for the development of AI in Taiwan.
The moda announced that it will initiate the “Democratising AI Futures” dialogue through Ideathons and invite public participation in the third quarter of this year as well as organise seminars to discuss how to respond to the development of generative AI.
Minister Tang explained that AI has brought about profound social changes and that issues such as algorithms, intellectual property, technological ethics, public services, and social impact have garnered significant attention, posing new challenges to democratic governance.
In response to the social concerns raised by the trend of generative AI, moda is actively drafting the “Basic Law on Artificial Intelligence.” The moda also expects that policymakers and technology developers will have access to vital information to ensure that the development of AI serves the public interest.
In the fast-changing technological world, fostering consensus on the requirements and hazards associated with AI is essential. As AI continues to evolve and permeate numerous elements, it is critical to ensure that its development and deployment are in line with the interests and values of society.
Building consensus allows for the identification and prioritisation of ethical considerations in AI development. It enables stakeholders to address possible issues such as bias, privacy concerns, and job displacement cooperatively, as well as cooperate towards developing AI systems that adhere to ethical norms and protect human rights.
Also, achieving consensus on AI enables policymakers to make educated decisions when developing legislation and policies. Policymakers may establish comprehensive frameworks that balance innovation, social demands, and possible risks connected with AI technology by considering the different perspectives and concerns of the public.
Building consensus aids in the establishment of public trust and acceptance of AI systems. When the public participates in AI debates and decision-making processes, people feel more empowered and are more inclined to trust and adopt AI applications that are consistent with their values and meet their requirements.
Consensus-building aids in resolving biases and guaranteeing fairness in AI algorithms and systems. Potential biases can be recognised and minimised by integrating a diverse variety of stakeholders, including marginalised populations and underrepresented groups, resulting in more equal opportunities in AI systems.