
Researchers from the National University of Singapore (NUS) have created the HaptGlove, a lightweight, untethered haptic glove for virtual environments. It aims to provide a more realistic and authentic sense of touch and movement when interacting with virtual objects, enhancing the overall immersive experience in virtual reality (VR).

While the concept of haptic gloves is not new, current technologies have limitations in providing a realistic sense of touch, according to a press statement. The vibration motors in typical haptic gloves cannot replicate real-world touch sensations such as the hardness and shape of virtual objects, while other haptic gloves utilise pneumatic actuators that generate pressure but are bulky and restrict user movement.

The team’s research leader, Lim Chwee Teck, explained that virtual reality should not only be a visual and auditory experience but also enable interactions with virtual objects. However, current methods of interacting with virtual objects, such as pressing on a virtual panel or interacting with avatars, lack the sensation of touch found in the real world. This prompted the team to develop the haptic glove, which aims to provide the sensation of a “physical” touch in the virtual world.

HaptGlove is a portable and highly flexible haptic glove that enables users to touch and feel VR objects with unparalleled realism. It incorporates lightweight pneumatic control and the team’s latest microfluidic sensing technology, which significantly reduce its size and weight and eliminate the need for bulky accessories.

It enables users to interact with the virtual world in a more natural and realistic way, providing an unobtrusive and immersive experience in virtual reality. It features five pairs of haptic feedback modules, one for each finger, which are controlled wirelessly to sense the virtual object in terms of shape, size, and stiffness.

When using HaptGlove, users can sense contact as their avatar’s hand touches, grasps, and manipulates virtual objects by using a microfluidic pneumatic indenter to deliver real-time pressure to the user’s fingertips. The glove can also simulate the shape and stiffness of the object the avatar is touching, by restricting finger positions, adding realism to the virtual interaction experience.

HaptGlove uses proprietary software developed by the NUS research team to achieve a visual-haptic delay of less than 20 milliseconds. This is faster than conventional haptic gloves and provides a near-real-time user experience. The latest prototype is also more comfortable to wear, weighing only 250 grams, much lighter than commercially available haptic gloves that weigh over 450 grams.

The HaptGlove project was initiated by Lim and his team in 2019 and it took two years to develop a prototype. To evaluate the device’s performance, a group of 20 users was recruited to wear the glove to sort four virtual balls of varying stiffness in the virtual world. Apart from achieving over 90% accuracy in completing the tasks, the users said that HaptGlove significantly enhanced realism in VR and improved their overall experience, compared to devices using vibration motors.

Besides gaming, the HaptGlove could be used in applications in the fields of medicine and education, such as assisting surgeons to better prepare for an operation by simulating a hyper-realistic environment or giving students a hands-on learning experience by simulating palpation on different body parts.

A pop-up clinic that uses AI technology to identify potential skin cancer has been established at the Tour Down Under event in Victor Harbor, the first of its kind in the world. The free, nurse-led service uses algorithms in conjunction with doctors’ clinical expertise to detect skin cancer, which affects two out of every three Australians during their lifetime.

A collaboration between a national health charity, the University of South Australia, and The Hospital Research Foundation led to the creation of a new nurse-led model for skin cancer detection. Using AI technology, this model is being tested through pop-up clinics in regional South Australia, where skin cancer rates are significantly higher than in urban areas. This initiative aims to improve the rate of skin checks and early detection in these areas.

UniSA’s Professor in Cancer Nursing, Marion Eckert, says distance is a big disadvantage when it comes to skin screening services. She noted that skin cancer prevention efforts are lacking in funding and resources, particularly in rural areas, despite melanoma being a prevalent and deadly form of cancer in Australia, with four deaths daily. The situation is particularly dire outside of major cities.

Meanwhile, the CEO of the national health charity stated that the AI technology used in this clinic has been found to be on par with dermatologists and, in some cases, to outperform them. However, further research and controlled trials are needed to fully confirm the efficacy of the algorithms.

He said that the charity’s goal is to halve the number of Australians who die from melanoma and increase the number of skin checks in Australia by 25% by running a targeted AI-supported national skin check program.

The project trains nurses to capture high-quality images of skin lesions, which are then assessed by AI algorithms for signs of cancer. The AI-generated diagnoses are reviewed by local GPs, and patients are referred to dermatologists for further evaluation if necessary. This approach allows for early diagnosis and treatment, especially in areas where access to dermatologists is limited.
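The workflow described above can be sketched as a simple triage function. This is purely an illustrative assumption about how such a pipeline might route patients; the function name, risk score, and threshold are hypothetical, not details of the clinic’s actual software:

```python
# Hypothetical sketch of the nurse-led AI triage flow: an AI risk
# score on the lesion image plus a GP review decide the next step.
# The threshold and scores are illustrative assumptions.

def triage(lesion_risk_score: float, gp_confirms: bool,
           refer_threshold: float = 0.5) -> str:
    """Route a patient based on an AI risk score and a GP review.

    lesion_risk_score: model-estimated probability (0.0 to 1.0)
                       that the imaged lesion is malignant.
    gp_confirms:       whether the reviewing GP agrees the case
                       warrants specialist follow-up.
    """
    if lesion_risk_score >= refer_threshold and gp_confirms:
        return "refer to dermatologist"
    if lesion_risk_score >= refer_threshold:
        return "GP follow-up"  # AI flagged the lesion; GP to re-review
    return "routine monitoring"

print(triage(0.8, gp_confirms=True))  # refer to dermatologist
```

The key design point the article highlights is that the AI never decides alone: a flagged case still passes through a local GP before a dermatologist referral is made.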

Residents living in regional areas will be able to access the service via nurse-led free pop-up clinics at local community events such as the Tour Down Under.

The Australian Institute of Health and Welfare states that skin cancer costs the healthcare system AU$400 million each year. Additionally, two out of three Australians are likely to develop some form of skin cancer, with over 15,000 individuals (one every 30 minutes) diagnosed with melanoma, the most dangerous type of skin cancer. Over 98 per cent of skin cancers can be successfully treated if they are found early, which is why getting checked is so important.

The inaugural AI-powered skin cancer clinic was stationed at Warland Reserve from 19 to 21 January in collaboration with the Tour Down Under and Victor Harbor Art Show.

The global market for AI in cancer diagnostics was valued at US$93.2 million in 2021, and it is expected to experience a compound annual growth rate (CAGR) of 28% between 2022 and 2030.
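These two figures imply a projected 2030 market size via the standard compound annual growth rate formula, value_end = value_start × (1 + r)^n, with n = 9 compounding years from 2021 to 2030 (the projection below is arithmetic from the article’s numbers, not a figure the report itself states):

```python
# Projected 2030 market value from the reported 2021 base and 28% CAGR,
# compounded over the nine years from 2021 to 2030.
base_2021 = 93.2          # US$ million (2021 valuation)
cagr = 0.28               # 28% compound annual growth rate
years = 2030 - 2021       # 9 compounding periods

projected_2030 = base_2021 * (1 + cagr) ** years
print(f"US${projected_2030:.0f} million")  # US$860 million
```

In other words, at the stated growth rate, the market would reach roughly US$860 million by 2030.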

The use of Artificial Intelligence (AI) in cancer diagnostics is expected to drive market growth as it enables early detection of cancer. According to the World Health Organization, cancer was responsible for 10 million deaths in 2020. Many of these deaths could have been prevented with early diagnosis.

The ability of AI to assist in the screening and diagnosis of cancer is expected to increase early detection rates, which in turn is expected to drive the growth of the market for AI in cancer diagnostics.

MIT researchers developed a training curriculum for members of the United States Air and Space Forces to deepen their understanding and use of artificial intelligence (AI) technologies. The project, recently presented at the IEEE Frontiers in Education Conference, aims to improve AI learning outcomes for individuals with distinct educational qualifications.

MIT scientists supervised a research study to examine the content, evaluate individual learners’ learning outcomes during the 18-month trial, and propose innovations and insights that would allow the programme to scale up.

Most educational programmes were delivered online, either synchronously or asynchronously. According to the findings of the peer-reviewed study, programme researchers discovered that this technique was effective and well-received by employees from various backgrounds and professional roles.

Hands-on learning was well received by military troops, who also valued asynchronous, time-efficient learning experiences that fit into their hectic schedules. They also preferred a team-based, learning-by-doing experience, although they desired material that incorporated more professional and soft skills.

“We are digging further into broadening what we think the learning potential is, guided by our questionnaire but also from understanding the process of learning about this kind of scale and complexity of the work. But ultimately, we want to provide genuine translational value to the Air Force and the Department of Defence. This study has a real-world impact for them,” said Principal Investigator, Cynthia Breazeal, the Dean of Digital Learning at MIT, the Director of MIT RAISE (Responsible AI for Social Empowerment and Education) and the Head of the Media Lab’s Personal Robots research group.

The researchers recorded the educational backgrounds and occupational responsibilities of six groups of Air Force members. From this set of profiles, the team developed three archetypes that were used to produce “learning journeys”, series of training programmes aimed at teaching each profile a set of AI skills. The three general archetypes are leaders, developers, and users. Depending on the users’ needs, each learning journey contains different content and styles.

Following the session, researchers conducted interviews and questionnaires to assess the course material, including the topics, pedagogies, and technologies used. Learners and programme staff provided input on the course, in which 230 Air and Space Forces personnel participated. They also worked with MIT academics to conduct a content gap analysis and recommend ways to improve the curriculum to address the needed skills, knowledge, and mindsets.

The researchers discovered that students desired more opportunities to interact with their peers through team-based activities. They are also more likely to communicate with teachers and AI specialists via online courses’ synchronous components. While most soldiers considered the content fascinating, they wished to see more instances directly pertinent to their day-to-day job and the Air and Space Forces’ broader mission.

Based on these findings, the team is improving the educational content and adding new technical features to the portal for the following research iteration. In addition, they are developing knowledge checks to be included in self-paced, asynchronous courses to assist learners in engaging with the content.

They are also adding new tools to support live Q&A events with AI experts and help build more community among learners. The team is also looking to add specific Department of Defence examples throughout the educational modules and include a scenario-based workshop.

The Department of the Air Force–MIT Artificial Intelligence Accelerator funded the project. The study is still underway and will extend through 2023. As it progresses, the programme team is focusing on enabling the programme to reach a larger scale.

Major John Radovan, Deputy Director of the DAF-MIT AI Accelerator, expects the training to upskill a workforce of 680,000 across diverse work roles and all echelons, because as the largest employer in the world, the US Department of Defence needs to make sure its employees are all speaking the same language.

Stefanie Jegelka, an associate professor in MIT’s Department of Electrical Engineering and Computer Science, is investigating the prospect of unravelling the “black box” of deep learning. The nomenclature arose because deep-learning models are so complex that even the scientists who built them do not fully understand what happens under the hood.

Researchers still do not fully understand what happens inside a deep-learning model, or how it might impact what a model learns and how it behaves. But Jegelka is excited to continue researching these questions, since she is not content with the “black box” label.

“With machine learning, you can achieve much, but only if you have the correct model and data. So building an understanding relevant to practice will help us design better models and help us understand what is going on inside them so we know when we can deploy a model and when we can’t. It is not a black-box device that you throw at data, and it works,” said Jegelka, who is also a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Institute for Data, Systems, and Society (IDSS).

Deep learning models frequently outperform people at real-world tasks, such as detecting financial crime from credit card activity or identifying cancer in medical images. But what are these deep learning models actually learning?

These powerful machine-learning models are often built on artificial neural networks with millions of nodes that process data to make predictions. Jegelka delved into deep learning to understand what these models can learn, how they behave, and how to incorporate specific prior knowledge into them.

Building an understanding that is relevant in deep learning practice, according to Jegelka, will help researchers develop a better model and comprehend what is happening. The more she studied machine learning, the more she was drawn to the challenges of understanding how models behave and how to manipulate this behaviour.

Graph models

Jegelka is specifically interested in optimising machine-learning models that take graph data as input. Graph data presents unique issues because it contains not only information about individual nodes and edges but also the structure: what is connected to what. Furthermore, graphs feature mathematical symmetries that must be honoured by the machine-learning model such that, for example, the same graph always yields the same prediction.

However, incorporating such symmetries into a machine-learning model is typically tricky. Take, for example, molecules. Molecules can be represented as graphs, with vertices representing atoms and edges representing chemical bonds between them. As a result, pharmaceutical companies may use deep learning to rapidly anticipate the properties of numerous compounds, reducing the number of molecules that must be physically tested in the lab.

Jegelka investigates approaches for developing mathematical machine-learning models that can effectively take graph data as input and produce something else, in this case, a prediction of the chemical properties of a molecule. This is especially difficult because the qualities of a molecule are influenced not only by the atoms within it but also by the connections between them. Traffic routing, chip design, and recommender systems are some applications of machine learning on graphs.
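The symmetry at stake here is permutation invariance: relabelling the atoms of a molecule must not change the predicted property. A minimal sketch of why order-independent aggregation delivers this follows. The toy “model” below, a sum over node features fed into a fixed linear readout, is an illustrative assumption and ignores edges entirely; a real graph neural network would also aggregate along chemical bonds:

```python
# Toy permutation-invariant graph readout: node features are pooled
# with an order-independent sum, so relabelling the nodes leaves the
# prediction unchanged. Feature values and weights are made up.

def predict(node_features):
    """Map a set of per-node feature vectors to a scalar 'property'."""
    pooled = [sum(col) for col in zip(*node_features)]  # sum-pool over nodes
    # Fixed linear readout on the pooled vector (hypothetical weights).
    return sum(w * x for w, x in zip([0.5, -1.0, 2.0], pooled))

molecule = [[1.0, 0.0, 2.0],   # atom A
            [0.0, 1.0, 1.0],   # atom B
            [3.0, 1.0, 0.0]]   # atom C

relabelled = [molecule[2], molecule[0], molecule[1]]  # same graph, new order
print(predict(molecule) == predict(relabelled))  # True
```

Had the readout depended on node order (say, by concatenating features), the two calls would generally disagree, which is exactly the failure mode such models must avoid.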

Deep learning consistency

What motivates Jegelka is her interest in the principles of machine learning, particularly the issue of robustness. Frequently, a model performs well on training data but degrades when deployed on slightly different data.

For instance, the model may have been trained on small molecular graphs or traffic networks, but the graphs it encounters once deployed are much larger or more complex. Building prior knowledge into a model can increase its reliability, but recognising what information the model requires, and how to incorporate it, is more complicated.

She approaches this challenge by fusing her interest in algorithms and discrete mathematics with her enthusiasm for machine learning. Owing to fundamental hardness results in computer science, she believes no model will be able to learn everything; however, how the model is built determines what it can and cannot learn.

The United States has discussed and proposed deeper cooperation with India in areas like artificial intelligence (AI), cybersecurity, quantum, semiconductor, clean energy, advanced wireless, biotechnology, geosciences, astrophysics, and defence.

According to a press release, a high-level delegation of the premier National Science Foundation (NSF) met with the Indian Union Minister of Science and Technology, Jitendra Singh. During the delegation talks, Singh said that both sides have already identified the sectors and collaboration is underway in areas like healthcare, emerging technologies, space, and earth and ocean science. He stated that India and the United States have a long-standing connection and shared interest when it comes to scientific discovery and technological innovations.

The Head of the US delegation, Sethuraman Panchanathan, affirmed that the US would open new avenues of cooperation in areas like critical minerals, smart agriculture, bioeconomy and 6G technologies. He told the Indian Minister that more joint calls will be made in March regarding several select projects.

Singh suggested that the countries should explore avenues for the US and India to jointly identify, nurture, and promote deep-tech start-ups in areas of mutual interest. He also sought the support of NSF for the proposed Integrated Data System. At present, data collection is carried out by various institutions in different ways, but the Integrated Data System will go a long way in data analytics and associated benefits. The Minister said that the knowledge partnership with NSF-National Centre for Science and Engineering Statistics will add value in terms of long-term capacity development in this area.

The two sides will also scale up cooperation in the space sector and emerging areas like the management of space debris. The NASA-ISRO Synthetic Aperture Radar satellite is expected to be launched in 2023. Science and technology education partnership has been another dimension of the outreach, establishing linkages between American and Indian institutions and students.

Singh claimed that these are the best of times for both India and America to forge a durable and strong bond for global leadership in fighting global challenges. “There is a clear sign of willingness and optimism to achieve the desired goals,” the release quoted the Minister as saying. Noting that the two countries are the world’s largest and oldest democracies, Singh hoped the US would come to the aid of its natural ally when it comes to technology transfer in critical areas.

Last year, the two countries launched a Defence Artificial Intelligence Dialogue, under which they engaged in talks on AI in defence, space cooperation, and public health. As OpenGov Asia reported, officials from both sides forged new and deeper cooperation across the breadth of the US-India partnership, including defence, science and technology, trade, climate, and people-to-people ties. The leaders lauded the US-India Defence Technology and Trade Initiative (DTTI)’s ongoing projects, including a project agreement to co-develop air-launched UAVs. To foster robust private industry collaboration, the countries also discussed several DTTI projects, such as counter-unmanned aerial systems (UAS) and an intelligence, surveillance, target acquisition, and reconnaissance (ISTAR) platform.

They also signed a Space Situational Awareness arrangement, which laid the groundwork for more advanced cooperation in space. Bound by common strategic interests and an abiding commitment to the rules-based international order, they agreed to continue charting an ambitious course in the US-India partnership.

Jaime FlorCruz, the Philippine Ambassador to China, has indicated that some Chinese businesses are interested in bringing their knowledge and technology in artificial intelligence (AI) and cloud computing to the country. He said that these corporations are eager to teach and upskill young Filipinos and provide them with the option of working with them or for other companies.

“This is merely the beginning; it does not represent the end of our trade or commercial relations with China. We can investigate and pursue further prospective projects,” FlorCruz remarked during President Ferdinand R. Marcos Jr.’s three-day state visit to China.

President Marcos also stated how the collaboration might benefit the Philippines. However, he emphasised the need for technology transfer and for minimising profit repatriation.

“That’s what we’re looking for right now. To ensure that these investments not only create jobs for the local economy, but that there is also a transfer of technology so that profit repatriation is minimised, and value-added is retained in the Philippines,” he said.

President Marcos secured US$22.8 billion in investment pledges from Chinese investors. Thousands of jobs are expected to be created through the investments and technology transfer to the country. He assured that the investments will generate many jobs once operations begin, and Chinese firms will also begin training and capacity building for future employees.

The pledged investment of US$22.8 billion includes US$1.72 billion for agribusiness, US$13.76 billion for renewable energy, and US$7.32 billion for strategic monitoring of areas such as electric vehicles and mineral processing.

As per President Marcos, some of these investments have already begun construction and have started to open their offices. With relatively new fields such as mineral processing and battery and electric vehicle manufacturing in the mix, Marcos stated that the government must continue demonstrating to potential investors that the Philippines is a good investment option.

At the same time, the Philippines signed a memorandum of understanding (MoU) on electronic commerce (e-commerce) during the state visit to promote trade relations. The two countries agreed to increase trade of high-quality featured products and services; explore business interchange between MSMEs and e-commerce platforms, start-ups, and logistics service providers; and share best practices and innovative experiences in utilising e-commerce.

The agreement will make it easier for the Philippines and China to share experiences, best practices, critical information, and trade and e-commerce policies. In addition, it established the Manila-Beijing Working Group on Electronic Commerce as a focal point of coordination for the two parties.

The countries want to launch activities to promote consumer and merchant protection, intellectual property protection, data security, and privacy rules. This MoU will help to increase the ability of local businesses in the Philippines to compete in the modernising business sector.

Furthermore, the Philippine House of Representatives adopted the final reading of the proposed Internet Transactions Act, which seeks to establish an electronic commerce (e-commerce) bureau to regulate all business-to-business and business-to-consumer commercial transactions conducted via the internet. The law and bureau will regulate internet retail, online booking services, digital media providers, ride-hailing services, and internet banking services.

The Internet Transactions Act applies to all entities, both domestic and foreign, that purposefully and voluntarily direct their activities to the Philippine market; such entities are deemed to be doing business in the Philippines and subject to applicable Philippine laws. The proposed regulation also ensures parity and fair competition between online merchants and physical-shop sellers of products and services.

The e-commerce bureau’s responsibility is to keep consumers and merchants who conduct internet transactions safe. The bureau will oversee the development, monitoring, and maintenance of an online business registry. In addition, it will spearhead the development of online dispute resolution tools that will serve as a central point of responsibility for consumers and online sellers seeking out-of-court dispute resolution.

In a bid to enhance the efficiency of production, promote administrative reform, and improve the investment environment, sectors and units in Ho Chi Minh City have increased the use of artificial intelligence in various applications.

Since August last year, the city has piloted AI technology in supervising and handling complaints and suggestions of the people in real-time, reports have said. According to the director of Police of Thu Duc city, his office is using 20 smart cameras with AI integration, which helped discover six law-breaking cases.

Further, the Director of the Municipal Department of Information and Communications, Lam Dinh Thang, said his agency will coordinate with relevant parties to apply AI technology to support the planning work and deal with overlaps in business inspection activities. AI technology can help officials compare and review documents and look up legal documents more easily.

The city will also pilot AI technology for the urban railway supervision system, predicting passengers’ demand for using the metro, setting up traffic forecasting models and analysing traffic behaviour, and forecasting the possibility of disease transmission and epidemiological factors on GIS data sources.

The Vice Chairman of the municipal People’s Committee, Duong Anh Duc, noted that the city will focus on training human resources, investing more in infrastructure development and building coordination mechanisms between state management agencies with researchers and businesses, aiming to further promote AI research and development. More financial sources will be approved to foster AI applications in the fields that the city needs.

Earlier, OpenGov Asia reported that the use of artificial intelligence in Vietnam has significantly increased. Businesses and the government have deployed the technology in several applications to enhance governance and ease of living. For instance, citizens can use an app that turns voice into text, an app that supervises drivers, a camera that recognises people under protective masks and a multi-cognitive AI platform.

As per an industry report, AI will contribute an additional US$13 trillion to the global economy by 2030, boosting global GDP by about 1.2% a year. In Vietnam, the AI community has significant potential. The government has set policies to promote the development of AI applications, including the national strategy for researching, developing, and applying AI by 2030. Under the strategy, the country will step up AI R&D and transform the sector into an important technology field in Vietnam in the Fourth Industrial Revolution. The research and deployment will contribute to socio-economic development.

Vietnam ranks 26th in the world in AI research capacity, according to an international ranking. There is not a big gap between AI research in Vietnam and in European countries such as Poland and Spain. Moreover, Vietnam ranked 6th out of 10 ASEAN member countries and 55th globally in the 2022 Government AI Readiness Index, up seven places compared to 2021. The country’s average score reached 53.96 (up from 51.82 in 2021), surpassing the global average of 44.61.

The Government AI Readiness Index assesses the AI readiness of governments from 181 countries in harnessing AI applications to operate and deliver their services. The index is used as a tool to compare the current state of government AI readiness in countries and regions across the globe to learn from useful experiences toward further development.

Researchers in the United States are promoting better understanding across cultures using artificial intelligence (AI) in a virtual reality (VR) game that addresses prejudice.

In the game, players are pushed to put themselves in another person’s shoes to understand their point of view. Recognising one’s flaws and biases is critical to creating understanding across cultures.

The researchers use the programme to challenge participants’ prejudices, such as racism and xenophobia, and potentially to produce a more inclusive approach to others.

“On the Plane” is a VR RPG (role-playing game) that allows players to take on new roles in the first person that may be outside of their personal experiences, allowing them to combat in-group/out-group bias by adding new perspectives into their understanding of diverse cultures.

Players can take on the role of persons from diverse backgrounds while flying on an aeroplane, interacting in dialogue with others and responding to a series of prompts in-game. In turn, the outcomes of a tense conversation between the characters regarding cultural differences are controlled by the player’s choices. In one scenario, the game depicts xenophobia directed at a Malaysian American woman, but the technique can be extended.

Players can interact with one of three characters: Sarah, a first-generation Muslim American of Malaysian descent who wears a headscarf; Marianne, a white woman from the Midwest with minimal exposure to other cultures and customs; or a flight attendant. Sarah symbolises the out-group, Marianne is an in-group member, and the flight attendant is a bystander seeing an exchange between the two passengers.

“This project is part of our attempts to use virtual reality and artificial intelligence to solve social problems like bigotry and xenophobia,” Caglar Yildirim, an MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) research scientist and co-game designer on the project explained.

The interaction between the two passengers gives players first-hand knowledge of how one passenger’s xenophobia presents itself and how it impacts the other passenger. The simulation encourages critical thinking and fosters empathy for the traveller who is “othered” because her dress is not “typical” of how an American is assumed to look.

D. Fox Harrell, MIT Professor of Digital Media and AI at CSAIL, the Programme in Comparative Media Studies/Writing (CMS), and the Institute for Data, Systems, and Society (IDSS), and Founding Director of the MIT Centre for Advanced Virtuality, collaborated with Yildirim on the project.

“While a simulation cannot offer someone another person’s actual experiences, a system like this can help individuals observe and understand patterns at work when it comes to issues like bias,” opined Harrell, a Co-author and Designer on this project.

The engaging, immersive, interactive story is designed to influence players emotionally, allowing their perspectives to be modified and widened. The simulation also uses an interactive narrative engine, which generates a variety of responses to in-game encounters based on a model of how people are classified socially.

The tool allows participants to change their position in the simulation through how they respond to each prompt. Their decisions, in turn, shape their feelings towards the other two characters.

“On the Plane” uses artificial intelligence knowledge representation techniques managed by probabilistic finite state machines, a tool often used in machine learning systems for pattern recognition, to animate each avatar. The AI can customise the characters’ body language and gestures.
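A probabilistic finite state machine of the kind mentioned here can be sketched in a few lines: each state (say, an avatar animation) carries a probability distribution over successor states, and the next state is sampled from it. The state names and probabilities below are illustrative assumptions, not the game’s actual model:

```python
import random

# Illustrative gesture PFSM: from each current state, the avatar's
# next animation state is drawn from a probability distribution.
# States and transition weights are hypothetical.
TRANSITIONS = {
    "neutral": {"neutral": 0.6, "gesture": 0.3, "lean_in": 0.1},
    "gesture": {"neutral": 0.5, "gesture": 0.2, "lean_in": 0.3},
    "lean_in": {"neutral": 0.7, "gesture": 0.2, "lean_in": 0.1},
}

def step(state, rng=random):
    """Sample the avatar's next animation state from the PFSM."""
    successors = list(TRANSITIONS[state])
    weights = [TRANSITIONS[state][s] for s in successors]
    return rng.choices(successors, weights=weights, k=1)[0]

# Each row of the transition table must be a proper distribution.
assert all(abs(sum(row.values()) - 1.0) < 1e-9
           for row in TRANSITIONS.values())

state = "neutral"
for _ in range(5):
    state = step(state)
print(state in TRANSITIONS)  # True: the chain stays within the state set
```

In a game setting, the character’s dialogue state would drive which transition table is active, so body language tracks the emotional tone of the conversation while still varying from playthrough to playthrough.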

Harrell and Co-author Sercan Sengün called for virtual system designers to be more inclusive of Middle Eastern identities and cultures in a 2018 study based on work done in partnership between MIT CSAIL and the Qatar Computing Research Institute. They argued that allowing users to personalise virtual avatars that indicate their background will enable players to engage in a more supportive experience. “On the Plane” four years later achieves a similar goal, bringing a Muslim’s perspective into an immersive experience.