Bringing together education, research and
industry, the city of Grenoble is today one of the most important research,
technology, and innovation centres in France, as well as in Europe.
The city in south-eastern France, at the
foot of the French Alps, is home to multinational enterprises such as
STMicroelectronics, Schneider Electric, Caterpillar, Hewlett Packard and many
more. In fact, five out of the top 10 employers in Grenoble are foreign-owned
private companies. It has a thriving startup ecosystem and several
international research centres and labs. The city hosts 7000 doctoral students,
half of whom are from abroad.
On the sidelines of IoT Asia 2018, OpenGov
had the opportunity to speak to Dr Claus Habfast, Municipal Councillor of
Grenoble and Vice President of the Greater Grenoble City Area, the local
government structure responsible for smart city projects in energy, transport,
economic development, waste collection, water & sanitation.
The carbon footprint
Dr Habfast explained that in the mountains, climate
change is twice as strong as in the plains, and this directly affects Grenoble. The city
is affected by the melting of the glaciers, by the complete disturbance of the
water cycle in the mountains, and by harsher weather conditions that change
the way people can live there.
“So we are feeling the responsibility as a
city in the mountains to participate in the global effort to reduce the carbon
footprint,” Dr Habfast said.
The carbon sources are linked to urban
transport and to heating. So, the city is attempting to make people use mass
transport, encouraging electric vehicles and urging people to cycle or walk.
As an example, Dr Habfast explained that
the bicycle is the most environmentally friendly mode of urban transport, and Grenoble
has now reached a 15% share of bicycle use for home-to-work journeys, which puts it in second place in France.
Grenoble has a bike rental system where people
can rent bikes for a very modest fee. This system is completely integrated into
the mass transport infrastructure. For people who don’t want to use a bike, the
city has developed mass transport with a system of trams and buses. As a next
step, the city is working on using the empty seats in cars.
Dr Habfast said, “Because for each driver,
you normally have three empty seats in the car, which uses energy and causes pollution.”
With regard to urban heating, the city
administration is putting in place new ways of providing energy for heating and
general use, in order to reduce energy consumption.
Grenoble has one of the densest networks of
urban heating in France and it is being converted from oil to wood. The city is
also generating energy from waste.
In parallel, there is a large programme
wherein the city government helps building owners to install thermal insulation
in their buildings and, in some cases, even subsidises the work.
Data requirements for the next steps
Till now, the steps taken by the city
administration to reduce the carbon footprint have not involved significant use
of big data.
But now that the first steps have been
taken, data is required to go further. For example, data will be required to
develop smart grids and use wind and photovoltaic power. Because these renewable
sources of energy are intermittent, and the electricity is not always generated
at the moment when it is required, flexible smart grids with communication between the utility and
its customers, sensing along the transmission lines and load adjustment, play
an essential role in successful integration of renewable energy systems.
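The load-adjustment idea can be sketched in a few lines. This is a toy illustration only, with made-up generation and demand figures, not a description of Grenoble's actual grid: a flexible load (say, overnight EV charging) is scheduled into the hours where renewable generation exceeds inflexible demand.

```python
# Toy illustration (hypothetical numbers): shifting a flexible load to the
# hours where intermittent renewable generation exceeds baseline demand.

renewable = [0.2, 0.1, 3.5, 4.0, 3.8, 0.5]   # generation per hour (MWh)
demand    = [1.0, 1.2, 1.5, 1.4, 1.3, 1.1]   # inflexible demand per hour (MWh)
flexible_load = 2.0                           # e.g. EV charging that can wait (MWh)

# Surplus renewable energy available in each hour.
surplus = [g - d for g, d in zip(renewable, demand)]

# Greedily place the flexible load into the hours with the largest surplus.
schedule = [0.0] * len(surplus)
remaining = flexible_load
for hour in sorted(range(len(surplus)), key=lambda h: surplus[h], reverse=True):
    take = min(remaining, max(surplus[hour], 0.0))
    schedule[hour] = take
    remaining -= take
    if remaining <= 0:
        break

print(schedule)  # the flexible load lands in the highest-generation hour(s)
```

A real smart grid replaces the hard-coded lists with sensor data streamed from the transmission lines and meters the paragraph above describes.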
Data will also be required for the
carpooling initiative mentioned earlier which aims to reduce unused seats in
cars, because there is a need to know and understand the movement of cars and
match demand and supply.
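Matching riders to empty seats is, at its core, a supply-and-demand assignment problem. A minimal sketch, with invented drivers and routes, conveys the idea:

```python
# Hypothetical sketch: matching riders to drivers' empty seats on the same route.
drivers = [
    {"name": "A", "route": "campus", "seats": 3},
    {"name": "B", "route": "station", "seats": 2},
]
riders = ["campus", "campus", "station", "campus", "station", "station"]

matches = []
for route in riders:
    # Assign each rider to the first driver on their route with a free seat.
    for d in drivers:
        if d["route"] == route and d["seats"] > 0:
            d["seats"] -= 1
            matches.append((route, d["name"]))
            break

print(len(matches), "riders placed")  # here: 5 of 6 riders matched
```

A production system would add real-time location data, time windows, and detour limits, which is exactly why the city needs data on the movement of cars.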
“For all this we need data. Making data
available for these public services is at the core of our current initiatives,”
Dr Habfast said.
The need to ensure security and privacy of citizen data
We asked Dr Habfast about the imperative to
protect citizen data in the context of increasing volumes of personal data
being collected and the European Union’s General Data Protection Regulation (GDPR), which
will be applicable from May 2018 onwards.
He pointed out a contradiction in
people’s attitude to sharing data. Most people do not hesitate to have a smart
device, such as Alexa or Google Home in their homes. These devices capture
everything that is said in the room through a microphone and the data is sent
to the cloud (data centres around the world) for analysis.
But when the same people are asked to send
half-hourly data on electricity consumption via a smart meter to the local
electricity provider, they might object saying that they don’t want the electricity
provider to know about their energy use profile.
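The privacy concern is easy to see once half-hourly readings are aggregated. The sketch below uses invented readings for one day (48 half-hour slots) and shows how trivially they reveal an energy-use profile, such as when the household is most active:

```python
# Hypothetical half-hourly smart-meter readings (kWh) for one day: 48 values.
readings = [0.1] * 14 + [0.4] * 4 + [0.2] * 18 + [0.8] * 8 + [0.15] * 4
assert len(readings) == 48

daily_total = sum(readings)
# The slot with the highest consumption hints at household activity patterns.
peak_slot = max(range(48), key=lambda i: readings[i])
peak_time = f"{peak_slot // 2:02d}:{'30' if peak_slot % 2 else '00'}"

print(f"daily total: {daily_total:.2f} kWh, peak at {peak_time}")
```

Even this toy profile exposes an evening consumption peak, which is precisely the kind of inference households may not want their electricity provider to make.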
This discrepancy has to be overcome, and it
is a matter of trust. Dr Habfast said, “People have to trust that the data that
they give is not going to be used against their interest and that their privacy
is not violated. And the second and more important element, and this is new in
the European law, they have to be able to withdraw all their data. All data
that has been collected in the past has to be deleted.”
Handing people control over their data could
help engender trust, so that they can decide, application by
application and service by service, whether to release their personal data in
order to enable a service or whether to refuse.
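The per-service consent model described above can be sketched as a small registry. This is a hypothetical API, not any system Grenoble has announced; the GDPR-style withdrawal is modelled as revoking the grant (with the accompanying duty to delete past data noted in a comment):

```python
# Minimal sketch (hypothetical API) of per-service consent: users decide,
# service by service, whether to release their data, and can withdraw it later.

class ConsentRegistry:
    def __init__(self):
        self._grants = {}  # (user, service) -> True/False

    def grant(self, user, service):
        self._grants[(user, service)] = True

    def withdraw(self, user, service):
        # Under the GDPR, withdrawal also obliges the service to delete
        # the data it has already collected.
        self._grants[(user, service)] = False

    def allowed(self, user, service):
        return self._grants.get((user, service), False)  # default: no consent

registry = ConsentRegistry()
registry.grant("alice", "smart-meter")
assert registry.allowed("alice", "smart-meter")
registry.withdraw("alice", "smart-meter")
assert not registry.allowed("alice", "smart-meter")
assert not registry.allowed("alice", "carpooling")  # never granted
```

The key design choice is that consent defaults to "no" and is scoped per service, mirroring the application-by-application decision Dr Habfast describes.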
“That for us is an important element, which
we would like to implement when we use data for the public services in order to
reduce the carbon footprint.”
The challenge of ensuring inclusive growth
Though Grenoble has lost much of its
manufacturing base, the city is today an R&D and innovation hub.
“Our industrial activity now is more
R&D than manufacturing. It needs to be high value-add in order to be
sustainable as industrial activity in a country like France,” Dr Habfast noted.
With its universities, big companies’ R&D
establishments and startups, one part of the population in Grenoble is prepared
for and is thriving in the 21st century.
But then there are also people who don’t
have a strong academic background, very often without even a high school
degree, who face difficulties finding jobs, leading to high youth
unemployment. This challenge of driving inclusive growth is an urgent one. The
rise of automation and developments in emerging technologies, such as
artificial intelligence, could further compound the situation.
“We don’t want to have a society or a city
running at two speeds. It is important
that we don’t solve our problems only for those who already know how to help
themselves. We must not forget those who need more care and more help, to not
be left behind,” Dr Habfast told us.
He added that youth unemployment among
‘non-properly qualified’ young people foreshadows what other countries
or territories will face in the future.
To tackle the issue, training is important.
At the same time, young people have to be shown that training pays off.
Training and learning take effort, and if someone is unable to find a job
afterwards, for instance because they are from a minority, it can lead to discontent and
a loss of motivation for others.
The same goes for girls. Dr Habfast said
that girls should learn coding and they must not be afraid to enter into tech
jobs because that is where the opportunities are.
Thus, Grenoble is a bit further down the
road in terms of the visible impact of technology, and the city is trying to
learn how to avoid the social drawbacks of technological revolution.
One key aspect of the government’s approach
is to regulate to deal with the ramifications of technology. For instance,
Airbnb is not a problem in Grenoble at the moment. But if it becomes a problem,
the city might introduce regulations, like in other European cities, such as Barcelona,
Amsterdam or Berlin, so that it does not destroy the character of the inner
city and the social structure.
Dr Habfast cautioned that the government
always has to regulate with the citizens and stakeholder bodies. Government must
keep an open ear and listen to the people.
For example, next year, the city will stop
access to the inner part of the metropolitan area for all polluting diesel cars
and delivery vehicles. The delivery lorries will either have to have clean
engines or they have to work through a logistics platform.
It’s important to ensure that nobody is
excluded or left behind when introducing a regulation like this. Alternative
solutions have to be shown if the regulation will make people change the way they travel.
The 21st century is frequently called the age of Artificial Intelligence (AI), prompting questions about its societal implications. It actively transforms numerous processes across various domains, and research ethics (RE) is no exception. Multiple challenges, encompassing accountability, privacy, and openness, are emerging.
Research Ethics Boards (REBs) have been instituted to guarantee adherence to ethical standards throughout research. This scoping review seeks to illuminate the challenges posed by AI in research ethics and assess the preparedness of REBs in evaluating these challenges. Ethical guidelines and standards for AI development and deployment are essential to address these concerns.
To sustain this awareness, the Oak Ridge National Laboratory (ORNL), a part of the Department of Energy, has joined the Trillion Parameter Consortium (TPC), a global collaboration of scientists, researchers, and industry professionals. The consortium aims to address the challenges of building large-scale artificial intelligence (AI) systems and advancing trustworthy and reliable AI for scientific discovery.
ORNL’s collaboration with TPC aligns seamlessly with its commitment to developing secure, reliable, and energy-efficient AI, complementing the consortium’s emphasis on responsible AI. With over 300 researchers utilising AI to address Department of Energy challenges and hosting the world’s most powerful supercomputer, Frontier, ORNL is well-equipped to significantly contribute to the consortium’s objectives.
Leveraging its AI research and extensive resources, the laboratory will be crucial in addressing challenges such as constructing large-scale generative AI models for scientific and engineering problems. Specific tasks include creating scalable model architectures, implementing effective training strategies, organising and curating data for model training, optimising AI libraries for exascale computing platforms, and evaluating progress in scientific task learning, reliability, and trust.
TPC strives to build an open community of researchers developing advanced large-scale generative AI models for scientific and engineering progress. The consortium plans to voluntarily initiate, manage, and coordinate projects to prevent redundancy and enhance impact. Additionally, TPC seeks to establish a global network of resources and expertise to support the next generation of AI, uniting researchers focused on large-scale AI applications in science and engineering.
Prasanna Balaprakash, ORNL R&D staff scientist and director of the lab’s AI Initiative, said, “ORNL envisions being a critical resource for the consortium and is committed to ensuring the future of AI across the scientific spectrum.”
Further, as an international organisation that supports education, science, and culture, the United Nations Educational, Scientific and Cultural Organisation (UNESCO) has established ten principles of AI ethics for scientific research:
- Beneficence: AI systems should be designed to promote the well-being of individuals, communities, and the environment.
- Non-maleficence: AI systems should avoid causing harm to individuals, communities, and the environment.
- Autonomy: Individuals should have the right to control their data and to make their own decisions about how AI systems are used.
- Justice: AI systems should be designed to be fair, equitable, and inclusive.
- Transparency: AI systems’ design, operation, and outcomes should be transparent and explainable.
- Accountability: There should be clear lines of responsibility for developing, deploying, and using AI systems.
- Privacy: The privacy of individuals should be protected when data is collected, processed, and used by AI systems.
- Data security: Data used by AI systems should be secure and protected from unauthorised access, use, disclosure, disruption, modification, or destruction.
- Human oversight: AI systems should be subject to human management and control.
- Social and environmental compatibility: AI systems should be designed to be compatible with social and ecological values.
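The ten principles above lend themselves to a simple review checklist. The sketch below is purely illustrative (UNESCO does not publish this as code): an ethics board's draft assessment is checked against the list, returning whichever principles remain unaddressed.

```python
# Illustrative sketch: the ten UNESCO principles as a review checklist
# an ethics board might walk through for a proposed AI system.
UNESCO_PRINCIPLES = [
    "beneficence", "non-maleficence", "autonomy", "justice", "transparency",
    "accountability", "privacy", "data security", "human oversight",
    "social and environmental compatibility",
]

def review(assessment: dict) -> list:
    """Return the principles not yet addressed in the draft assessment."""
    return [p for p in UNESCO_PRINCIPLES if not assessment.get(p)]

draft = {"privacy": True, "transparency": True}  # hypothetical partial review
print(review(draft))  # lists the eight principles still to be addressed
```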
ORNL’s AI research portfolio stretches back to 1979 and the launch of the Oak Ridge Applied Artificial Intelligence Project, and its work today aligns with the UNESCO principles. The AI Initiative focuses on developing secure, trustworthy, and energy-efficient AI across various applications, showcasing the laboratory’s commitment to advancing AI in fields ranging from biology to national security. The collaboration with TPC reinforces ORNL’s dedication to driving breakthroughs in large-scale scientific AI, aligning with the global agenda for implementing AI ethics.
All institutions rely on IT to deliver services. Disruption, degradation, or unauthorised alteration of information and systems can impact an institution’s condition, core processes, and risk profile. Furthermore, organisations are expected to make quick decisions due to the rapid pace of dynamic transformation. To stay competitive, data is a crucial resource for tackling this challenge.
Hence, data protection is paramount in safeguarding the integrity and confidentiality of this invaluable resource. Organisations must implement robust security measures to prevent unauthorised access, data breaches, and other cyber threats that could compromise sensitive information.
Prasert Chandraruangthong, Minister of Digital Economy and Society, supports the National Agenda in fortifying personal data protection. Asst Prof Dr Veerachai Atharn, Assistant Director of the National Science and Technology Development Agency, Science Park, and Dr Siwa Rak Siwamoksatham, Secretary-General of the Personal Data Protection Committee, gave welcome speeches, marking that the training aims to bolster knowledge about data protection among the citizens of Thailand.
Data protection is the responsibility not only of organisations but also of individuals, Minister Prasert Chandraruangthong emphasised. Thailand has collaboratively developed a comprehensive plan of measures to foster a collective defence against cyber threats to data privacy.
The Ministry of Digital Economy and Society and the Department of Special Investigation (DSI) will expedite efforts to block illegal trading of personal information. Offenders will be actively pursued, prosecuted, and arrested to ensure a swift and effective response in safeguarding the privacy and security of individuals’ data.
This strategy underscores the government’s commitment to leveraging digital technology to fortify data protection measures and create a safer online environment for all citizens by partnering with other entities.
Further, many countries worldwide share these cybersecurity concerns. In Thailand’s neighbouring country, Indonesia, the government has noted that data privacy is a crucial aspect that demands attention. Indonesia has recognised the paramount importance of safeguarding individuals’ privacy and has taken significant steps to engage stakeholders in a collaborative effort to strengthen children’s online safety.
Nezar Patria, Deputy Minister of the Ministry of Communication and Information of Indonesia, observed that children encounter an abundance of online information and content, which can lead to unwanted exposure and potential risks, particularly as artificial intelligence has evolved.
Patria stressed the crucial role of AI, emphasising the importance of implementing automatic content filters and moderation to counteract harmful content. AI can be used to detect cyberbullying through security measures and by recognising the patterns of cyberbullying perpetrators. It can also identify perpetrators of online violence through behavioural detection in the digital space and enhance security and privacy protection. Moreover, AI can assist parents in monitoring screen time, ensuring that children maintain a balanced and healthy level of engagement with digital devices.
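Pattern-based detection of repeat offenders can be conveyed with a deliberately simplified sketch. Real moderation systems use trained models rather than keyword lists; the terms, messages, and threshold below are invented for illustration only:

```python
# Toy illustration: flag senders who repeatedly send messages matching
# abusive patterns. Real systems use trained classifiers, not keyword lists.
from collections import Counter

ABUSIVE_TERMS = {"idiot", "loser"}  # placeholder terms for the sketch

def flag_senders(messages, threshold=2):
    """Flag senders whose abusive-message count reaches the threshold."""
    hits = Counter()
    for sender, text in messages:
        if any(term in text.lower() for term in ABUSIVE_TERMS):
            hits[sender] += 1
    return {s for s, n in hits.items() if n >= threshold}

msgs = [("u1", "You idiot"), ("u2", "See you at 5"), ("u1", "Loser!!")]
print(flag_senders(msgs))  # {'u1'}
```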
Conversely, the presence of generative AI technology, such as deep fake, enables the manipulation of photo or video content, potentially leading to the creation of harmful material with children as victims. Patria urged collaborative discussions among all stakeholders involved in related matters to harness AI technology for the advancement and well-being of children in Indonesia.
In the realm of digital advancements, cybersecurity is the priority right now. Through public awareness campaigns, workshops, and training initiatives, nations aim to empower citizens with the knowledge to identify, prevent, and respond to cyber threats effectively. The ongoing commitment to cybersecurity reflects the country’s dedication to ensuring a secure and thriving digital future for its citizens and the broader digital community.
The Western Australian government has unveiled a comprehensive set of measures aimed at reducing bureaucratic hurdles, alleviating work burdens, and fostering a conducive environment for educators to focus on teaching. The region’s Education Minister, Dr Tony Buti, spearheading this initiative, took into account the insights from two pivotal reports and explored the potential of AI tools to revamp policies and processes.
In the wake of an in-depth review into bureaucratic complexities earlier this year, Minister Buti carefully considered the outcomes of the Department of Education’s “Understanding and Reducing the Workload of Teachers and Leaders in Western Australian Public Schools” review and the State School Teachers’ Union’s “Facing the Facts” report. Both reports shed light on the escalating intricacies of teaching and the primary factors contributing to workloads for educators, school leaders, and institutions.
Embracing technology as a key driver for change, the government is contemplating the adoption of AI, drawing inspiration from successful trials in other Australian states. The objective is to modernise and enhance the efficiency of professional learning, lesson planning, marking, and assessment development. AI tools also hold promise in automating tasks such as excursion planning, meeting preparations, and general correspondence, thereby mitigating the burden on teachers.
Collaborating with the School Curriculum and Standards Authority, as well as the independent and Catholic sectors, the government aims to explore AI applications to streamline curriculum planning and elevate classroom teaching. The integration of AI is envisioned to usher in a new era of educational efficiency.
In consultation with unions, associations, principals, teachers, and administrative staff, the Department of Education has identified a range of strategies to alleviate the workload for public school educators immediately, in the short term, and over the long term.
Among these strategies, a noteworthy allocation of AU$2.26 million is earmarked for a trial involving 16 Complex Behaviour Support Coordinators. These coordinators will collaborate with public school leaders to tailor educational programs for students with disabilities and learning challenges.
Furthermore, a pioneering pilot project, jointly funded by State and Federal Governments, seeks to digitise paper-based school forms, reducing red tape and providing a consistent, accessible, and efficient method for sharing information online. Each digital submission is anticipated to save 30 minutes of staff time compared to its paper-based counterpart. Additionally, efforts are underway to simplify the process related to the exclusion of public school students while enhancing support to schools.
As part of the broader effort to support schools, the ‘Connect and Respect’ program, which outlines expectations for appropriate relationships with teachers, is set to undergo expansion. This includes creating out-of-office templates and establishing boundaries on when it is acceptable to contact staff after working hours. The overarching goal is to minimise misunderstandings and conflicts, fostering a healthier work-life balance for teaching staff.
The Education Minister expressed his commitment to reducing administrative tasks that divert teachers from their core mission of educating students. Acknowledging the pervasive nature of this challenge, the Minister emphasised the government’s determination to create optimal conditions for school staff to focus on their primary roles.
In his remarks, the Minister underscored the significance of these initiatives, emphasising their positive impact in ensuring that teachers can dedicate their time and energy to helping every student succeed. The unveiled measures represent a pivotal step toward realising the government’s vision of a streamlined, technology-enhanced educational landscape that prioritises the well-being of educators and, ultimately, the success of students.
Liming Zhu and Qinghua Lu, leaders in the study of responsible AI at CSIRO and co-authors of Responsible AI: Best Practices for Creating Trustworthy AI Systems, delve into the realm of responsible AI through their extensive work and research.
Artificial Intelligence (AI), currently a major focal point, is revolutionising almost all facets of life, presenting entirely novel methods and approaches. The latest trend, Generative AI, has taken the helm, crafting content from cover letters to campaign strategies and conjuring remarkable visuals from scratch.
Global regulators, leaders, researchers and the tech industry grapple with the substantial risks posed by AI. Ethical concerns loom large due to human biases, which, when embedded in AI training, can exacerbate discrimination. Mismanaged data without diverse representation can lead to real harm, evidenced by instances like biased facial recognition and unfair loan assessments. These underscore the need for thorough checks before deploying AI systems to prevent such harmful consequences.
The looming threat of AI-driven misinformation, including deepfakes and deceptive content, is concerning for everyone, raising fears of identity impersonation online. The pivotal question remains: How do we harness AI’s potential for positive impact while effectively mitigating its capacity for harm?
Responsible AI involves the conscientious development and application of AI systems to benefit individuals, communities, and society while mitigating potential negative impacts, Liming Zhu and Qinghua Lu advocate.
The principles they advocate emphasise eight key areas for ethical AI practice. Firstly, AI should prioritise human, societal, and environmental well-being throughout its lifecycle, exemplified by its use in healthcare or environmental protection. Secondly, AI systems should uphold human-centred values, respecting rights and diversity. However, reconciling different user needs poses challenges. Ensuring fairness is crucial to prevent discrimination, highlighted by critiques of technologies like Amazon’s Facial Recognition.
Moreover, maintaining privacy protection, reliability, and safety is imperative. Instances like Clearview AI’s privacy breaches underscore the importance of safeguarding personal data and conducting pilot studies to prevent unforeseen harms, as witnessed with the chatbot Tay generating offensive content due to vulnerabilities.
Transparency and explainability in AI use are vital, requiring clear disclosure of AI limitations. Contestability enables people to challenge AI outcomes or usage, while accountability demands identification and responsibility from those involved in AI development and deployment. Upholding these principles can encourage ethical and responsible AI behaviour across industries, ensuring human oversight of AI systems.
Identifying problematic AI behaviour can be challenging, especially when AI algorithms drive high-stakes decisions impacting specific individuals. An alarming instance in the U.S. resulted in a longer prison sentence determined by an algorithm, showcasing the dangers of such applications. Qinghua highlighted the issue with “black box” AI systems, where users and affected parties lack insight into and means to challenge decisions made by these algorithms.
Liming emphasised the inherent complexity and autonomy of AI, making it difficult to ensure complete compliance with responsible AI principles before deployment. Therefore, user monitoring of AI becomes crucial. Users must be vigilant and report any violations or discrepancies to the service provider or authorities.
Holding AI service and product providers accountable is essential in shaping a future where AI operates ethically and responsibly. This call for vigilance and action from users is instrumental in creating a safer and more accountable AI landscape.
Australia is committed to the fair and responsible use of technology, especially artificial intelligence. During discussions held on the sidelines of the APEC Economic Leaders Meeting in San Francisco, the Australian Prime Minister unveiled the government’s commitment to responsibly harnessing generative artificial intelligence (AI) within the public sector.
The DTA-facilitated collaboration showcases the Australian Government’s proactive investment in preparing citizens for job landscape changes. Starting a six-month trial from January to June 2024, Australia leads globally in deploying advanced AI services. This initiative enables APS staff to innovate using generative AI, aiming to overhaul government services and meet evolving Australian needs.
Vietnamese companies are actively implementing several measures to ready themselves for an artificial intelligence (AI)-centric future. According to an industry survey released recently, 99% of organisations have either established a robust AI strategy or are currently in the process of developing one.
Over 87% of organisations are categorised as either fully or partially prepared, with only 2% falling into the category of not prepared. This indicated a significant level of focus by C-Suite executives and IT leadership, possibly influenced by the unanimous attitude among respondents that the urgency to implement AI technologies in their organisations has heightened in the past six months. Notably, IT infrastructure and cybersecurity emerged as the foremost priority areas for AI deployments. However, only 27% of organisations in Vietnam are fully prepared to deploy and leverage AI-powered technologies.
The survey included over 8,000 global companies and was created in response to the rapid adoption of AI, a transformative shift affecting nearly every aspect of business and daily life. The report emphasises the readiness of companies to leverage and implement AI, revealing significant gaps in crucial business pillars and infrastructures that pose substantial risks in the near future.
The survey assessed companies on 49 different metrics across six pillars to determine a readiness score for each pillar, as well as an overall readiness score for the respondent’s organisation. Each indicator was weighted individually based on its relative importance in achieving readiness for the respective pillar. Organisations were classified into four groups based on their overall scores: Pacesetters (fully prepared), Chasers (moderately prepared), Followers (limited preparedness), and Laggards (unprepared).
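The weighted-indicator scheme can be sketched concretely. The indicator names, weights, and group cut-offs below are illustrative assumptions, since the survey's actual weighting is not public:

```python
# Sketch of the scoring scheme described: weighted indicators roll up to a
# pillar score, and the overall score maps to one of four readiness groups.
# Weights and cut-offs here are illustrative assumptions, not the survey's own.

def pillar_score(indicators: dict, weights: dict) -> float:
    """Weighted average of indicator scores (each 0-100) for one pillar."""
    total_weight = sum(weights.values())
    return sum(indicators[k] * w for k, w in weights.items()) / total_weight

def classify(overall: float) -> str:
    if overall >= 85: return "Pacesetter"   # fully prepared
    if overall >= 60: return "Chaser"       # moderately prepared
    if overall >= 35: return "Follower"     # limited preparedness
    return "Laggard"                         # unprepared

indicators = {"data_silos": 40, "gpu_capacity": 70, "security": 60}
weights    = {"data_silos": 2.0, "gpu_capacity": 1.0, "security": 1.0}
score = pillar_score(indicators, weights)
print(score, classify(score))  # 52.5 Follower
```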
Although AI adoption has been steadily advancing for decades, the recent strides in Generative AI, coupled with its increased public accessibility in the past year, have heightened awareness of the challenges, transformations, and new possibilities presented by this technology.
Despite 92% of respondents acknowledging that AI will have a substantial impact on their business operations, the technology raises concerns regarding data privacy and security. The findings showed that companies experience the most challenges when leveraging AI alongside their data; 68% of respondents attributed this to data existing in silos across their organisations.
As per an industry expert, in the race to implement AI solutions, companies should assess where investments are needed to ensure their infrastructure can best support the demands of AI workloads. It is equally important for organisations to monitor the context in which AI is used, ensuring that factors such as ROI, security, and responsibility are accounted for.
The country is working to foster a skilled workforce in AI to actively contribute to the expansion of Vietnam’s AI ecosystem and its sustainability. As per data from the World Intellectual Property Organisation (WIPO) last year, there were over 1,600 individuals in Vietnam who were either studying or engaged in AI-related fields. However, the actual number of professionals actively working in AI within Vietnam was relatively low, with only around 700 individuals, including 300 experts, involved in this specialised work. Considering the substantial IT workforce of nearly 1 million employees in Vietnam, the availability of AI human resources remains relatively limited.
To tackle this challenge, businesses can recruit AI experts globally or collaborate with domestic and international training institutions to enhance the skills of existing talent. They can also partner with universities to offer advanced degrees in data science and AI for the current engineering workforce, fostering synergy between academic institutions and industry demands.
A research initiative spearheaded by the University of Wollongong (UOW) has secured a substantial grant of AU$445,000 under the Australian Research Council (ARC) Linkage Projects Scheme. The primary focus of this project is to enhance the security protocols for unmanned aerial vehicles (UAVs), commonly known as drones, in the face of potential adversarial machine-learning attacks. The funding underscores the significance of safeguarding critical and emerging technologies, aligning with the strategic vision of the Australian Government.
Heading the project is Distinguished Professor Willy Susilo, an internationally recognised authority in the realms of cyber security and cryptography. Professor Susilo, expressing the overarching goal of the research, emphasised the deployment of innovative methodologies to fortify UAV systems against adversarial exploits targeting vulnerabilities within machine learning models.
Collaborating on this ambitious endeavour are distinguished researchers from the UOW Faculty of Engineering and Information Sciences. The team comprises Associate Professor Jun Yan, Professor Son Lam Phung, Dr Yannan Li, Associate Professor Yang-Wai (Casey) Chow, and Professor Jun Shen. Collectively, their expertise spans various domains essential to the comprehensive understanding and mitigation of cyber threats posed to UAVs.
Highlighting the broader implications of the project, Professor Susilo underscored the pivotal role UAV-related technologies play in contributing to Australia’s economic, environmental, and societal well-being. From facilitating logistics and environmental monitoring to revolutionising smart farming and disaster management, the potential benefits are vast. However, a significant hurdle lies in the vulnerability of machine learning models embedded in UAV systems to adversarial attacks, impeding their widespread adoption across industries.
The project’s core objective revolves around developing robust defences tailored to UAV systems, effectively shielding them from adversarial machine-learning attacks. The research team aims to scrutinise various attack vectors on UAVs and subsequently devise countermeasures to neutralise these threats. By doing so, they anticipate a substantial improvement in the security posture of UAV systems, thus fostering increased reliability in their application for transport and logistics services.
Professor Susilo emphasised that the enhanced security measures resulting from this research would play a pivotal role in bolstering the widespread adoption of UAVs, particularly in supporting both urban and regional communities. This is particularly pertinent given the multifaceted advantages UAVs offer, ranging from efficiency in logistics to rapid response capabilities in disaster management scenarios.
The significance of the project extends beyond academic realms, with Deloitte Access Economics projecting profound economic and employment impacts. The Australian UAV industry is expected to generate a substantial 5,500 new jobs annually, contributing significantly to the nation’s Gross Domestic Product with an estimated increase of AU$14.5 billion by 2040. Additionally, the research outcomes are anticipated to yield cost savings of AU$9.3 billion across various sectors.
The ARC Linkage Program, which serves as the backbone for this collaborative initiative, actively promotes partnerships between higher education institutions and other entities within the research and innovation ecosystem. Noteworthy partners in this venture include Sky Shine Innovation, Hover UAV, Charles Sturt University, and the University of Southern Queensland, collectively contributing to the multidimensional expertise required for the project’s success.
The UOW-led project represents a concerted effort to fortify the foundations of UAV technology by addressing critical vulnerabilities posed by adversarial machine-learning attacks. Beyond the academic realm, the outcomes of this research hold the promise of reshaping Australia’s technological landscape, ushering in an era of increased reliability, economic growth, and job creation within the burgeoning UAV industry.
The National Research and Innovation Agency (BRIN), through the Centre for Artificial Intelligence and Cyber Security Research, is developing a research project that applies Artificial Intelligence (AI) algorithms to malaria diagnosis. The research aims to design and build a computer-based diagnosis system, enriched with AI algorithms, to determine whether a patient has malaria.
Through AI integration, this system can identify whether a patient is affected by malaria and continue the diagnostic process by identifying the species of plasmodia and the life stage attacking red blood cells.
This step enhances the accuracy of malaria diagnosis and opens opportunities for developing more precise and customised treatments.
Artificial intelligence significantly contributes to the efficiency of diagnostic processes, providing more accurate results and enabling faster and more effective treatment for patients infected with malaria.
AI technology in the malaria diagnosis process reflects a development in medicine and health technology, expanding the potential to detect and manage infectious diseases.
Thus, this research not only reflects scientific progress but also has the potential to make a significant positive impact on global efforts to combat infectious diseases, particularly malaria.
Currently, malaria is predominantly a concern in tropical and subtropical regions. Three diagnostic methods exist to identify plasmodium parasites in the blood: Rapid Diagnostic Test, Polymerase Chain Reaction, and peripheral blood smear microphotograph, which has become the standard.
The Head of the Centre for Artificial Intelligence and Cyber Security Research, Anto Satriyo, emphasised that the morphology of plasmodia changes over time, so the diagnostic system is built to identify each life stage of each parasite. “We also developed a system to automate the diagnosis using Arduino, as hundreds of fields of view (thick smear: 200 leucocytes, thin smear: 500-1000 erythrocytes) need to be analysed for the final diagnosis decision, particularly during Mass Blood Surveys in malaria-prone areas, especially in Eastern Indonesia,” he explained.
The previously developed Thick Blood Smear Microphotograph CAD Malaria system runs a diagnostic process that begins with reading a thick blood smear slide in the first field of view. After being captured by the camera, the image is processed by the application to determine the presence of malaria parasites. “The image is then saved as data. After completion, the motor control system will shift the slide to the right to obtain the second adjacent field of view. The next process is taking images of the second field of view for analysis and storage. This process is repeated until the minimum number of fields of view diagnosed is reached,” he said.
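The capture-analyse-shift loop described above can be sketched in a few lines. This is a hypothetical outline only: the function names (`capture_image`, `detect_parasites`, `shift_slide_right`) are placeholders for the camera, AI model, and Arduino motor control, not BRIN's actual API.

```python
def scan_slide(min_fields, capture_image, detect_parasites, shift_slide_right):
    """Repeat the capture/analyse/shift cycle until the minimum
    number of fields of view has been diagnosed."""
    results = []
    for _ in range(min_fields):
        image = capture_image()                  # camera grabs current field
        results.append(detect_parasites(image))  # AI model analyses the image
        shift_slide_right()                      # motor shifts slide to next field
    return results

# Demo with stub hardware/model functions:
fields = scan_slide(
    min_fields=3,
    capture_image=lambda: 1,
    detect_parasites=lambda img: img + 1,  # dummy per-field result
    shift_slide_right=lambda: None,
)
print(fields)  # three per-field results collected
```

Passing the hardware and model steps in as functions keeps the scanning logic testable without a microscope attached.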
Experimental data were collected from various regions in Indonesia by the Malaria Laboratory of the Eijkman Centre for Molecular Biology Research while the diagnosis system was being built. “It is envisioned that the system can be used to assist field officers so that diagnoses can be made faster and more accurately,” he concluded.
AI technology in malaria diagnosis represents a significant breakthrough in medical efforts to improve the accuracy, efficiency, and speed of the diagnostic process for this disease. Beyond clinical benefits, the development of this system also drives advancements in knowledge and technology in the fields of artificial intelligence and medicine.
Continued exploration of this technology will benefit not only malaria diagnosis but also the understanding and management of other infectious diseases. As a result, applying artificial intelligence in malaria diagnosis opens the path toward more advanced, responsive, and targeted healthcare.