
Given technological advancements and the rapid proliferation of the Internet of Things (IoT), our world is increasingly interconnected. Governments and businesses across the globe also seek to leverage technology to improve their products and services for citizens and customers. While digital technologies present new opportunities and transform the way we live and work, digital disruption also brings new challenges, particularly in cybersecurity.
Recently, OpenGov had the privilege to speak to Mr Stephan Neumeier, Managing Director of Kaspersky Lab Asia Pacific, on the fast-changing cybersecurity landscape in the Asia Pacific region and how organisations can better prepare themselves to deal with cybersecurity threats.
When asked to comment on how the cybersecurity landscape has evolved and on some of the emerging trends, Mr Neumeier observed that 2017 saw “the most intensive of cybersecurity incidents”.
“Unfortunately, most of what our researchers at Kaspersky Lab projected came to fruition — espionage has gone mobile, APTs attacked enterprise networks, financial attacks continued, a new wave of ransomware attacks came about, critical ICS processes were disrupted, poorly secured IoT devices were targeted, and even information warfare figured last year,” he said.
This year, he has seen a continuation of these attacks and much more, as the themes and trends build on each other year after year, further expanding a threat landscape in which individuals, businesses and governments are relentlessly pursued and attacked.
Cybersecurity challenges in Asia Pacific
Different markets may face different challenges, depending on their capabilities to tackle and mitigate cybersecurity threats. OpenGov asked Mr Neumeier whether the Asia Pacific region faces challenges similar to, or distinct from, those in the rest of the world.
According to him, the Asia Pacific market is very different from other regions globally, especially from a cultural perspective.
“It is a very young region with a very significant number of millennials growing up. Not to mention that it consists of the two most populated countries in the world, China and India. As the internet becomes a major part of our lives, these young generations require access to fast internet. To cater to these needs, countries across APAC are making massive investments in infrastructure to improve internet speed, making sure that they are not left behind and can keep up with the growth of the technology space,” he said.
“With that, in the last few years, infrastructure within APAC countries has begun to approach the quality of European countries such as Switzerland and Germany. However, from a cybersecurity perspective, awareness and understanding are not at the same level as in those mature countries, and this is a huge challenge for the APAC region. This is why, in this market, we should focus more on cybersecurity education and awareness.”
On the level of cybersecurity awareness in the region, Mr Neumeier pointed out that although the region has a large number of active Internet users, awareness of cybersecurity among them remains low.
Unfortunately, this low level of cybersecurity awareness combined with high Internet usage has made the region's Internet users prime targets of cyber attacks, such as when the Naikon APT targeted top-level government agencies and civil and military organisations, when the WannaCry and Petya ransomware outbreaks began, or when the Mirai malware unleashed DDoS attacks.
“Additionally, bring your own device (BYOD) is the big trend affecting how businesses operate online, with 72% of companies expecting to use the concept extensively in the near future, according to a survey by B2B International on behalf of Kaspersky Lab. It’s inevitable that in any company, small or large, many employees will use personal devices to connect to the corporate network and access confidential data. That’s why companies need to implement policies that safeguard both corporate and personal mobile devices,” he said.
“As a society, we need to find ways to raise awareness of the risks associated with online activity and develop effective methods to minimise these risks. Technology is at the core of any solution to cybersecurity, but it is most important to incorporate the human dimension of security so we can effectively mitigate the risk,” he added.
“All it takes is a single person to bring it all down”
On the biggest cybersecurity threats organisations face today, Mr Neumeier highlighted the human factor in IT security, calling it the “most common security vulnerability”.
He cited a recent global study on cybersecurity awareness conducted by Kaspersky Lab, involving about 5,000 businesses, which showed that organisations face a very real threat from within. According to the study, careless or uninformed employees are the top cause of data leakage in organisations worldwide, accounting for about 52% of incidents.
“A closer look at this study reveals that despite the rapid proliferation of destructive and more complex malware and Trojans, organisations should be more concerned about their most important asset – their people,” he said.
“You can have the best technical means and the most thought-out security policy but it is never enough to protect your organisation from cyberthreats. All it takes is a single person to bring it all down,” he added.
However, he also pointed out that in most cases the breach is unintentional, because the employee in question is unaware of threats and lacks basic cybersecurity knowledge. According to the same study, about 65% of organisations already invest in employee cybersecurity training to close this loophole.
Data breaches affect both large and small organisations, with average losses currently passing the $1 million mark, a significant jump over the past two years.
“For enterprises, the average cost of one incident from March 2017 to February 2018 reached $1.23 million, 24% higher than in 2016–2017. For SMBs, it’s an average of $120,000 per cyber incident, $32,000 more than a year ago,” he shared.
Mr Neumeier reiterated that whether it is a massive cybersecurity incident or a small-scale one, about 80% are caused by human error.
“More than ever, cybersecurity awareness and education are critical requirements for organisations of any size faced with the prospect of falling prey to cybercriminals. At this point, organisations regardless of size definitively need solutions that provide centralised security management of networks, combined with training that zeroes in on the ‘how’ part of the equation.”
Importance of an effective cybersecurity strategy
Organisations need to develop an effective, all-round cybersecurity strategy to protect their assets and interests. Mr Neumeier recommended a cyclical approach of continuous monitoring and analytics in building an effective cybersecurity strategy.
“Twenty years in the industry has taught us that what makes the most sense for enterprise IT infrastructure to have true cybersecurity is to put in place a cyclical adaptive security framework. This would have to be a flexible, proactive, multi-layered protection infrastructure which dynamically adapts and responds to the ever-changing threat landscape,” he said.
According to him, Kaspersky Lab’s security architecture is based on a cycle of activities comprising four key segments: Prevent, Detect, Respond, and Predict.
He continued, “At the core of Kaspersky Lab’s True Cybersecurity is HuMachine Intelligence, a seamless fusion of Big Data-based Threat Intelligence, Machine Learning and Human Expertise. We have designed it so because we believe we’re in a never-ending arms race — IT threats are dramatically evolving day in and day out, and here we are totally focused on following the trail of hackers and further refining our solutions so we stay ahead of them. It’s a continuous process.”
Key components of cybersecurity resilience
On how threat intelligence and endpoint detection can protect organisations and boost their ability to respond to threats, Mr Neumeier noted that targeted attacks became one of the fastest-growing threats in 2017.
“It used to be that organisations employed endpoint protection platforms (EPP) to control known threats such as traditional malware, or unknown viruses that might take a new form of malware directed at endpoints. However, cybercrime techniques have evolved significantly, and attack processes have become aggressive and expansive in recent years,” said Mr Neumeier.
It is alarming that the specifics of the targeted attacks that cybercriminals use, and the technological limitations of traditional endpoint protection products, mean that a conventional cybersecurity approach is no longer sufficient.
The cost of incidents associated with simple threats is negligible at about US$10,000, compared with an advanced persistent threat (APT) attack, which would set an organisation back about US$926,000.
“To withstand targeted attacks and APT-level threats on endpoints, organisations need to consider EPP with endpoint detection and response (EDR) functionalities,” the expert said.
“EDR is a cybersecurity technology that addresses the need for real-time monitoring, focusing heavily on security analytics and incident response on corporate endpoints. It delivers true end-to-end visibility into the activity of every endpoint in the corporate infrastructure, managed from a single console, together with valuable security intelligence for use by an IT security expert in further investigation and response,” he explained.
According to Mr Neumeier, an organisation needs EDR if it is looking for proactive detection of new or unknown threats and of previously unidentified infections penetrating directly through endpoints and servers. This is achieved by analysing events in the grey zone: those objects or processes that fall into neither the “trusted” nor the “definitely malicious” category.
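As a concrete illustration of this grey-zone idea, here is a minimal Python sketch of such triage: events that match neither a trusted allowlist nor a known-malicious blocklist are escalated for analysis. The event fields, hash sets, and verdict labels are assumptions made for the example, not Kaspersky's implementation, where verdicts would come from threat intelligence and behavioural analytics rather than static lists.

```python
from dataclasses import dataclass

# Illustrative allowlist/blocklist; real EDR verdicts come from threat
# intelligence, reputation services and behavioural analytics.
TRUSTED_HASHES = {"3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"}
MALICIOUS_HASHES = {"b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9"}

@dataclass
class EndpointEvent:
    host: str
    process: str
    sha256: str

def triage(event: EndpointEvent) -> str:
    """Sort an endpoint event into trusted, malicious, or the grey zone."""
    if event.sha256 in MALICIOUS_HASHES:
        return "malicious"   # known bad: block and alert
    if event.sha256 in TRUSTED_HASHES:
        return "trusted"     # known good: no action needed
    return "grey"            # neither verdict applies: escalate to an analyst

if __name__ == "__main__":
    event = EndpointEvent("hr-laptop-07", "invoice_viewer.exe", "0" * 64)
    print(triage(event))     # -> "grey": unknown binary, worth investigating
```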
Depending on each organisation’s maturity and experience in the field of security, and the availability of necessary resources, some businesses will find it most effective to use their own expertise for endpoint security but will take advantage of outsourced resources for more complex aspects.
Meanwhile, they can build up in-house expertise with skills training, through access to a threat intelligence portal and APT intelligence reporting, and by using threat data feeds. Alternatively, and particularly attractive for overwhelmed or understaffed security departments, they can adopt third-party professional services from the outset.
Kaspersky Lab’s approach to endpoint protection includes the following components: Kaspersky Endpoint Security, Kaspersky Endpoint Detection and Response, and Kaspersky Cybersecurity Services.
For organisations unable, for reasons of regulatory compliance, to release or transfer any corporate data outside their environment, or that require complete infrastructure isolation, Kaspersky Private Security Network provides most of the benefits of global cloud-based threat intelligence as provided by Kaspersky Security Network (KSN), without any data ever leaving the controlled perimeter.
To counteract advanced threats and targeted attacks, businesses need automated tools and services designed to complement each other and help security teams prevent most attacks, detect unique new threats rapidly, handle live attacks, respond to attacks in a timely manner, and predict future threats.
On prevention as a key line of defence, Mr Neumeier suggested measures that organisations can take to prevent cybersecurity incidents:
“We cannot emphasise it enough – preventing cybersecurity incidents from happening, or from damaging our organisation’s finances or reputation, starts with raising awareness and education.”
To this end, he urged organisations to strengthen the weakest links, toughen target systems and assets, and improve the effectiveness of current solutions to keep up with modern threats.
At the same time, Mr Neumeier emphasised the importance for organisations of being well equipped with threat intelligence.
“This means moving from a reactive security model to a proactive one based on risk management, continuous monitoring, more informed incident response and threat-hunting capabilities,” he said.
“As we say at Kaspersky Lab, prediction is doing more to guard against future threats. Having access to cybersecurity experts who will keep organisations updated on the constantly changing global threat landscape and help them test their systems and existing defences is a vital element in helping them adapt and keep pace with emerging security challenges.”
Tips on how to keep up with the fast-changing cybersecurity landscape
As we face increasing cybersecurity challenges, what can organisations and individuals do to protect themselves?
For organisations, Mr Neumeier spoke of the importance of cybersecurity training and of adopting a cyclical approach to cybersecurity strategy.
“Based on how we conduct our cybersecurity training, here are two quick tips: one, avoid abstract information and focus on practical skills; two, instruct different groups of employees differently,” he shared.
“Educating staff on the motivations behind security policies, the importance of working safely and how to contribute to the security of their organisations can help mitigate the risk of security incidents and safeguard what is truly important – their data.”
He also underscored the importance of adopting a new mindset in the face of new threats. Here are some of the best practices he shared on how individuals can be risk-ready in a world of advanced attacks and epidemic outbreaks:
1. Remember the weakest link. Be aware and knowledgeable about cybersecurity.
2. Invest in technology. Shift your focus towards a proactive protection approach that goes beyond prevention; it should be adaptive, advanced, predictive and involve human expertise.
3. Back up your data.
4. Encrypt sensitive data.
5. Secure your network with a strong password.
“There are today a great many highly motivated cybercriminals who will try to find every point of vulnerability in an Internet user or an organisation to get what they want. Most of the time, the road to remediation and recovery is complicated and expensive, whether the victim is an individual or an institution,” he concluded.
The 21st century is frequently called the age of Artificial Intelligence (AI), prompting questions about its societal implications. AI is actively transforming numerous processes across various domains, and research ethics (RE) is no exception. Multiple challenges, encompassing accountability, privacy, and openness, are emerging.

Research Ethics Boards (REBs) have been instituted to guarantee adherence to ethical standards throughout research. This scoping review seeks to illuminate the challenges posed by AI in research ethics and assess the preparedness of REBs in evaluating these challenges. Ethical guidelines and standards for AI development and deployment are essential to address these concerns.
To sustain this awareness, Oak Ridge National Laboratory (ORNL), a part of the US Department of Energy, has joined the Trillion Parameter Consortium (TPC), a global collaboration of scientists, researchers, and industry professionals. The consortium aims to address the challenges of building large-scale artificial intelligence (AI) systems and to advance trustworthy and reliable AI for scientific discovery.
ORNL’s collaboration with TPC aligns seamlessly with its commitment to developing secure, reliable, and energy-efficient AI, complementing the consortium’s emphasis on responsible AI. With over 300 researchers utilising AI to address Department of Energy challenges and hosting the world’s most powerful supercomputer, Frontier, ORNL is well-equipped to significantly contribute to the consortium’s objectives.
Leveraging its AI research and extensive resources, the laboratory will be crucial in addressing challenges such as constructing large-scale generative AI models for scientific and engineering problems. Specific tasks include creating scalable model architectures, implementing effective training strategies, organising and curating data for model training, optimising AI libraries for exascale computing platforms, and evaluating progress in scientific task learning, reliability, and trust.
TPC strives to build an open community of researchers developing advanced large-scale generative AI models for scientific and engineering progress. The consortium plans to voluntarily initiate, manage, and coordinate projects to prevent redundancy and enhance impact. Additionally, TPC seeks to establish a global network of resources and expertise to support the next generation of AI, uniting researchers focused on large-scale AI applications in science and engineering.
Prasanna Balaprakash, ORNL R&D staff scientist and director of the lab’s AI Initiative, said, “ORNL envisions being a critical resource for the consortium and is committed to ensuring the future of AI across the scientific spectrum.”
Further, the United Nations Educational, Scientific and Cultural Organisation (UNESCO), an international organisation that supports education, science, and culture, has established ten principles of AI ethics for scientific research:
- Beneficence: AI systems should be designed to promote the well-being of individuals, communities, and the environment.
- Non-maleficence: AI systems should avoid causing harm to individuals, communities, and the environment.
- Autonomy: Individuals should have the right to control their data and to make their own decisions about how AI systems are used.
- Justice: AI systems should be designed to be fair, equitable, and inclusive.
- Transparency: AI systems’ design, operation, and outcomes should be transparent and explainable.
- Accountability: There should be clear lines of responsibility for developing, deploying, and using AI systems.
- Privacy: The privacy of individuals should be protected when data is collected, processed, and used by AI systems.
- Data security: Data used by AI systems should be secure and protected from unauthorised access, use, disclosure, disruption, modification, or destruction.
- Human oversight: AI systems should be subject to human management and control.
- Social and environmental compatibility: AI systems should be designed to be compatible with social and ecological values.
ORNL’s AI research portfolio dates back to 1979 and the launch of the Oak Ridge Applied Artificial Intelligence Project. Today, its AI Initiative focuses on developing secure, trustworthy, and energy-efficient AI across various applications, showcasing the laboratory’s commitment to advancing AI in fields ranging from biology to national security. The collaboration with TPC reinforces ORNL’s dedication to driving breakthroughs in large-scale scientific AI, in line with the global agenda of implementing AI ethics.
All institutions rely on IT to deliver services. Disruption, degradation, or unauthorised alteration of information and systems can affect an institution’s condition, core processes, and risk profile. Furthermore, organisations are expected to make quick decisions given the rapid pace of digital transformation, and data is a crucial resource for staying competitive in the face of this challenge.
Hence, data protection is paramount in safeguarding the integrity and confidentiality of this invaluable resource. Organisations must implement robust security measures to prevent unauthorised access, data breaches, and other cyber threats that could compromise sensitive information.
Prasert Chandraruangthong, Minister of Digital Economy and Society, supports the National Agenda of fortifying personal data protection, together with Asst Prof Dr Veerachai Atharn, Assistant Director of the National Science and Technology Development Agency, Science Park. Dr Siwa Rak Siwamoksatham, Secretary-General of the Personal Data Protection Committee, gave a welcome speech, noting that the training aims to bolster knowledge about data protection among the citizens of Thailand.
Data protection is the responsibility not only of organisations but also of individuals, Minister Prasert Chandraruangthong emphasised. Thailand has collaboratively developed a comprehensive plan of measures to foster a collective defence against cyber threats to data privacy.
The Ministry of Digital Economy and Society and the Department of Special Investigation (DSI) will expedite efforts to block illegal trading of personal information. Offenders will be actively pursued, prosecuted, and arrested to ensure a swift and effective response in safeguarding the privacy and security of individuals’ data.
This strategy underscores the government’s commitment to leveraging digital technology to fortify data protection measures and create a safer online environment for all citizens by partnering with other entities.
Further, many countries worldwide share these cybersecurity concerns. In Thailand’s neighbouring country Indonesia, the government has recognised that data privacy is a crucial aspect demanding attention. Indonesia acknowledges the paramount importance of safeguarding individuals’ privacy and has taken significant steps to engage stakeholders in a collaborative effort to fortify children’s online security.
Nezar Patria, Deputy Minister of the Ministry of Communication and Information of Indonesia, observed that children encounter abundant online information and content, which can lead to unwanted exposure and potential risks as artificial intelligence evolves.
Patria stressed the crucial role of AI, emphasising the importance of implementing automatic content filters and moderation to counteract harmful content. AI can be used to detect cyberbullying through security measures and by recognising the patterns of cyberbullying perpetrators. It can also identify perpetrators of online violence through behavioural detection in the digital space and enhance security and privacy protection. Moreover, AI can assist parents in monitoring screen time, ensuring that children maintain a balanced and healthy level of engagement with digital devices.
Conversely, generative AI technology such as deepfakes enables the manipulation of photo and video content, potentially leading to the creation of harmful material with children as victims. Patria urged collaborative discussions among all stakeholders involved to harness AI technology for the advancement and well-being of children in Indonesia.
In the realm of digital advancements, cybersecurity is now the priority. Through public awareness campaigns, workshops, and training initiatives, nations aim to empower citizens with the knowledge to identify, prevent, and respond to cyber threats effectively. This ongoing commitment to cybersecurity reflects a dedication to ensuring a secure and thriving digital future for citizens and the broader digital community.
The Western Australian government has unveiled a comprehensive set of measures aimed at reducing bureaucratic hurdles, alleviating work burdens, and fostering a conducive environment for educators to focus on teaching. The region’s Education Minister, Dr Tony Buti, spearheading this initiative, took into account the insights from two pivotal reports and explored the potential of AI tools to revamp policies and processes.
In the wake of an in-depth review into bureaucratic complexities earlier this year, Minister Buti carefully considered the outcomes of the Department of Education’s “Understanding and Reducing the Workload of Teachers and Leaders in Western Australian Public Schools” review and the State School Teachers’ Union’s “Facing the Facts” report. Both reports shed light on the escalating intricacies of teaching and the primary factors contributing to workloads for educators, school leaders, and institutions.
Embracing technology as a key driver for change, the government is contemplating the adoption of AI, drawing inspiration from successful trials in other Australian states. The objective is to modernise and enhance the efficiency of professional learning, lesson planning, marking, and assessment development. AI tools also hold promise in automating tasks such as excursion planning, meeting preparations, and general correspondence, thereby mitigating the burden on teachers.
Collaborating with the School Curriculum and Standards Authority, as well as the independent and Catholic sectors, the government aims to explore AI applications to streamline curriculum planning and elevate classroom teaching. The integration of AI is envisioned to usher in a new era of educational efficiency.
In consultation with unions, associations, principals, teachers, and administrative staff, the Department of Education has identified a range of strategies to alleviate the workload of public school educators immediately, in the short term, and in the long term.
Among these strategies, a noteworthy allocation of AU$2.26 million is earmarked for a trial involving 16 Complex Behaviour Support Coordinators. These coordinators will collaborate with public school leaders to tailor educational programs for students with disabilities and learning challenges.
Furthermore, a pioneering pilot project, jointly funded by State and Federal Governments, seeks to digitise paper-based school forms, reducing red tape and providing a consistent, accessible, and efficient method for sharing information online. Each digital submission is anticipated to save 30 minutes of staff time compared to its paper-based counterpart. Additionally, efforts are underway to simplify the process related to the exclusion of public school students while enhancing support to schools.
As part of the broader effort to support schools, the ‘Connect and Respect’ program, which outlines expectations for appropriate relationships with teachers, is set to be expanded. The expansion includes creating out-of-office templates and establishing boundaries on when it is acceptable to contact staff after working hours. The overarching goal is to minimise misunderstandings and conflicts, fostering a healthier work-life balance for teaching staff.
The Education Minister expressed his commitment to reducing administrative tasks that divert teachers from their core mission of educating students. Acknowledging the pervasive nature of this challenge, the Minister emphasised the government’s determination to create optimal conditions for school staff to focus on their primary roles.
In his remarks, the Minister underscored the significance of these initiatives, emphasising their positive impact in ensuring that teachers can dedicate their time and energy to helping every student succeed. The unveiled measures represent a pivotal step toward realising the government’s vision of a streamlined, technology-enhanced educational landscape that prioritises the well-being of educators and, ultimately, the success of students.
Liming Zhu and Qinghua Lu, leaders in the study of responsible AI at CSIRO and co-authors of Responsible AI: Best Practices for Creating Trustworthy AI Systems, delve into the realm of responsible AI through their extensive work and research.

Artificial Intelligence (AI), currently a major focal point, is revolutionising almost all facets of life, presenting entirely novel methods and approaches. The latest trend, Generative AI, has taken the helm, crafting content from cover letters to campaign strategies and conjuring remarkable visuals from scratch.
Global regulators, leaders, researchers and the tech industry grapple with the substantial risks posed by AI. Ethical concerns loom large due to human biases, which, when embedded in AI training, can exacerbate discrimination. Mismanaged data without diverse representation can lead to real harm, evidenced by instances like biased facial recognition and unfair loan assessments. These underscore the need for thorough checks before deploying AI systems to prevent such harmful consequences.
The looming threat of AI-driven misinformation, including deepfakes and deceptive content, is concerning for everyone, raising fears of identity impersonation online. The pivotal question remains: how do we harness AI’s potential for positive impact while effectively mitigating its capacity for harm?
Responsible AI involves the conscientious development and application of AI systems to benefit individuals, communities, and society while mitigating potential negative impacts, Liming Zhu and Qinghua Lu advocate.
The principles they advocate emphasise eight key areas for ethical AI practice. First, AI should prioritise human, societal, and environmental well-being throughout its lifecycle, exemplified by its use in healthcare or environmental protection. Second, AI systems should uphold human-centred values, respecting rights and diversity, though reconciling different user needs poses challenges. Ensuring fairness is crucial to prevent discrimination, as highlighted by critiques of technologies like Amazon’s facial recognition.
Moreover, maintaining privacy protection, reliability, and safety is imperative. Instances like Clearview AI’s privacy breaches underscore the importance of safeguarding personal data and conducting pilot studies to prevent unforeseen harms, as witnessed with the chatbot Tay generating offensive content due to vulnerabilities.
Transparency and explainability in AI use are vital, requiring clear disclosure of AI limitations. Contestability enables people to challenge AI outcomes or usage, while accountability demands identification and responsibility from those involved in AI development and deployment. Upholding these principles can encourage ethical and responsible AI behaviour across industries, ensuring human oversight of AI systems.
Identifying problematic AI behaviour can be challenging, especially when AI algorithms drive high-stakes decisions impacting specific individuals. An alarming instance in the U.S. resulted in a longer prison sentence determined by an algorithm, showcasing the dangers of such applications. Qinghua highlighted the issue with “black box” AI systems, where users and affected parties lack insight into and means to challenge decisions made by these algorithms.
Liming emphasised the inherent complexity and autonomy of AI, making it difficult to ensure complete compliance with responsible AI principles before deployment. Therefore, user monitoring of AI becomes crucial. Users must be vigilant and report any violations or discrepancies to the service provider or authorities.
Holding AI service and product providers accountable is essential in shaping a future where AI operates ethically and responsibly. This call for vigilance and action from users is instrumental in creating a safer and more accountable AI landscape.
Australia is committed to the fair and responsible use of technology, especially artificial intelligence. During discussions held on the sidelines of the APEC Economic Leaders Meeting in San Francisco, the Australian Prime Minister unveiled the government’s commitment to responsibly harnessing generative artificial intelligence (AI) within the public sector.
The DTA-facilitated collaboration showcases the Australian Government’s proactive investment in preparing citizens for job landscape changes. Starting a six-month trial from January to June 2024, Australia leads globally in deploying advanced AI services. This initiative enables APS staff to innovate using generative AI, aiming to overhaul government services and meet evolving Australian needs.
Vietnamese companies are actively implementing several measures to ready themselves for an artificial intelligence (AI)-centric future. According to an industry survey released recently, 99% of organisations have either established a robust AI strategy or are currently in the process of developing one.
Over 87% of organisations are categorised as either fully or partially prepared, with only 2% falling into the category of not prepared. This indicates a significant level of focus among C-suite executives and IT leadership, possibly influenced by the unanimous view among respondents that the urgency to implement AI technologies in their organisations has heightened in the past six months. Notably, IT infrastructure and cybersecurity emerged as the foremost priority areas for AI deployments. However, only 27% of organisations in Vietnam are fully prepared to deploy and leverage AI-powered technologies.
The survey included over 8,000 global companies and was created in response to the rapid adoption of AI, a transformative shift affecting nearly every aspect of business and daily life. The report emphasises the readiness of companies to leverage and implement AI, revealing significant gaps in crucial business pillars and infrastructures that pose substantial risks in the near future.
The survey examined companies on 49 different metrics across six pillars to determine a readiness score for each pillar, as well as an overall readiness score for the respondent’s organisation. Each indicator was weighted individually based on its relative importance in achieving readiness for the respective pillar. Organisations were classified into four groups—Pacesetters (fully prepared), Chasers (moderately prepared), Followers (limited preparedness), and Laggards (unprepared)—based on their overall scores.
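As an illustration of how such a scoring scheme can work, the Python sketch below computes a weighted score for one pillar and maps an overall score to the four groups. The specific weights and cut-off thresholds are assumptions made for the example; the report does not publish them here.

```python
# Minimal sketch of a weighted readiness score, assuming illustrative
# indicator weights and group thresholds (the survey's actual values
# are not published in this article).

def pillar_score(indicators: dict, weights: dict) -> float:
    """Weighted average of indicator scores (each 0-100) for one pillar."""
    total_weight = sum(weights.values())
    return sum(indicators[k] * weights[k] for k in indicators) / total_weight

def classify(overall: float) -> str:
    """Map an overall 0-100 readiness score to one of the four groups."""
    if overall >= 80:
        return "Pacesetter"   # fully prepared
    if overall >= 60:
        return "Chaser"       # moderately prepared
    if overall >= 40:
        return "Follower"     # limited preparedness
    return "Laggard"          # unprepared

# One hypothetical pillar ("strategy") with three unevenly weighted indicators:
strategy = pillar_score({"roadmap": 70, "budget": 55, "governance": 60},
                        {"roadmap": 0.5, "budget": 0.3, "governance": 0.2})
print(round(strategy, 1), classify(strategy))   # -> 63.5 Chaser
```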
Although AI adoption has been steadily advancing for decades, the recent strides in Generative AI, coupled with its increased public accessibility in the past year, have heightened awareness of the challenges, transformations, and new possibilities presented by this technology.
Despite 92% of respondents acknowledging that AI will have a substantial impact on their business operations, the survey raised concerns regarding data privacy and security. The findings showed that companies experience the most challenges when leveraging AI alongside their data, with 68% of respondents attributing this to data existing in silos across their organisations.
As per an industry expert, in the race to implement AI solutions, companies should assess where investments are needed to ensure their infrastructure can best support the demands of AI workloads. It is equally important for organisations to monitor the context in which AI is used, ensuring that factors such as ROI, security, and responsibility are addressed.
The country is working to foster a skilled workforce in AI to actively contribute to the expansion of Vietnam’s AI ecosystem and its sustainability. As per data from the World Intellectual Property Organisation (WIPO) last year, there were over 1,600 individuals in Vietnam who were either studying or engaged in AI-related fields. However, the actual number of professionals actively working in AI within Vietnam was relatively low, with only around 700 individuals, including 300 experts, involved in this specialised work. Considering the substantial IT workforce of nearly 1 million employees in Vietnam, the availability of AI human resources remains relatively limited.
To tackle this challenge, businesses can recruit AI experts globally or collaborate with domestic and international training institutions to enhance the skills of existing talent. They can also partner with universities to offer advanced degrees in data science and AI for the current engineering workforce, fostering synergy between academic institutions and industry demands.
A research initiative spearheaded by the University of Wollongong (UOW) has secured a substantial grant of AU$445,000 under the Australian Research Council (ARC) Linkage Projects Scheme. The primary focus of this project is to enhance the security protocols for unmanned aerial vehicles (UAVs), commonly known as drones, in the face of potential adversarial machine-learning attacks. The funding underscores the significance of safeguarding critical and emerging technologies, aligning with the strategic vision of the Australian Government.
Heading the project is Distinguished Professor Willy Susilo, an internationally recognised authority in the realms of cyber security and cryptography. Professor Susilo, expressing the overarching goal of the research, emphasised the deployment of innovative methodologies to fortify UAV systems against adversarial exploits targeting vulnerabilities within machine learning models.
Collaborating on this ambitious endeavour are distinguished researchers from the UOW Faculty of Engineering and Information Sciences. The team comprises Associate Professor Jun Yan, Professor Son Lam Phung, Dr Yannan Li, Associate Professor Yang-Wai (Casey) Chow, and Professor Jun Shen. Collectively, their expertise spans various domains essential to the comprehensive understanding and mitigation of cyber threats posed to UAVs.
Highlighting the broader implications of the project, Professor Susilo underscored the pivotal role UAV-related technologies play in contributing to Australia’s economic, environmental, and societal well-being. From facilitating logistics and environmental monitoring to revolutionising smart farming and disaster management, the potential benefits are vast. However, a significant hurdle lies in the vulnerability of machine learning models embedded in UAV systems to adversarial attacks, impeding their widespread adoption across industries.
The project’s core objective revolves around developing robust defences tailored to UAV systems, effectively shielding them from adversarial machine-learning attacks. The research team aims to scrutinise various attack vectors on UAVs and subsequently devise countermeasures to neutralise these threats. By doing so, they anticipate a substantial improvement in the security posture of UAV systems, thus fostering increased reliability in their application for transport and logistics services.
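For readers unfamiliar with the attack class the project targets, here is a generic numpy sketch of a fast gradient sign method (FGSM) style evasion attack on a toy logistic classifier. The model, data, and epsilon are invented for illustration; this is a textbook example of adversarial perturbation, not the project's threat model or code.

```python
# Toy FGSM-style attack: nudge an input in the sign of the loss gradient
# so a small, bounded perturbation degrades the model's prediction.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=16), 0.1     # toy logistic-model weights
x = rng.normal(size=16)             # stand-in for a sensor feature vector

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))   # P(class = 1)

# For logistic loss, d(loss)/d(input) = (p - y) * w. Stepping in the sign
# of this gradient pushes the score away from the true label while keeping
# each feature's change within +/- epsilon.
y_true = 1.0
grad_x = (predict(x) - y_true) * w
epsilon = 0.3
x_adv = x + epsilon * np.sign(grad_x)

print("clean score:      ", round(float(predict(x)), 3))
print("adversarial score:", round(float(predict(x_adv)), 3))
```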
Professor Susilo emphasised that the enhanced security measures resulting from this research would play a pivotal role in bolstering the widespread adoption of UAVs, particularly in supporting both urban and regional communities. This is particularly pertinent given the multifaceted advantages UAVs offer, ranging from efficiency in logistics to rapid response capabilities in disaster management scenarios.
The significance of the project extends beyond academic realms, with Deloitte Access Economics projecting profound economic and employment impacts. The Australian UAV industry is expected to generate a substantial 5,500 new jobs annually, contributing significantly to the nation’s Gross Domestic Product with an estimated increase of AU$14.5 billion by 2040. Additionally, the research outcomes are anticipated to yield cost savings of AU$9.3 billion across various sectors.
The ARC Linkage Program, which serves as the backbone for this collaborative initiative, actively promotes partnerships between higher education institutions and other entities within the research and innovation ecosystem. Noteworthy partners in this venture include Sky Shine Innovation, Hover UAV, Charles Sturt University, and the University of Southern Queensland, collectively contributing to the multidimensional expertise required for the project’s success.
The UOW-led project represents a concerted effort to fortify the foundations of UAV technology by addressing critical vulnerabilities posed by adversarial machine-learning attacks. Beyond the academic realm, the outcomes of this research hold the promise of reshaping Australia’s technological landscape, ushering in an era of increased reliability, economic growth, and job creation within the burgeoning UAV industry.
The National Research and Innovation Agency (BRIN), through its Centre for Artificial Intelligence and Cyber Security Research, is developing a research project implementing Artificial Intelligence (AI) algorithms for diagnosing malaria. The research aims to design and build a computer-based diagnosis system enriched with AI algorithms to determine the health status of patients with respect to malaria.

Through AI integration, the system can identify whether a patient is affected by malaria and continue the diagnostic process by identifying the species of plasmodia and the life stage attacking red blood cells.
This step enhances the accuracy of malaria diagnosis and opens opportunities for developing more precise and customised treatments.
Artificial intelligence significantly contributes to the efficiency of diagnostic processes, providing more accurate results and enabling faster and more effective treatment for patients infected with malaria.
AI technology in the malaria diagnosis process reflects a development in medicine and health technology, expanding the potential to detect and manage infectious diseases.
Thus, this research not only reflects scientific progress but also has the potential to make a significant positive impact on global efforts to combat infectious diseases, particularly malaria.
Currently, malaria is predominantly a concern in tropical and subtropical regions. Three diagnostic methods exist to identify plasmodium parasites in the blood: Rapid Diagnostic Test, Polymerase Chain Reaction, and peripheral blood smear microphotograph, which has become the standard.
The Head of the Centre for Artificial Intelligence and Cyber Security Research, Anto Satriyo, emphasised that the morphology of plasmodia changes over time, so the diagnostic system is built to identify each life stage of each parasite. “We also developed a system to automate the diagnosis using Arduino, as hundreds of fields of view (thick smear: 200 leucocytes, thin smear: 500-1,000 erythrocytes) need to be analysed for the final diagnosis decision, particularly during Mass Blood Surveys in malaria-prone areas of Eastern Indonesia,” he explained.
The previously developed Thick Blood Smear Microphotograph CAD Malaria system begins its diagnostic process by reading thick blood smear slides in the first field of view. After the image is captured by the camera, the application processes it to determine the presence of malaria parasites. “The image is then saved as data. After completion, the motor control system will shift the slide to the right to obtain the second, adjacent field of view. The next step is taking images of the second field of view for analysis and storage. This process is repeated until the minimum number of fields of view has been diagnosed,” he asserted.
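A minimal Python sketch of that capture-analyse-shift loop follows. The camera, stage motor, and classifier objects are hypothetical stand-ins (the real system drives the slide stage via an Arduino and a trained parasite classifier), and the stopping count is illustrative rather than the actual protocol threshold.

```python
# Sketch of the capture-analyse-shift loop described above, with stub
# hardware interfaces. All classes here are illustrative stand-ins.

class Camera:
    def capture(self):
        return b"image-bytes"         # stand-in for one microphotograph

class Stage:
    def shift_right(self):
        pass                          # stand-in for the Arduino motor step

class Classifier:
    def detect(self, image):
        return []                     # stand-in: parasites, species, life stage

def scan_slide(camera, stage, classifier, min_fields=100):
    """Capture and analyse successive fields of view, shifting the slide
    right after each capture, until the minimum field count is reached."""
    results = []
    for _ in range(min_fields):
        image = camera.capture()                   # read the current field
        results.append(classifier.detect(image))   # analyse and store verdict
        stage.shift_right()                        # move to the adjacent field
    return results

fields = scan_slide(Camera(), Stage(), Classifier())
print(len(fields), "fields of view analysed")
```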
Experimental data were collected from various regions in Indonesia by the Malaria Laboratory of the Eijkman Centre for Molecular Biology Research while the diagnosis system was built. “It is envisioned that the system can be used to assist field officers so that diagnoses can be made faster and more accurately,” he concluded.
AI technology in malaria diagnosis represents a significant breakthrough in medical efforts to improve the accuracy, efficiency, and speed of the diagnostic process for this disease. Beyond clinical benefits, the development of this system also drives advancements in knowledge and technology in the fields of artificial intelligence and medicine.
By continuing to explore the potential of this technology, researchers will not only benefit malaria diagnosis but also contribute to understanding and managing other infectious diseases. As a result, applying artificial intelligence in malaria diagnosis opens the path toward more advanced, responsive, and targeted healthcare.