Peel away layers from a mummy still inside its casket, going down to the bones, seeing the amulets around its neck in astonishing detail. Conduct a virtual autopsy of a person killed in a traffic accident. Or solve a 5500-year-old murder mystery.
All this through data visualisation combined with an interactive, multi-touch, intuitive interface so easy to understand that even a child could use it.
Dr. Anders Ynnerman demonstrated the above and more during a presentation at EmTech Asia 2017. He spoke about a new paradigm of science communication: combining scientists’ visual tools with data exploration and presenting them to the public, so that anyone can experience the magic of being a scientist. These installations are being used in several leading museums in North America and Europe.
Dr. Ynnerman studies fundamental aspects of computer graphics and visualisation, in particular dealing with large scale and complex data sets. He is head of the division for Visual Information Technology and Applications (VITA) at the University of Linköping, Norrköping, Sweden and director at the Norrköping Visualization Center. He has held both positions since 1999. He is also one of the co-founders of the Center for Medical Image Science and Visualization.
From 1997 to 2002 he directed the Swedish National Supercomputer Centre, and from 2002 to 2006 he directed the Swedish National Infrastructure for Computing (SNIC). Dr. Ynnerman is a member of the Swedish Royal Academy of Engineering Sciences and a board member of the Swedish Foundation for Strategic Research.
OpenGov sat down with Dr. Ynnerman, learning about the challenges of rendering and interactivity and exciting potential applications of his work in designing interfaces for autonomous systems and decision support systems.
What are the challenges faced in this kind of visualisation?
The first challenge is the data size. The datasets are very large. To enable interactivity, you need to render the data and generate the pictures very fast. There can be no lag at all. We are doing 60 fps, generating each image from scratch, from raw data. There is a lot of research behind being able to do that in real time. Thanks to the development of the graphics processing units we are seeing on the market, we can do it now.
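As a sketch of what "generating each image from scratch, from raw data" involves, here is a minimal direct-volume-rendering example: one ray per pixel cast straight through a small synthetic volume, reduced with a maximum-intensity projection. Real systems like Dr. Ynnerman's march arbitrary rays on the GPU with transfer functions and shading; this only illustrates the core idea, and the volume here is invented for the example.

```python
import numpy as np

# Hypothetical stand-in for a CT volume: a 64^3 density field
# containing a single bright sphere.
n = 64
z, y, x = np.mgrid[:n, :n, :n]
volume = ((x - n/2)**2 + (y - n/2)**2 + (z - n/2)**2 < (n/4)**2).astype(float)

# Simplest possible volume rendering: one axis-aligned "ray" per
# (y, x) pixel, keeping the maximum density encountered along the
# ray (a maximum-intensity projection).
image = volume.max(axis=0)

print(image.shape)   # one pixel per ray: (64, 64)
```

A real renderer repeats this for millions of rays per frame, 60 times a second, which is why GPU hardware is essential.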
If you want to put it out into the public domain where you have inexperienced users, like we are doing, it presents another challenge. You need to have an interface that makes it possible to have easy interactions.
The final challenge is how do you tell the stories around the data. People get excited about the content that we have but it is primarily the story behind all the data and the technology. It’s a new way of communicating science in terms of semi-interactive stories and reaching a new level of engagement from the audience.
What kind of computing resources are required?
That’s the thing. It’s not very much. What I showed here is just using the standard graphics processing unit that you can buy for a few hundred dollars. The CT (Computed Tomography) scanner is the expensive part. Everything else is very cheap.
CT scanners have been around since the 70s. What advances since then have made such visualisations possible?
In 1972, when the first CT scanner came out, you had images with a resolution of 32×32 px, and you just had one slice. Now, in the 40 years that have passed since, we have resolutions of 1 mm, with 25,000 slices of data coming out of the machine in 2 seconds.
A lot of the algorithmic work, the mathematics is much better, increasing the scanning speed. Another reason is the detector itself. The detector is much more sensitive. You are down to the level where you can count individual photons.
The detector in the CT scanner we used is about 60 cm wide. There is a ton of equipment in the machine, and it rotates at 1,000 revolutions per minute with no vibrations, nothing.
Basically, it is a lot of engineering on mechanics, much better detectors and computational algorithms.
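A quick back-of-the-envelope calculation shows what those figures mean for data volume. The 512×512 in-plane grid and 2 bytes per voxel below are assumptions (typical modern CT values, not stated in the interview):

```python
# Rough data-volume comparison between the 1972 scanner (one 32x32
# slice) and a modern scan (25,000 slices, as quoted above).
bytes_per_voxel = 2                          # typical CT sample size (assumed)

old = 32 * 32 * 1 * bytes_per_voxel          # 1972: a single 32x32 slice
new = 512 * 512 * 25_000 * bytes_per_voxel   # modern: assumed 512x512 in-plane

print(old, "bytes")      # ~2 KB
print(new / 1e9, "GB")   # ~13 GB per scan
```

Roughly a seven-order-of-magnitude jump in raw data per scan, which is what makes the rendering and interactivity challenges above so demanding.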
Usually we see advanced visualisation techniques being used by researchers. How did you come up with the idea of public-oriented use?
We have a science centre that is very closely related to the research centre. We saw the opportunity to take real scientific data and scientific methods and let people play with them. And people were fascinated by that, in the medical domain and beyond.
We did the same thing with astronomical data. We let people play with it. We tell stories about all kinds of data that have scientific relevance. This is the disruption in science communication. Instead of making animations, simulations, media productions, we do the storytelling using the data.
How do you see this interactive communication of science evolving?
I think there will be more and more work on increasing engagement. You can even turn visitors into producers themselves, so that as a visitor to a museum they leave behind a legacy. It could be a crowdsourcing of the discovery process.
We are going into two different directions, each with its own challenges.
One is going down to the micro-level, so that we can look at more and more detail, at things like human cells. So that you become an explorer of the data and the molecular structure inside a cell.
The other dimension we are looking at is time itself. What I showed now is static data. But if you start looking at the time resolution, then you have further interesting problems in terms of data handling, but you can also tell very exciting stories. If you want to visualise things like blood flow dynamically over time, then you have to replace, say, 20 GB of scan data for each time-step in the animation.
What are the other applications of this technology?
We see applications in virtually all scientific domains. We have a lot of people that are contacting us about visual interfaces for data analysis. People are dealing with data that is too large, with too many dimensions. We can help them to reduce and make sense of the data, so that they can make informed decisions.
It is also relevant to things like autonomous systems: what kind of visual interface do you need for an autonomous system in the future?
Let me put it this way. 10 or 15 years ago, we had something called smart homes. A smart home was a place where you had display systems in each of the rooms telling you what the temperature was, the humidity, and many other details.
I think that was a bad idea. Because we do not need all that information. Visualisation is all about reducing the amount of information.
In a smart home of the future, I would like to have some sort of representation, maybe a hologram. When I come back home, the hologram has a human dialogue with me. It tells me that there was a water leak, and a plumber was called to fix it.
I want to enable the use of human-level communication interfaces. I want to be able to offload your cognitive load. I call these systems cognitive companions. Because they are your companion, they help you. The rooms with all the display systems don't help you. They are stressful.
This is true for all the autonomous systems we are dealing with: we need systems that take away complexity, so that you can focus on the things that are important to you.
Decision support systems have the same problem. They are overloaded with information. You need to reduce that in such a way that you can trust the system and you can feed back your insights into it.
People will not drive cars anymore. The car will talk to you and ask you where you want to go and which road you want to take. Or take decision support when you are investing in the stock market. These systems are pushing things down to shorter and shorter time-frames.
But at least for the foreseeable future, there will be humans involved in all of these decision support systems. You cannot go any shorter than a minute, because then you cannot have a human decision. A minute is about the time it takes for us to cognitively process inputs.
Then you need appropriate human interfaces to all these systems. The best human interfaces we have are the visual ones; the human eye is the best way of consuming information.
Seven intelligent robots have been installed in the wards of Yishun Community Hospital (YCH) to welcome patients and bring supplies to the bedside. These brand-new Temi Robots, known as Angel, were introduced to support nursing care so that nurses could focus their time and energy on clinical tasks while still giving patients a personal and meaningful touch.
These robots are loaded with patient education materials that patients and their caregivers can easily access, in addition to providing announcements and reminders throughout the day in all four major languages.
They also have a variety of features like games and entertainment, teleconference tools, and translation capabilities. YCH hopes to further improve patient engagement and satisfaction in its wards with the new addition.
In previous years, YCH also ran a pilot project using Nao Robots to assist dementia patients in their rehabilitation. Robot Therapy, which was started by the staff at YCH in 2018, is now a part of the therapy-related services offered there.
YCH, which is conceived of as a healing space for patients, offers intermediate care for recovering patients who do not require the intensive care services of an acute-care hospital. With rehabilitation and therapy at the heart of the hospital’s mission, the team was eager to investigate the potential of the innovation, Robot Therapy.
Because they can take over a wide range of tasks that add little clinical value, hospital robots offer a reliable solution, freeing up doctors, nurses, and surgeons to focus on higher-value work. Robots have become an integral part of the healthcare industry, with many hospitals now using them to perform both surgical and administrative tasks.
In addition, prior to the arrival of Nao Robots in Singapore, a few local nursing homes used Paro, a robot that mimics the appearance, movement, and sounds of a baby seal. The therapeutic robot seal’s use is like animal therapy in that the robot helps to calm elderly people who have dementia or a loss of cognitive function.
The Nao robot, on the other hand, came with higher expectations: it can express emotions like laughter or sadness during interactions; it can interact and communicate with patients in different languages; and it has optic, audio, and impact sensors and motors to detect surroundings, interpret detection, and activate programmed responses.
Various interaction and language modes can be programmed into the Nao robot. The YCH Robot Therapy team took advantage of this by incorporating the robot into specific therapy sessions. This increased efficiency freed up nursing time, which could then be used for other care activities. Nao robot therapy sessions were trialled with 48 patients from the Dementia ward in October 2018.
Patients with Behavioural and Psychological Symptoms of Dementia (BPSD) require more care and attention, so this was chosen as the pilot ward. By introducing the Nao robot, YCH increased patient engagement, motivated patients to take part in social activities, and shortened the time required for social activities so that caregivers could concentrate on other care-related tasks.
The implementation process was divided into three stages: training staff, selecting suitable patients and assessing seniors who participated in the Robot Therapy programme using the Observed Emotion Rating Scale.
SingHealth asserts that the COVID-19 pandemic, which hastened the adoption of these solutions and accelerated the digital transformation of healthcare systems globally, has sparked tremendous interest in digital technology and virtual health solutions.
In anticipation of this continuing upward trend, a group of clinician innovators from SingHealth sought to ascertain whether digital interventions are more affordable and provide patients with greater value and benefits, and discovered that this may not always be the case for some eye conditions.
The Indian Space Research Organisation’s (ISRO) Polar Satellite Launch Vehicle (PSLV) has launched nine satellites, including eight nanosatellites, into space from the first launch pad at the Satish Dhawan Space Centre in Andhra Pradesh.
The 44-metre-long rocket’s primary payload is the Earth Observation Satellite-6 (EOS-6), or Oceansat-3, a third-generation satellite to monitor oceans. It is a follow-up to Oceansat-1 (IRS-P4) and Oceansat-2, launched in 1999 and 2009, respectively. Oceansat-3 will provide ocean colour, sea surface temperature, and wind vector data for oceanography, climatology, and meteorological applications.
Oceansat-3 was placed in a polar orbit at a height of about 740 kilometres above sea level. It weighs approximately 1,100 kilogrammes, only slightly heavier than Oceansat-1, but for the first time in this series it houses three ocean-observing sensors. These include an Ocean Colour Monitor (OCM-3), a Sea Surface Temperature Monitor (SSTM), and a Ku-band scatterometer (SCAT-3). There is also an ARGOS payload, a press release mentioned.
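As a rough sanity check on that orbit, the period implied by a 740 km circular orbit follows from Kepler's third law, T = 2π√(a³/μ). The Earth constants below are standard values, not from the article:

```python
import math

# Back-of-the-envelope orbital period for a ~740 km circular orbit,
# using Kepler's third law: T = 2*pi*sqrt(a^3 / mu).
mu = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2 (standard)
r_earth = 6_371_000    # mean Earth radius, m (standard)
a = r_earth + 740_000  # semi-major axis of the circular orbit, m

period_s = 2 * math.pi * math.sqrt(a**3 / mu)
print(round(period_s / 60, 1), "minutes per orbit")   # ~99.5 minutes
```

Roughly 100 minutes per revolution, typical of polar Earth-observation orbits at this altitude.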
The OCM-3, with a high signal-to-noise ratio, is expected to improve accuracy in the daily monitoring of phytoplankton. This has a wide range of operational and research applications, including fishery resource management, ocean carbon uptake, harmful algal bloom alerts, and climate studies. The SSTM will provide ocean surface temperature, a critical ocean parameter for forecasts ranging from fish aggregation to cyclone genesis and movement. Temperature is also a key parameter for monitoring the health of coral reefs and, if needed, providing coral bleaching alerts. The Ku-band pencil-beam scatterometer will provide a high-resolution wind vector (speed and direction) at the ocean surface, which will be useful for seafarers, including fishermen and shipping companies. Temperature and wind data are also particularly important for ocean and weather models to improve their forecast accuracies.
ARGOS is a communication payload jointly developed with France and it is used for low-power (energy-efficient) communications including marine robotic floats (Argo floats), fish-tags, drifters, and distress alert devices valuable in search and rescue operations.
The Minister of State (Independent Charge) for Science and Technology, Jitendra Singh, stated that ISRO will continue to maintain the orbit of the satellite and its standard procedures for data reception and archiving. Major operational users of this satellite include Ministry of Earth Sciences (MoES) institutions such as the Indian National Centre for Ocean Information Services (INCOIS) and the National Centre for Medium Range Weather Forecasting (NCMRWF).
INCOIS has also established a state-of-the-art satellite data reception ground station within its campus with technical support from the National Remote Sensing Centre (ISRO-NRSC). Singh asserted that ocean observations such as this will serve as a solid foundation for India’s blue economy and polar region policies. A representative from MoES noted that the launch of Oceansat-3 is significant as it is the first major ocean satellite launch from India since the initiation of the UN Decade of Ocean Science for Sustainable Development (UNDOSSD, 2021-2030).
The Indian Space Research Organisation is the national space agency of India, headquartered in Bengaluru. It operates under the Department of Space, which is overseen by the country’s Prime Minister.
A Hong Kong Baptist University (HKBU) collaborative research team has synthesised a nanoparticle named TRZD that can perform the dual function of diagnosing and treating glioma in the brain. It emits persistent luminescence for the diagnostic imaging of glioma tissues in vivo and inhibits the growth of tumour cells by aiding the targeted delivery of chemotherapy drugs.
The nanoparticle offers hope for the early diagnosis and treatment of glioma, especially cerebellar glioma, which is even harder to detect and cure with existing methods. The research results have been published in Science Advances, an international scientific journal.
Limitations of existing diagnostic and therapeutic approaches
Glioma is the most common form of malignant primary brain tumour, accounting for roughly one-third of all brain tumours. While magnetic resonance imaging (MRI) is commonly used to diagnose glioma, the technology lacks sensitivity. Cerebellar glioma, a relatively rare brain tumour, is even harder to detect with MRI. To facilitate early detection and treatment, an alternative method with improved sensitivity and precision is needed to diagnose glioma.
A chemotherapy agent called Doxorubicin is an effective treatment for glioma. However, its application may also damage normal cells, and it is associated with a range of side effects. To enhance doxorubicin’s clinical efficacy and minimise its side effects, a novel approach is needed to apply the drug to tumour cells in a more targeted manner.
In response to the diagnostic and therapeutic needs of glioma, a research team co-led by Dr Wang Yi, Assistant Professor of the Department of Chemistry at HKBU, and Professor Law Ga-lai, Professor of the Department of Applied Biology and Chemical Technology at the Hong Kong Polytechnic University, has synthesised a novel near-infrared (NIR) persistent luminescence nanoparticle called TRZD, which can play a dual role in diagnostic imaging and as a drug carrier for glioma.
An imaging probe for glioma diagnosis
The research team evaluated the efficacy of TRZ (i.e., TRZD without doxorubicin) in diagnostic imaging for glioma with a mouse model. First, TRZ particles were excited by UV light to initiate luminescence. Mice with tumour tissues injected into their cerebrum and cerebellum were then treated with TRZ. Over the next 24 hours, TRZ luminescence was detected at the tumour sites of the mice.
However, when the same experiment was conducted with TRZ without T7 peptides, and TRZ without both the red blood cell membrane coating and T7 peptides, no luminescence was detected at the tumour sites of the mice. The results show that the red blood cell membrane coating can prolong the function of TRZ by stabilising the nanoparticle, and it can slow down its natural uptake by the human body.
The research team further evaluated the anti-tumour efficacy of TRZD using a group of mice whose cerebrum and cerebellum had been injected with tumour tissues.
After TRZD was applied for 15 days, the average diameter of their tumours was reduced to 1 mm. They also survived 20 days longer on average compared to the control group, which had not received TRZD. In addition, cell death was observed in the tumour region but not in normal brain tissue.
The results indicate that TRZD’s therapeutic effect on glioma has good selectivity because doxorubicin is brought specifically to tumour cells due to T7 peptide’s strong affinity with tumour cells’ surface receptors and its ability to penetrate the blood-brain barrier. As a result, doxorubicin can be applied in a more targeted manner, and hopefully, its side effects can be minimised with reduced drug dosage.
The team concluded that the nanotechnology demonstrates promising potential, and it could be developed into a new generation of anti-glioma drugs that can perform the dual function of diagnosis and treatment. It also offers hope for the development of treatment protocols for other brain diseases.
The Vietnam Information Security Association (VNISA) surveyed 135 organisations and enterprises in Vietnam on ensuring information security. One in four organisations and businesses had their systems interrupted or attacked in 2022, while 76% of organisations and businesses lack sufficient staff for information security.
The information was revealed by former Deputy Minister of the Ministry of Information and Communications (MIC), Nguyen Thanh Hung, who is chair of VNISA, during a plenary session at an international workshop during the Vietnam 2022 Information Security Day.
The survey found that 58% of organisations have doubts about technology and 47% about security holes. Around 68% of organisations and businesses said they still don’t have enough money to invest in information security annually.

At the workshop, Tran Dang Khoa, the Deputy Head of the Authority of Information Security, said that in the last 11 months the agency has recognised, warned about, and instructed companies on how to handle 11,212 cyberattacks. The proportion of information systems compliant with the new security levels stands at 54.8%. One of the key tasks of the agency in 2023 is submitting information to the Prime Minister for the issuance of a directive on legal compliance and security.
The workshop, organised by VNISA and sponsored by MIC, addressed “safe” digital transformation. MIC’s Deputy Minister, Nguyen Huy Dung, stated that ensuring safety in cyberspace is the task of all agencies, units, and people. Dung stressed that digital transformation is a national long-term programme. It means bringing people’s and businesses’ activities into a digital environment. It is necessary to protect more than 3,000 information systems of the state’s agencies, as well as activities in cyberspace of nearly one million businesses, five million business households, 26 million households, and 100 million people.
Dung noted that ensuring safe cyberspace and safety for organisations and people in cyberspace is the responsibility of all agencies, organisations, and people, with the principle ‘like cyberspace, like the real world’. The agencies in charge of certain fields in real life will also be in charge of those fields in the virtual environment, he said.
In October, Prime Minister Pham Minh Chinh issued Directive No. 18/CT-TTg on accelerating the implementation of activities to respond to cybersecurity incidents in Vietnam. The directive states that the government will pay more attention to reviewing, detecting, and fixing vulnerabilities and weaknesses. It will proactively monitor and detect any network information insecurity risks to promptly handle incidents. It will strictly implement regulations on reporting online information security incidents.
As OpenGov Asia reported, the directive describes cybersecurity as an important, cross-cutting pillar in the creation of digital trust. Its promotion will protect the country’s prosperous development in the digital era as the country attempts comprehensive national digital transformation. Chinh urged stakeholders to thoroughly grasp the contents of the Directive and devise measures to address and timely handle cybersecurity incidents. Stakeholders include ministers and heads of ministerial-level agencies, among others.
The Victoria University of Wellington’s division of Science, Health, Engineering, Architecture, and Design Innovation (SHEADI) will inaugurate a Centre of Data Science and Artificial Intelligence in the first half of 2023.
According to a statement from the University, the centre will offer areas of expertise in modelling and statistical learning; evolutionary and multi-objective learning; deep learning and transfer learning; image, text, signal, and language processing; scheduling and combinatorial optimisation; and interpretable AI/ML learning.
These technological themes will be applied across a wide range of areas including primary industry, climate change and environment; health, biology, medical outcomes; security, energy, high-value manufacturing; and social, public policy, and ethics applications. On top of traditional research, the centre will also establish a pipeline of scholarships/internships for Maori students, train early career researchers, and focus on industry, intellectual property, and commercialisation.
The centre will build on the current success and international leadership in this space at the University, the Pro Vice-Chancellor of the division, Ehsan Mesbahi, stated. The institute is continuing to grow its national and international partnerships to create local and global value. The centre will provide a distinctive identity for the growing excellence and innovation in data science and AI research at the University, capabilities which domestic and global partners are increasingly demanding across a vast array of application domains.
In May, the University announced it would offer the first undergraduate major in Artificial Intelligence in the country. It provides students with knowledge of AI concepts, techniques, and tools. They learn how to apply that knowledge to solve problems, combined with programming skills that will enable them to build software tools incorporating AI technology that will help shape the future.
Students studying AI at the University are taught by academics from its internationally renowned AI/ML research group, which is one of the largest in the southern hemisphere. The major is designed to open doors for graduates to opportunities nationally and around the world. There has been an increase in the adoption of AI technologies globally, and a growing demand for people who can apply AI techniques to address a wide range of problems, which the University aims to address.
After completing their degree, graduates will have a wide variety of career options, such as AI scientist, business consultant, AI architect, data analyst, machine learning engineer, and robotic scientist among others. They will also have the option to further their study through the University’s Master of Artificial Intelligence.
OpenGov Asia reported earlier that New Zealand’s Education Technology (EdTech) is set to become one of the country’s key industries. Worth NZ$ 173.6 million in 2020, EdTech software is poised to grow to NZ$ 319.6 million by 2025. At the heart of the digital transformation of education technology has been the pandemic. COVID-19 is seen as the driving force behind the digital transformation of learning, permanently changing the way education is consumed and delivered — right from preschool through post-tertiary education and lifelong learning. The global EdTech market size was valued at US$ 254.8 billion in 2021. Experts believe the market will reach US$ 605.4 billion by 2027.
The Deputy Premier and Minister for Regional NSW recently unveiled Our Vision for Regional Communities – a new strategy to ensure regional NSW remains the best place to live, work, play and raise a family.
He noted that the release is a vision for the regional NSW we are building with local communities, backed by real action that will make a real difference in people’s everyday lives. Over the past decade, billions have been invested in the infrastructure NSW needs and in growing regional economies.
The vision shows how the Government plans to build on that foundation and ensure regional communities have access to the education and health services they deserve and attract the workforce needed to deliver these services. It will ensure families can find a home by tackling housing pressures and delivering the infrastructure and services they need in their local community, he added.
The strategy’s launch was also used to announce:
- A new welcome experience to be piloted across eight regional locations to support key workers to relocate to the regions and put down roots;
- An AU$5 million investment in scholarships to upskill existing health workers and attract new staff to regional communities;
- A trial of contactless payments on regional bus services in Dubbo and Bathurst to make services easier to use.
Our Vision for Regional Communities is backed by a detailed three-year action plan that outlines key initiatives that will bring the vision to life. Initiatives already underway under the plan include:
- An AU$2.4 billion investment in strengthening the regional health workforce including innovative approaches to training and incentives;
- An AU$174 million investment in key worker housing that will deliver hundreds of new homes for teachers, police, and health workers over the next four years;
- An AU$98 million investment in a new AU$250 travel card for regional apprentices and university students to ease the cost of travel for training and classes;
- An AU$160 million investment in social and sporting infrastructure, and community programs like bike paths, playgrounds, and community centres through the Stronger Country Communities Fund;
- An AU$59 million investment in the next generation, including AU$40 million for local initiatives shaped by youth for youth.
Our vision recognises that regional communities are diverse and need local solutions that work for them. Our Vision for Regional Communities and Action Plan 2023-2025 is a future-focused strategy with key priorities across healthcare, education, communities and places, and regional homes.
Connectivity is the main pillar of the vision. Through the Vision, the Government will support high-quality physical and digital connectivity to enable access to quality services, delivered more efficiently, and with greater equity.
The global smart infrastructure market size was US$77.66 billion in 2020; it is projected to grow from US$97.20 billion in 2021 to US$434.16 billion in 2028 at a CAGR of 23.8% during the 2021-2028 period. As a result of the COVID-19 pandemic, the smart infrastructure market witnessed a negative demand shock across all regions.
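The stated growth figures can be sanity-checked directly: a compound annual growth rate (CAGR) is (end/start)^(1/years) − 1.

```python
# Verify the quoted CAGR for the smart infrastructure market:
# US$97.20bn in 2021 growing to US$434.16bn in 2028 (7 years).
start, end, years = 97.20, 434.16, 7

cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")   # ~23.8%, matching the figure quoted above
```

The arithmetic confirms the 23.8% CAGR stated in the article.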
Smart infrastructure projects require funding from public and private resources. These advanced infrastructure models use ICTs services to communicate or optimise resources. Due to constant interaction, big data plays a vital role in developing and building a smart infrastructure.
With the introduction of its Kooha Version 2.0 during the recently held 2022 National Science and Technology Week celebration, the Department of Science and Technology-Advanced Science and Technology Institute (DOST-ASTI) showered photo enthusiasts with helpful tips on interactive smartphone photography.
Kooha is a photo-sharing app derived from the Filipino word “kuha,” which means “to take.” It capitalises on the Philippines’ status as “the selfie capital of the world,” with thousands of photographs shared on various social media platforms every day.
With the help of the camera app Kooha, users can take pictures that go beyond simple snapshots. Multiple sensors are embedded in mobile devices; Kooha collects data from these sensors while users snap pictures and embeds it in the image.
Users can quickly learn the location where the photo was shot, the background noise when they took a selfie, the network provider’s signal strength, the device battery level, camera settings, environment sensor data, motion sensor readings, and more. All the photographs captured by the app are shared on the Kooha Community. Users’ photos become more than just images when they post them to the community; they become contributions.
When the sensor data from the images is combined with the large pool of sensor data from other users, the data becomes societally important. The data can assist data scientists in generating insights and fresh knowledge that can be used by decision-makers across the country. Kooha is a free app that can be downloaded from Google Play.
According to the DOST-ASTI, Kooha uses the built-in sensors of a mobile device to gather real-time data like sound level, temperature, and humidity and embeds it into a snapshot, making it particularly valuable in research operations across industries thanks to the fresh knowledge it produces.
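The core idea — pairing a snapshot with the sensor readings captured at the same moment — can be sketched in a few lines. This is only an illustration: the field names, values, and sidecar-file approach below are hypothetical (the real app embeds the data in the image itself and shares it with its community platform).

```python
import json
import time

# Minimal sketch of the Kooha idea: record the sensor readings taken
# at the moment a photo is shot, here as a JSON sidecar file next to
# the photo. All field names and values are hypothetical examples.
def tag_snapshot(photo_path, sensors):
    record = {
        "photo": photo_path,
        "timestamp": time.time(),   # when the snapshot was taken
        "sensors": sensors,
    }
    sidecar = photo_path + ".json"
    with open(sidecar, "w") as f:
        json.dump(record, f, indent=2)
    return sidecar

path = tag_snapshot("selfie_001.jpg", {
    "sound_level_db": 62.4,        # background noise
    "temperature_c": 31.0,         # environment sensor
    "humidity_pct": 78,
    "signal_strength_dbm": -85,    # network provider signal
})
print(path)   # selfie_001.jpg.json
```

Aggregated across many users, records like this become the pool of sensor data the article describes data scientists drawing insights from.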
It added that even more useful Kooha features include the ability to contribute images to the community section, rate shared photos based on “awards” from other users, map the locations of pinned photos, and unlock “badges” by completing specific “achievements.”
As a useful tool application, Kooha reflects the reality that science and the arts can collaborate effectively to produce meaningful results. In addition, the DOST-ASTI’s Quality Management System (QMS) was recertified in accordance with the ISO 9001:2015 standard.
Director of DOST-ASTI Franz A. de Leon stated that the ISO recertification demonstrates the DOST-ASTI’s dedication to continuously enhancing its operations and assuring successful service delivery – bringing science and technology closer to the people.
He added that their partners and stakeholders can be confident that the institute will constantly offer high-quality products and services because they adhere to the quality policy of developing relevant, timely, and impactful ICT- and electronics-based innovations.
The ISO certificate was the result of the DOST-ASTI management and staff’s collaborative efforts to expand its technologies and ensure the smooth execution of its mandate and functions. Reviewing and improving processes is critical to achieving the agency’s purpose of contributing to the achievement of national development priorities and the growth of Philippine firms through the provision of creative solutions centred on ICT and electronics technology.
This is DOST-ASTI’s second recertification since transitioning to the ISO 9001:2015 standard in 2018. Subject to regular surveillance assessments, the certificate is valid until November 2025.