Researchers from the California Institute of Technology (Caltech) have developed a deep-learning technique, known as Neural-Fly, that could help flying robots known as “drones” adapt to new and changing wind conditions.
Drones today are typically flown under controlled conditions, without wind, or operated by people using software or remote controls. The flying robots have been trained to take off in formation in the open air, although such flights are usually undertaken under perfect conditions.
However, for drones to autonomously perform important but mundane duties, such as package delivery or airlifting injured drivers from traffic accidents, they must be able to adapt to real-time wind conditions.
With this, a team of Caltech engineers has created Neural-Fly, a deep-learning technology that enables drones to adapt to new and unexpected wind conditions in real-time by merely adjusting a few essential parameters. Neural-Fly is discussed in newly published research titled “Neural-Fly Enables Rapid Learning for Agile Flight in Strong Winds” in Science Robotics.
The issue is that the direct and specific effect of various wind conditions on aircraft dynamics, performance, and stability cannot be accurately characterised as a simple mathematical model.
– Soon-Jo Chung, Bren Professor of Aerospace and Control and Dynamical Systems and Jet Propulsion Laboratory Research Scientist
Chung added that they employ a combined approach of deep learning and adaptive control that enables the aircraft to learn from past experiences and adapt to new conditions on the fly, with stability and robustness guarantees, as opposed to attempting to qualify and quantify each effect of the turbulent and unpredictable wind conditions they frequently encounter when flying.
Neural-Fly was evaluated at Caltech’s Center for Autonomous Systems and Technologies (CAST) utilising its Real Weather Wind Tunnel, a 10-foot-by-10-foot array of more than 1,200 tiny computer-controlled fans that enables engineers to mimic everything from a mild breeze to a gale.
Numerous models derived from fluid mechanics are available to researchers, but getting the appropriate model quality and tuning that model for each vehicle, wind condition, and operating mode is difficult.
Existing machine learning methods, on the other hand, demand massive amounts of data for training, but cannot match the flying performance attained by classical physics-based methods. Adapting a complete deep neural network in real-time is a monumental, if not impossible, undertaking.
According to the researchers, Neural-Fly addresses these challenges by utilising a technique known as separation, which requires only a few parameters of the neural network to be altered in real-time. This is accomplished using their innovative meta-learning technique, which pre-trains the neural network so that only these critical parameters need to be changed in order to successfully capture the changing environment.
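The separation idea can be illustrated with a toy sketch: a pre-trained network supplies features that stay frozen in flight, and only a small set of linear coefficients is adapted online to fit the wind-induced residual force. This is a minimal illustration of the concept, not the authors' implementation; the feature map, learning rate and simulated residual below are invented for the example.

```python
# Toy sketch of the "separation" idea: phi(x) stands in for the frozen,
# pre-trained feature network; only the few coefficients in `a` adapt
# online so that phi(x) . a tracks the wind-induced residual force.

def phi(x):
    """Stand-in for the pre-trained feature network (frozen at flight time)."""
    return [1.0, x, x * x]

def adapt_step(a, x, residual, lr=0.1):
    """One online gradient step on the few adaptable parameters `a` only."""
    features = phi(x)
    error = sum(ai * fi for ai, fi in zip(a, features)) - residual
    return [ai - lr * error * fi for ai, fi in zip(a, features)]

def true_residual(x):
    """Simulated wind effect the controller must learn on the fly."""
    return 2.0 + 0.5 * x - 0.3 * x * x

a = [0.0, 0.0, 0.0]                      # the few online parameters
for step in range(5000):                 # stream of flight measurements
    x = (step % 20) / 10.0               # cycle through operating points
    a = adapt_step(a, x, true_residual(x))

prediction = sum(ai * fi for ai, fi in zip(a, phi(1.0)))
final_error = abs(prediction - true_residual(1.0))
```

Because only a handful of numbers change per step, such an update is cheap enough to run inside a real-time control loop, which is the practical point of adapting a few parameters rather than an entire deep network.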
After only 12 minutes of flying data, autonomous quadrotor drones outfitted with Neural-Fly learn how to respond to severe winds so well that their performance improves dramatically as judged by their ability to precisely follow a flight route.
When compared to drones equipped with current state-of-the-art adaptive control algorithms that identify and respond to aerodynamic effects but lack deep neural networks, Neural-Fly’s error in following the flight path is 2.5 to 4 times lower.
Landing may appear more difficult than flight; however, unlike previous systems, Neural-Fly can learn in real-time. As a result, it can react on the fly to wind variations and does not require post-processing.
In-flight tests were done outside of the CAST facility; Neural-Fly functioned just as well as it did in the wind tunnel. Additionally, the researchers showed that flight data collected by one drone can be transferred to another, establishing a knowledge pool for autonomous vehicles.
The drones were outfitted with a typical, off-the-shelf flight control computer utilised by the drone research and enthusiast communities. Neural-Fly was built into an onboard Raspberry Pi 4 computer, which is the size of a credit card and costs roughly $20.
The National University of Singapore’s Associate Professor Gary Tan employs technology to model and forecast human movement and then uses that information to optimise evacuation, reduce accidents, and ease traffic congestion during emergency situations. He is particularly interested in modelling how people would run or flee in such circumstances.
According to Associate Professor Gary, when people are in a panic, they act extremely differently and try to anticipate what would happen when, for example, they must evacuate an MRT station due to a bomb threat or a fire.
“In a crisis, each second is crucial. Effective evacuation and rescue plans are essential because delays might result in more fatalities,” says Associate Professor Gary.
Associate Professor Gary, together with his students, PhD candidates Wang Chengxin and Muhammad Shalihin bin Othman, has created this special framework. It uses deep learning methods to track the real-life movement of pedestrians through video feeds. This behaviour is then converted into information that a virtual simulator can use to recreate situations and occurrences that would be too expensive or risky to recreate in reality.
The project’s main goal was to create a disaster simulation. This data-driven approach makes it easier to build crowd management tactics that are more effective and delivers a more accurate prediction of human reactions in a crisis.
The framework interprets the movement patterns of pedestrians in real-world video feeds and converts them into data that can be used in a virtual simulator. The technology uses deep learning techniques to identify objects in specific video frames and accurately track them across the video feed.
They recreate settings and imitate actions that would be too expensive or risky to be carried out in real life. This enables the researchers to simulate various evacuation and rescue plans to determine the best course of action to take in an emergency.
The methodology is distinctive because, in contrast to earlier pedestrian simulation methods, it takes a data-driven approach and aims to investigate human behaviour directly from real-life footage. Since they are adapted from real video, this raises the level of realism.
The researchers considerably improved the tracking algorithm that analyses how people move in the videos. A good tracking algorithm is necessary to extract realistic trajectories from real-world recordings. With highly accurate trajectory data, they can simulate realistic human movements and make more accurate predictions.
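As a concrete illustration of the tracking stage, the sketch below links per-frame detections into trajectories by bounding-box overlap. It is a simplified stand-in: a hand-written greedy matcher replaces both the deep-learning detector and the improved tracker the team actually built, and the boxes are invented.

```python
# Greedy overlap tracker: detections from each video frame are linked
# across frames by intersection-over-union, yielding per-pedestrian
# trajectories of the kind fed into a simulator.

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def track(frames, threshold=0.3):
    """Carry an ID forward to the best-overlapping detection in the next
    frame; unmatched detections start new tracks."""
    tracks = {}          # id -> list of boxes (the trajectory)
    active = {}          # id -> last seen box
    next_id = 0
    for detections in frames:
        matched = {}
        for box in detections:
            best_id, best = None, threshold
            for tid, last in active.items():
                score = iou(box, last)
                if score > best and tid not in matched:
                    best_id, best = tid, score
            if best_id is None:
                best_id = next_id
                next_id += 1
                tracks[best_id] = []
            matched[best_id] = box
            tracks[best_id].append(box)
        active = matched
    return tracks

# Two pedestrians walking right across three frames.
frames = [
    [(0, 0, 10, 20), (50, 0, 60, 20)],
    [(2, 0, 12, 20), (53, 0, 63, 20)],
    [(4, 0, 14, 20), (56, 0, 66, 20)],
]
trajectories = track(frames)
```

Each value in `trajectories` is one pedestrian's path through the scene, which is exactly the kind of per-person trajectory a crowd simulator can replay.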
Following testing, it was discovered that a “greater than expected” number of trajectories was successfully imported into the simulator from the videos.
Since releasing their research, the team has focused on developing further pedestrian monitoring systems. One, known as the Graph-based Temporal Convolutional Network (GraphTCN), uses artificial intelligence to track pedestrians’ temporal and geographical interactions with one another. The outcome is a behavioural model that can more faithfully simulate human movement.
The researchers are currently developing a new model that reasons more deeply. The Conscious Movement Model, or CMM, analyses CCTV footage and other real-world recordings to identify human behavioural patterns. These patterns are used to build a deep learning model that then influences the movements of pedestrians in the simulation.
Researchers can increase the precision of prediction simulations by including genuine pedestrian movements. This will enable them to automatically run optimisation algorithms and suggest the optimal course of action in various what-if scenarios. Beyond disaster situations, the research can be used to model the movement of both people and vehicles in simulations of traffic congestion and accidents.
A research team from the LKS Faculty of Medicine at The University of Hong Kong (HKUMed) has developed more efficient CRISPR-Cas9 variants that could be useful for gene therapy applications. By establishing a new pipeline methodology that applies machine learning to high-throughput screening to accurately predict the activity of protein variants, the team has expanded the capacity to analyse up to 20 times more variants at once without needing to acquire additional experimental data, which vastly accelerates protein engineering.
The pipeline has been successfully applied in several Cas9 optimisations and engineered new Staphylococcus aureus Cas9 (SaCas9) variants with enhanced gene editing efficiency. The findings are now published in Nature Communications and a patent application has been filed based on this work.
Staphylococcus aureus Cas9 (SaCas9) is an ideal candidate for in vivo gene therapy owing to its small size that allows packaging into adeno-associated viral vectors to be delivered into human cells for therapeutic applications. However, its gene-editing activity could be insufficient for some specific disease loci.
Before it can be used as a reliable tool for the treatment of human diseases, further optimisation of SaCas9 is vital for precision medicine. This includes boosting its efficiency and precision by modifying the Cas9 protein.
The standard protocol for modifying the protein involves saturation mutagenesis, where the number of possible modifications that could be introduced to the protein exceeds the experimental screening capacity of even state-of-the-art high-throughput platforms by orders of magnitude.
In their work, the team explored whether combining machine learning with structure-guided mutagenesis library screening could enable the virtual screening of many more modifications to accurately identify the rare and better-performing variants for further in-depth validations.
The machine learning framework was tested on several previously published mutagenesis screens on Cas9 variants and the team was able to show that machine learning could robustly identify the best performing variants by using merely 5-20% of the experimentally determined data.
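The principle of learning from a small measured subset and virtually ranking the rest can be sketched with a toy additive model. The activity function, per-site effects and "measured" subset below are invented, and real screens include epistatic interactions that a purely additive fit ignores, which is why the actual pipeline uses machine learning.

```python
# Toy virtual screen: "measure" only the wild type and every single
# mutant (9 of 81 variants, roughly 11% of the screen), learn per-site
# effects, then rank all combinatorial variants without measuring them.
import itertools

positions = 4
choices = [0, 1, 2]                  # toy amino-acid options per site
site_effect = [[0.0, 0.5, -0.2], [0.0, 1.0, 0.3],
               [0.0, -0.4, 0.8], [0.0, 0.2, 0.9]]

def activity(variant):
    """Ground-truth activity (a stand-in for the experimental screen)."""
    return 1.0 + sum(site_effect[i][aa] for i, aa in enumerate(variant))

all_variants = list(itertools.product(choices, repeat=positions))  # 81 total

wild_type = (0, 0, 0, 0)
base = activity(wild_type)
learned = [[0.0] * len(choices) for _ in range(positions)]
for i in range(positions):
    for aa in choices:
        single = list(wild_type)
        single[i] = aa
        learned[i][aa] = activity(tuple(single)) - base  # one measurement

def predict(variant):
    """Virtually score a combinatorial variant from the learned effects."""
    return base + sum(learned[i][aa] for i, aa in enumerate(variant))

# Rank all 81 variants in silico and pick the predicted best.
best_predicted = max(all_variants, key=predict)
```

Here a small fraction of measurements suffices to rank the whole combinatorial space, loosely mirroring the study's finding that 5-20% of the experimentally determined data was enough; in the toy, the fit is exact only because the invented ground truth happens to be additive.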
The Cas9 protein contains several parts, including the protospacer adjacent motif (PAM)-interacting (PI) and Wedge (WED) domains, which facilitate its interaction with the target DNA duplex. The research team married the machine learning and high-throughput screening platforms to design an activity-enhanced SaCas9 protein by combining mutations in its PI and WED domains surrounding the PAM-containing DNA duplex. The PAM is crucial for Cas9 to edit the target DNA, and the aim was to reduce the PAM constraint for wider genome targeting whilst securing the protein structure by reinforcing the interaction with the PAM-containing DNA duplex via the WED domain.
In the screen and subsequent validations, the researchers identified new variants, including one named KKH-SaCas9-plus, with activity enhanced by up to 33% at specific genomic loci. Subsequent protein modelling analysis revealed new interactions formed between the WED and PI domains and the PAM-containing DNA duplex at multiple locations, which account for KKH-SaCas9-plus’s enhanced efficiency.
Until recently, structure-guided design has dominated the field of Cas9 engineering. However, it only explores a small number of sites, amino-acid residues, and combinations. In this study, the research team showed that screening at a larger scale, with less experimental effort, time and cost, can be conducted using the machine learning-coupled multi-domain combinatorial mutagenesis screening approach, which led them to identify the new high-efficiency variant KKH-SaCas9-plus.
An Assistant Professor at the School of Biomedical Sciences, HKUMed, stated that this approach will greatly accelerate the optimisation of Cas9 proteins, which could allow genome editing to be applied more efficiently in treating genetic diseases.
Phytochemical screening and DNA barcoding of economically significant bamboos will be conducted in the Philippines, both to preserve and propagate the species in the typhoon-affected Cagayan Valley and to investigate bamboo’s potential for use in the pharmaceutical and industrial sectors.
There are several benefits of using bamboo in the food, medicinal, phytochemical, medical, and industrial sectors, according to Alvin Jose L. Reyes and Eddie B. Abugan Jr from the Project Management Division (PMD) of the Department of Environment and Natural Resources (DENR)-Foreign Assisted and Special Projects.
They explained that seeds or living cells containing genetic resources beneficial for plant conservation and breeding are called germplasms. The DENR-PMD staff clarified that the classification of bamboo germplasm is an essential link between the preservation of diversity and the utilisation of germplasm.
A study dubbed the Bamboo Characterisation Project of the Cagayan State University (CSU)-Gonzaga was recently presented to the DENR Protected Area Management Board (PAMB) in Sta. Ana, Cagayan by its project leader, Jeff M. Opeña. The presentation concerned the project’s request for a free permit to carry out bamboo characterisation and sample collection tasks on the protected landscape and seascape of Palaui Island.
The CSU-Gonzaga research lab will also be renovated as part of the project, which will collect and classify various species across the different environments of the province of Cagayan. Furthermore, DNA barcoding will serve as a contemporary and innovative method of classifying bamboo species. It will speed up the process by which experts identify the species they want to utilise based on characteristics such as quick reproduction or medicinal properties.
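At its core, DNA barcoding identifies a sample by comparing a short marker sequence against a reference library and reporting the closest match. The sketch below shows the idea; the sequences and species names are made up for illustration, and real barcoding uses standard marker regions and curated databases.

```python
# Minimal DNA barcoding illustration: identify an unknown sample by
# finding the reference barcode with the highest sequence identity.

reference = {
    "Species A": "ATGGCTTACCGATCGATCGGA",
    "Species B": "ATGGCATACCGATGGATCGTA",
    "Species C": "TTGGCTTACGGATCGAACGGA",
}

def identity(a, b):
    """Fraction of matching positions between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def identify(query):
    """Return the reference species whose barcode is most similar."""
    return max(reference, key=lambda sp: identity(reference[sp], query))

sample = "ATGGCTTACCGATCGATCGTA"   # one mismatch from Species A's barcode
match = identify(sample)
```

A real pipeline would additionally align sequences of unequal length and flag queries whose best match falls below a similarity threshold as potentially new species.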
Bamboo has traditionally been classified according to how frequently or abundantly it flowers: annually, sporadically or regularly, and gregariously. However, because flowering can take years or even decades to observe, describing floral morphology has proven a limitation and a challenge.
Meanwhile, through biochemical characterisation by phytochemical (plant chemistry) screening, professionals in pharmaceuticals and medicine can find secondary plant metabolites in bamboo that have application potential in those industries.
Primary metabolites comprise small molecules such as amino acids and carbohydrates, while secondary plant metabolites such as anthocyanins, alkaloids, flavonoids, saponins, phenols, steroids, tannins and terpenoids are explored for herbal medicinal purposes, among other prospective commercial uses.
Additionally, Executive Order 879, which created the Philippine Bamboo Industry Development Council (PBIDC), required that 25% of the Department of Education’s annual supply of school desks be constructed of bamboo.
According to a direction sent to the DENR’s Forest Management Bureau, Laguna Lake Development Authority, and Mines and Geosciences Bureau, bamboo should be planted in the agency’s own reforestation zones.
In addition to reducing typhoon flooding, DENR wants to employ bamboo as a climate change mitigation strategy. Bamboo is known to absorb five metric tonnes of carbon dioxide per hectare of plantation. Bamboo is being planted along the Bicol and Marikina rivers, which are typically inundated during typhoons. DENR is also advocating the use of engineered bamboo as a lumber replacement.
The phytochemical and morphological studies are the first bamboo species studies to consider the various habitats where bamboo grows in the province of Cagayan. The study of bamboo species growth will focus on two volcanoes: Smith Volcano, also known as Mount Babuyan, which is politically part of Calayan Island, and Mount Cagua in Gonzaga.
The bamboo species will be researched using DNA barcoding across habitats including coastal locations, residential areas, grasslands, agroecosystems, areas next to water bodies, caverns, areas close to the volcanoes, rainforests, islands, protected regions and others.
The State Government is putting forward AU$ 1.2 million over four years to establish the State’s first Creative Technology Innovation Hub in Bunbury. This is aimed at boosting the regional creative enterprises of Western Australia.
The WA Creative Technology Innovation Hub (WACTIH) was announced by the region’s Innovation and ICT Minister in Bunbury and will operate in collaboration with the State Government, Edith Cowan University, the City of Bunbury and industry to stimulate and grow Western Australia’s emerging creative and immersive technology industry. The WACTIH aims to help businesses and creative enterprises grow by linking research, entrepreneurship and education in the use of digital and immersive technologies.
As the South-West is home to over 320 creative and digital businesses, the WACTIH will help support hundreds more businesses across the State with specialised advice and services. The hub is being established to aid the growth of a future-ready workforce, entrepreneurs, start-ups and innovators in WA and its regions. The focus of the hub will be on creative digital industries including gaming, experiential and immersive technology, software development, product design, advertising, film and media.
Funded through the McGowan Government’s AU$ 16.7 million New Industries Fund, the WACTIH will become part of the State’s established innovation hubs in life sciences, data science and cyber security to build capability and capacity to diversify the economy, leverage new commercial opportunities and create jobs.
The Innovation and ICT Minister of WA stated the Creative Technology Innovation Hub establishment announcement will not only boost creative tech enterprises across the State but is a vote of confidence in regional innovators. The hub is expected to push economic value in the regions through business and skills transformation for increased, long-term advantage.
The Government of Western Australia’s New Industries Fund supports and accelerates WA’s innovative start-ups, emerging businesses and small and medium enterprises to diversify local and regional economies and create jobs and industries. Through the new hub, the McGowan Government aims to expand the State’s presence in the global digital supply chain of services, content and code.
The Bunbury MLA stated that Bunbury is the gateway to the State’s pristine South-West region and is now proudly home to hundreds of digital and creative businesses and innovators. The MLA added that he looks forward to the establishment of the WA Creative Technology Innovation Hub and the significant opportunities and benefits it will offer to the Bunbury community.
About the New Industries Fund
The New Industries Fund is an AU$ 16.7 million initiative to support and accelerate new and emerging businesses to create local jobs over four years. The Fund is a key component of the Western Australian Government’s Plan for Jobs and its approach to diversifying the economy as well as creating jobs. The WA innovation hubs bring a critical mass of people together, with access to expertise and facilities, making better use of talent and technology, and creating local jobs.
While innovation takes place across Western Australia, a designated hub provides focus and acts as a beacon to attract start-ups and small and medium-sized enterprises (SMEs). They also foster community pride by helping to promote academic excellence and industry strength.
Researchers at the Massachusetts Institute of Technology (MIT) previously demonstrated a robotic arm that uses both optical information and RF waves to find hidden items bearing radio frequency identification (RFID) tags, which reflect signals emitted by an antenna.
Building on that work, they have now developed a new system that can efficiently retrieve any object buried in a pile. As long as some of the items in the pile have RFID tags, the target item itself does not need to be tagged for the system to recover it.
The system’s algorithms, known as FuseBot, reason about the likely location and orientation of objects beneath the pile. FuseBot then determines the most efficient method for removing obstructing objects and extracting the target item. FuseBot was able to find more hidden items than a cutting-edge robotics system in half the time.
This speed could be particularly beneficial in an e-commerce warehouse. According to senior author Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science and director of the Signal Kinetics group in the Media Lab, a robot tasked with processing returns could find items in an unsorted pile more efficiently using the FuseBot system.
More than 90% of U.S. stores already utilise RFID tags, according to recent market analysis, but because the technology is not yet universal, only some items inside a pile may be tagged. This issue served as the impetus for the group’s research.
With FuseBot, a robotic arm retrieves an untagged target object from a jumbled pile using an attached video camera and RF antenna. To produce a 3D model of the surroundings, the system scans the pile with its camera.
It simultaneously transmits signals from its antenna to locate RFID tags. Since most solid objects can be penetrated by these radio waves, the robot can “see” deep inside the pile. Because the target object is untagged, FuseBot knows it cannot be in the exact same location as an RFID tag.
Since the robot is aware of the target item’s size and shape, algorithms combine this data to update the 3D model of the surroundings and suggest suitable spots for it. The system then uses the pile of things and the positions of the RFID tags to decide which item should be removed to locate the target item quickly.
The robot doesn’t know how the items are arranged beneath the pile, or how a flimsy object might be deformed by larger objects pressing against it. It overcomes this difficulty with probabilistic reasoning, using its knowledge of an object’s size and shape, and of the location of its RFID tag, to model the 3D space that object is most likely to occupy.
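A simplified, hypothetical one-dimensional version of this probabilistic reasoning can be sketched as follows: the untagged target's location is a probability distribution over pile cells, cells covered by a located RFID-tagged item are ruled out, and the robot removes the untagged item covering the most remaining probability mass. The pile contents and cell layout are invented; the real system reasons over a 3D model.

```python
# 1-D pile sketch: rule out tagged cells, renormalise the target's
# location distribution, then pick the most promising item to remove.

cells = 10
prob = [1.0 / cells] * cells             # uniform prior over the pile

# Invented pile contents: item -> cells it occupies.
items = {"box": [0, 1, 2], "mug": [3, 4], "toy": [5, 6, 7, 8], "cap": [9]}
tagged = {"mug"}                         # RFID localises the mug to cells 3-4

# The untagged target cannot share a cell with a located tagged item.
for name in tagged:
    for c in items[name]:
        prob[c] = 0.0
total = sum(prob)
prob = [p / total for p in prob]         # renormalise the prior

def covered_mass(name):
    """Probability that the target lies beneath a given item."""
    return sum(prob[c] for c in items[name])

# Next removal: the untagged item most likely to hide the target.
best = max((n for n in items if n not in tagged), key=covered_mass)
```

After each removal the real robot rescans and repeats this update, which is why the plan can adapt as the pile changes.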
As it removes objects, it reasons about which object would be best to eliminate next. After removing one item, the robot scans the pile once more and adjusts its plan in light of the new information.
Compared to the other robotic system’s 84 per cent success rate, FuseBot removed the target object successfully 95 per cent of the time. It identified and retrieved the target items more than twice as quickly and did so with 40 per cent fewer moves.
The software that does the complicated reasoning for FuseBot may be built on any computer; it only needs to interact with a robotic arm that has a camera and an antenna.
The researchers hope to soon equip FuseBot with more intricate models that will improve its performance with deformable objects.
Advances in LED technology as a light source have made it possible to design lighting based on how it appears to the human eye. With a solid-state light source available, what was once only theory can now be validated in practice.
“To test lamps and luminaires, two tools called integrating-sphere and type C goniophotometer are used,” says Dr Revantino, ST, MT, Sub Coordinator of Electricity and Battery, Ministry of Industry of the Republic of Indonesia, and an alumnus of Bandung Institute of Technology (ITB).
Changing the drive strength of the individual colour chips in an LED light, which is called “tuning”, changes the spectrum of the light. This gives users the freedom to choose the colour of the light based on their tastes, but it also changes how the colours of illuminated objects appear.
The Physical Engineering Vocational Body of the Indonesian Engineers Association (BKTF-PII) held a sharing session on the topic “Spectral-Based Lighting” to talk about the progress of LED technology.
If the spectral reflection properties of an object are known, the spectral interaction method can be used to simulate how the colour of an object will look under a certain LED light spectrum. This is useful in lighting situations where the colour of the object needs to be brought out so that it looks right. This is a step toward the Lighting 4.0 era, which is all about putting people at the centre of lighting.
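The spectral interaction method boils down to a wavelength-by-wavelength product of the lamp's spectral power distribution and the object's spectral reflectance. The coarse five-band spectra below are invented for illustration; a real calculation would use finely sampled spectra and the CIE colour matching functions to obtain actual colour coordinates.

```python
# Simplified spectral interaction: reflected spectrum = lamp spectrum
# multiplied band-by-band by the object's reflectance. Tuning the lamp
# changes how the object's colour appears.

bands = ["blue", "cyan", "green", "yellow", "red"]

# Two tunable LED spectra (relative power per band) ...
cool_white = [1.0, 0.7, 0.9, 0.6, 0.4]
warm_white = [0.3, 0.4, 0.7, 0.9, 1.0]

# ... and a reddish object's spectral reflectance.
reflectance = [0.1, 0.1, 0.2, 0.6, 0.9]

def reflected(spd):
    """Spectrum leaving the object under a given lamp spectrum."""
    return [s * r for s, r in zip(spd, reflectance)]

def red_fraction(spectrum):
    """Share of reflected power in the yellow and red bands."""
    return sum(spectrum[3:]) / sum(spectrum)

# Tuning the LED changes how saturated the object's red appears.
under_cool = red_fraction(reflected(cool_white))
under_warm = red_fraction(reflected(warm_white))
```

This is the mechanism behind choosing a spectrum that "brings out" an object's colour: the warm spectrum concentrates more reflected power in the bands the reddish object reflects strongly.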
Solid-state lighting is one type of lighting based on this principle. It is produced by light-emitting diodes (LEDs), organic LEDs, or polymer LEDs. This type of lamp is called “solid-state” because it has no gas or filament like most lamps. Solid-state lighting has many advantages: it is economical, its flexibility opens the door to validating lighting theory in practice, and over the last ten years it has grown very quickly.
Technology is also improving in the lighting industry to support Industry 4.0. This is shown by the presence of spectral-based lighting, part of the Lighting 4.0 innovation: an enabler offering flexibility in light control and human-centred lighting.
Data Engineering in Industry 4.0
“Data is the new oil” has become a popular saying in the world of technology. Data must be used to make decisions and predictions about the future of a company or even an industry. Accordingly, the sciences and jobs that deal with data, from Data Scientist to Data Analyst to Data Engineer, are also growing quickly.
Data Engineering is a subfield of software engineering that focuses on building data systems and infrastructure. While data scientists focus on quickly and correctly getting the data they need, data engineers are usually in charge of building the systems and infrastructure that handle large amounts of data.
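A minimal, hypothetical example of the kind of system a data engineer builds: raw records are extracted, cleaned and loaded into a queryable store for analysts and data scientists. The table and field names below are invented for illustration.

```python
# Tiny extract-transform-load (ETL) sketch using an in-memory SQLite
# database as the curated store.
import sqlite3

# Extract: raw sensor readings as they might arrive from the shop floor.
raw_readings = [
    {"machine": "press-1", "temp_c": "71.5"},
    {"machine": "press-1", "temp_c": ""},        # incomplete row to drop
    {"machine": "lathe-2", "temp_c": "64.0"},
]

def transform(rows):
    """Drop incomplete rows and cast types (the 'T' in ETL)."""
    return [(r["machine"], float(r["temp_c"])) for r in rows if r["temp_c"]]

# Load: write the cleaned rows into a table analysts can query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (machine TEXT, temp_c REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)", transform(raw_readings))

# Data scientists can now get to the data quickly and correctly.
rows = conn.execute(
    "SELECT machine, temp_c FROM readings ORDER BY machine"
).fetchall()
```

In production the same pattern runs at far larger scale, with a proper warehouse in place of SQLite, but the division of labour is the same: the engineer curates the store so that analysis can simply query it.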
Data engineers play a role in the industrial world, helping to solve the problems that arise there. Digitalising an industry involves a series of steps: system integration, visibility of operations, data analysis, and operational optimisation.
Several real-world solutions, from different stages of flow to digital parts that can shape the digitalisation of industry, have been created. Starting with operational intelligence, operator assistance, digital maintenance, energy management, condition monitoring, a training portal, quality management, and inventory management. These different solutions that have already been put in place can help users learn how to use Industry 4.0 empowerment technology effectively in Indonesia.
With the rapid advancement of global technology, the Hong Kong Applied Science and Technology Research Institute (ASTRI) is engaging more enterprises in the cooperation and common development of “industry, academia and research”. ASTRI has thus launched the “IPs and Service Offerings for Technology Start-ups and SMEs”, selecting 20 innovative technology offerings across varying categories of entry services, including 8 hardware, 6 software and 6 consulting service offerings, with entry prices of HK$50,000 to HK$150,000.
ASTRI focuses on transferring technology to the industry, transforming it into commodities, developing high-quality and affordable patents, information and communication technologies, and creating important and far-reaching influence. In cooperation with research institutions, enterprises and academia, ASTRI researches important technologies that the industry pays attention to, and assists enterprises to enhance their competitiveness.
The selected scientific research projects cover a wide range of content, mainly addressing company evaluation, technology and network security issues, writing, electronic technology and electricity issues, and more. Private institutions in Hong Kong can contact relevant professionals and engineers at ASTRI for assistance and enquiries.
Hong Kong’s scientific research has undergone many years of development. However, many start-ups, and even small and medium-sized enterprises that have been rooted in Hong Kong for many years striving to improve the field of technology, have been paying high fees for the solutions to technical problems.
Until now, no platform provided cost-effective solutions for them or understood their business needs. Thus, the support provided via the “IPs and Service Offerings for Technology Start-ups and SMEs” caters to the needs of enterprises and is expected to help the industry solve its difficulties.
Since its establishment 22 years ago, ASTRI has provided different innovative technology software, hardware or technical support to various government departments, public organizations and many private enterprises in Hong Kong, contributing to the smooth enhancement or assistance in their development processes. With the industrialisation of technology and the intellectualization of industries, a new era of competition in Hong Kong will emerge.
Examples of selection options:
Cybersecurity awareness and benchmarking assessment
General cybersecurity awareness training for users of any skill level, from general IT users and technical employees to management-level IT professionals, aimed at reducing cyber risk at the human level. The training also includes a general cybersecurity assessment and brief benchmarking covering web applications, mobile applications, networks, security architecture and cloud infrastructure, to ensure SMEs have a comprehensive understanding of their cyber defence maturity.
ESG compliance analytics
An industry-specific (e.g., financial, energy) ESG benchmarking report that lists the average or distribution of listed companies across different ESG metrics, as well as the top performers in each metric or category. Based on the SME’s ESG metrics, its performance or status in the industry relative to peer companies will also be reported, along with improvement suggestions. Analytics will help SMEs generate reports automatically by filling in the minimum required information.
Mixed language speech recognition and audio indexing
Based on client-supplied audio records as training data, help train a preliminary mixed language model supporting Cantonese, English and Mandarin for applications in specific industry domains such as insurance, media, telecom, banking and/or KOL.
Other items include:
- Speech recognition & audio indexing
- Financial document analysis
- Smart OCR & document processing
- Behaviour and emotion analysis for driving safety
- Smart indoor and outdoor Geographic Information System
- IoT technologies and device communications
- Retired battery screening solutions
- Eco-friendly power system
- Safe energy storage solutions
- Analog IC design for medical devices
- 3D Integration power electronics modules
- ASTRI AR Glass
- Wearable technologies
- Gantry Free Electronic Road Pricing
- ESD protection design consultancy
- Digital document processing
- DC solutions for energy saving and protection