
Dr Jim Webber had just returned home from dropping his child at school when he found that a rear window had been smashed and that a startled burglar had fled back through it. Fortunately, a neighbour was able to jot down the thief’s number plate as he was speeding off and it was reported to the UK police.
The authorities are able to make use of a POLE (people-object-location-event) database in responding to such crimes. Dr Webber happens to be the chief scientist at Neo4j, a graph data platform company whose software is widely used in government and law enforcement. “I believe the police response was partly guided by my software to track the criminals on the A3 road heading into London. They were able to apprehend the gang and return the stuff they had stolen from me and others.”
This is one of Dr Webber’s favourite stories to tell when asked about the functionality of graph data. It’s both personal and social.
The graph platform is essentially a collection of software systems that help people to understand data that is represented in a graph. “And by graph, we mean the mathematical thing, your edges and vertices if you remember that in college,” Jim, the Chief Scientist at Neo4j and visiting professor at Newcastle University, explains. “In short, it’s a system that stores and retrieves connected data fast. Very fast.”
Unlike other types of databases, Neo4j connects data as it is stored, enabling queries that were previously impractical, at speeds that were previously impossible. Users build data models as circles connected by arrows, with each arrow given a name, and together these elements produce very rich data models.
Elaborating, Jim says, “We know in a relational model we store data in relations or rows. And if we’re being sophisticated, we can join rows across tables in some very clever ways.” A graph data platform, by contrast, stores entity relationships directly, mirroring how software people think about the world. In a graph, entities can be related to each other in ways that range from very simple to very rich and complex. Jim explained, “So we’re able to build these data models that are circles, connected by arrows, and the arrows have names, and we can produce really rich data models. So we can do all of these rich things because the data is smart. Data in traditional data structures are really ‘un-smart.’”
In a column-store, he said, there is an implicit association between columns that is constraining. A document-store holds all the data that belongs together by virtue of being in the same document, but beyond that it offers little else.
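The “circles connected by named arrows” model Jim describes can be sketched in a few lines of Python. This is purely illustrative, not Neo4j’s actual API (in Neo4j the equivalent query would be written in Cypher), and the names and relationships below are invented:

```python
# A minimal sketch of the property-graph model: entities as nodes,
# named arrows as directed edges stored as (source, name, target).
class Graph:
    def __init__(self):
        self.edges = []  # list of (source, relationship_name, target)

    def add(self, source, name, target):
        self.edges.append((source, name, target))

    def neighbours(self, node, name):
        """Follow every arrow with the given name out of `node`."""
        return [t for (s, n, t) in self.edges if s == node and n == name]

g = Graph()
g.add("Alice", "KNOWS", "Bob")
g.add("Bob", "KNOWS", "Carol")

# "Friends of friends" is a two-hop traversal; no join tables needed,
# because the relationships themselves are first-class data.
friends_of_friends = [
    fof
    for friend in g.neighbours("Alice", "KNOWS")
    for fof in g.neighbours(friend, "KNOWS")
]
print(friends_of_friends)  # → ['Carol']
```

The point of the sketch is that the relationship names (“KNOWS”) carry meaning, which is what makes the data “smart” in Jim’s sense: queries follow named arrows rather than reconstructing associations through joins.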
The database sits at the centre of the Neo4j platform, surrounded by systems for visualising and exploring graphs in human terms. It can run sophisticated algorithms to answer questions such as: who is the most popular person in the graph? In the aggregate, the platform is probably no different from other mature databases; the difference is that it processes graphs rather than other data models.
THE CASE TO START THE GRAPH JOURNEY
Graph technology is still considered to be in the early stage of its evolution, as many people have not used it, or perhaps have not even heard of it yet. As the category founder, Neo4j is early in the cycle. And although it feels late to Jim, who has been plugging away at it for more than a decade, most people are only now coming to it.
And Jim has the perfect book for a graph neophyte, one he authored himself: “Graph Data Platforms for Dummies.” He said it is a very short book that, in one evening of humane reading, explains everything one needs to know to understand the basics of graphs. The book is available for free on the Neo4j website.
Jim is quick to point out that the book should be seen as a snapshot. If there is a need to quickly understand this graph arena that seems to be a macro-trend in the industry, a CIO can quickly and easily browse through the book and understand the thinking and the use cases.
For anyone who may be sitting on the fence, Jim has a challenge, “What I ask of you is give me an afternoon of your time. Spend one afternoon working with Neo4j with a good part of your problem, with something meaty and useful, and if you can’t make progress in one afternoon, Neo4j and graphs are not for you. But if you can make progress in one afternoon, it will illuminate it for you. You’ll find out it’s not hard or mysterious. We designed this system to be explicitly humane, not arcane.”
There is a plethora of learning material available and a lot of information on government use cases, including Neo4j’s white papers. Yet, interestingly, graphs have frequently made their way into businesses and governments serendipitously: keen engineers, or the big system integrators that agencies often use, who are experienced with graphs may have brought the technology to their assigned work, and that is how it entered the organisation.
The adoption of graphs will likely take more time, Jim conceded, especially among government agencies with well-established systems of record that are disinclined to immediate change. This reticence is understandable: the public sector has systems of record that have been working relatively well. So it makes perfect sense for these agencies to want to understand the associativity of their data before commencing such a migration or adoption.
Typical citizen information across diverse bands, such as tax, health and criminal records, is a good example. These records cannot be clubbed together ad hoc into one big database – it simply would not be workable. What can be done instead is to put an “index” on top of all the information. Called a knowledge graph, this harvests all of the diverse information into a single organised graph, from which a very sophisticated understanding of the needs of citizens can be built.
Another advantage of having a knowledge graph layered on top is that existing systems do not get disturbed. All sophisticated queries can be done at the knowledge graph level. If the physical row or the physical document from the underlying store is needed, it can be pulled up without disturbing or changing the original data.
In sum, the knowledge graph can collate information across sectors to address complicated queries. A government agency can then more easily conduct an evaluation: Do tax returns match the benefits being claimed? Do a person’s criminal records allow them to be in a specific kind of job? Such graphs can also be deployed at a population level: What is the level of fraud in a tax system versus the benefits system? Large-scale, macro depictions are vital to intelligent, data-driven governance, and this sort of information can be collected to help inform agency or government policy.
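The idea of a knowledge graph layered over untouched systems of record can be sketched concretely. Everything below (the store names, record fields, identifiers and the income rule) is invented for illustration; the point is that the graph layer holds only links between records, and the underlying stores are read but never modified:

```python
# Hypothetical systems of record, left exactly as they are.
tax_records = {"TX1": {"citizen": "C100", "declared_income": 55_000}}
benefit_records = {"BN7": {"citizen": "C100", "income_tested_claim": True}}

# The knowledge graph is just an "index": links between record IDs.
knowledge_graph = [
    ("C100", "HAS_TAX_RECORD", "TX1"),
    ("C100", "HAS_BENEFIT_CLAIM", "BN7"),
]

def follow(node, relation):
    return [t for (s, r, t) in knowledge_graph if s == node and r == relation]

def flag_mismatches(income_threshold=50_000):
    """Cross-check benefit claims against tax data via the graph layer."""
    flagged = []
    for citizen in {s for (s, _, _) in knowledge_graph}:
        for tax_id in follow(citizen, "HAS_TAX_RECORD"):
            income = tax_records[tax_id]["declared_income"]
            for claim_id in follow(citizen, "HAS_BENEFIT_CLAIM"):
                claim = benefit_records[claim_id]
                if claim["income_tested_claim"] and income > income_threshold:
                    flagged.append((citizen, claim_id))
    return flagged

print(flag_mismatches())  # → [('C100', 'BN7')]
```

The query runs entirely at the graph level; the original row or document is pulled from the underlying store only when needed, which is the non-disruptive property described above.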
WHAT THEN IS HOLDING GRAPH BACK?
The biggest barrier to graph dominance is that there is already well-known technology out there. There is a lot of data technology, particularly in the relational world, that is very mature, that people know how to run and have been operating for many years. When agencies and CIOs already have a working solution, why would they take a chance on the graph?
The challenge for graphs in converting CIOs is to make the impetus to adopt very compelling. If the benefits are not tenfold or more, a CIO will rightly not spend even a few minutes on it. Further, incumbent vendors, while they may be incredibly expensive, can essentially provide most government departments another one, or ten, licences basically for no charge under existing agreements.
Intellectually, a CIO may understand that graphs might be better here, but adopting them involves learning new technology, bringing on board a new supplier and negotiating another licensing agreement for substantial money. The existing vendor’s product, while not optimal, can be sort of “shoehorned” in, and it’s free.
That’s not to say there aren’t opportunities for graphs to gain a foothold. When one takes equivalent queries from the relational world and ports them to a graph, there is a paradigm-shifting minutes-to-milliseconds experience: something that might take several minutes in the relational world may take only several milliseconds in the graph world. For some people that is so compelling that they make the jump.
For sectors like policing and health services, minutes are too long. If a fugitive is coming through an airport, waiting 30 minutes for a report to run can thwart a chance to detain them before they disappear into the general population. Similarly, in the medical world, information may be required to help make an immediate diagnosis. It can’t wait several minutes while the patient is in pain or is in a life-threatening condition.
Even things such as product recommendations are much better with graphs. A digital-native customer looking to buy something on a web app or a phone app may wait half a second, and then they will move on.
Government agencies are challenged with complex, ever-evolving problems every day. While the answers exist somewhere in a vast amount of data, they are only identifiable with the right technology to make sense of the interconnectedness within the data. For proponents like Dr Jim Webber, that technology is a Graph Data Platform.
A research team from the LKS Faculty of Medicine at The University of Hong Kong (HKUMed) has developed more efficient CRISPR-Cas9 variants that could be useful for gene therapy applications. By establishing a new pipeline methodology that implements machine learning on high-throughput screening to accurately predict the activity of protein variants, the team has expanded the capacity to analyse up to 20 times more variants at once without needing to acquire additional experimental data, which vastly accelerates the speed in protein engineering.
The pipeline has been successfully applied in several Cas9 optimisations and engineered new Staphylococcus aureus Cas9 (SaCas9) variants with enhanced gene editing efficiency. The findings are now published in Nature Communications and a patent application has been filed based on this work.
Staphylococcus aureus Cas9 (SaCas9) is an ideal candidate for in vivo gene therapy owing to its small size that allows packaging into adeno-associated viral vectors to be delivered into human cells for therapeutic applications. However, its gene-editing activity could be insufficient for some specific disease loci.
Before it can be used as a reliable tool for the treatment of human diseases, further optimisation of SaCas9 is vital for precision medicine: its efficiency and precision must be boosted by altering the Cas9 protein.
The standard protocol for modifying the protein involves saturation mutagenesis, where the number of possible modifications that could be introduced to the protein exceeds the experimental screening capacity of even state-of-the-art high-throughput platforms by orders of magnitude.
In their work, the team explored whether combining machine learning with structure-guided mutagenesis library screening could enable the virtual screening of many more modifications to accurately identify the rare and better-performing variants for further in-depth validations.
The machine learning framework was tested on several previously published mutagenesis screens on Cas9 variants and the team was able to show that machine learning could robustly identify the best performing variants by using merely 5-20% of the experimentally determined data.
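The virtual-screening idea can be sketched with synthetic data: experimentally measure only a small fraction of variants, fit a simple model to that subset, then rank every possible variant computationally. Everything below (the number of mutation sites, the effect sizes and the purely additive activity model) is invented for illustration and is far simpler than the team’s actual machine learning pipeline:

```python
import itertools
import random

random.seed(0)
N_SITES = 8
# Hidden "ground truth": each mutation adds or subtracts activity.
TRUE_EFFECT = [0.9, -0.4, 0.7, 0.1, -0.2, 0.5, -0.6, 0.3]

def activity(variant):
    """Simulated 'experimental' activity of a variant (tuple of 0/1 flags)."""
    return sum(e for e, m in zip(TRUE_EFFECT, variant) if m)

all_variants = list(itertools.product([0, 1], repeat=N_SITES))  # 256 variants
# Screen only ~20% of the library experimentally.
measured = random.sample(all_variants, k=len(all_variants) // 5)

def estimate_effect(site):
    """Estimate one site's effect from the measured subset alone."""
    with_mut = [activity(v) for v in measured if v[site]]
    without = [activity(v) for v in measured if not v[site]]
    return sum(with_mut) / len(with_mut) - sum(without) / len(without)

effects = [estimate_effect(s) for s in range(N_SITES)]

# Rank ALL 256 variants virtually using the fitted effects.
def predicted_score(v):
    return sum(e for e, m in zip(effects, v) if m)

predicted_best = max(all_variants, key=predicted_score)
true_best = max(all_variants, key=activity)
print(activity(predicted_best), activity(true_best))
```

Even with a fifth of the data, the fitted model recovers a near-optimal variant, which is the spirit of using 5-20% of the experimental measurements to identify top performers for in-depth validation.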
The Cas9 protein contains several parts, including the protospacer adjacent motif (PAM)-interacting (PI) and Wedge (WED) domains, which facilitate its interaction with the target DNA duplex. The research team married the machine learning and high-throughput screening platforms to design an activity-enhanced SaCas9 protein by combining mutations in its PI and WED domains surrounding the DNA duplex bearing a PAM. The PAM is crucial for Cas9 to edit the target DNA; the aim was to reduce the PAM constraint for wider genome targeting whilst securing the protein structure by reinforcing the interaction with the PAM-containing DNA duplex via the WED domain.
In the screen and subsequent validations, the researchers identified new variants, including one named KKH-SaCas9-plus, with activity enhanced by up to 33% at specific genomic loci. Subsequent protein modelling analysis revealed new interactions created between the WED and PI domains at multiple locations along the PAM-containing DNA duplex, accounting for KKH-SaCas9-plus’s enhanced efficiency.
Until recently, structure-guided design has dominated the field of Cas9 engineering, but it explores only a small number of sites, amino-acid residues and combinations. In this study, the research team showed that screening can be conducted at a much larger scale, with less experimental effort, time and cost, using the machine learning-coupled multi-domain combinatorial mutagenesis screening approach, which led them to identify the new high-efficiency variant KKH-SaCas9-plus.
The Assistant Professor of the School of Biomedical Sciences, HKUMed, stated that this approach will greatly accelerate the optimisation of Cas9 proteins, which could allow genome editing to be applied more efficiently in treating genetic diseases.
To preserve and propagate the species in the typhoon-affected Cagayan Valley, and to investigate bamboo’s potential for use in the pharmaceutical and industrial sectors, phytochemical screening and DNA barcoding of economically significant bamboos will be conducted in the Philippines.
There are several benefits of using bamboo in the food, medicinal, phytochemical, medical, and industrial sectors, according to Alvin Jose L. Reyes and Eddie B. Abugan Jr from the Project Management Division (PMD) of the Department of Environment and Natural Resources (DENR)-Foreign Assisted and Special Projects.
They explained that seeds or living cells containing genetic resources beneficial for plant conservation and breeding are called germplasms. The DENR-PMD staff clarified that classifying bamboo germplasm provides an essential link between preserving diversity and utilising the germplasm.
A study dubbed the Bamboo Characterisation Project of the Cagayan State University (CSU)-Gonzaga was recently presented to the DENR Protected Area Management Board (PAMB) in Sta. Ana, Cagayan, by its project leader, Jeff M. Opeña. The presentation concerned the university’s request for a free permit to carry out bamboo characterisation and sample collection on the protected landscape and seascape of Palaui Island.
As part of the project, the CSU-Gonzaga research lab will also be renovated, and various bamboo species will be collected and classified across different environments in the province of Cagayan. DNA barcoding will serve as a contemporary and innovative method of classifying bamboo species, speeding up the process by which experts identify the species they want to utilise based on characteristics such as quick reproduction or medicinal properties.
Bamboo has traditionally been classified according to how frequently or abundantly it flowers: annually, sporadically or regularly, and gregariously. However, because flowering can take years or even decades to occur, describing floral morphology has been a limitation and a challenge.
On the other hand, biochemical characterisation through phytochemical (plant chemistry) screening lets professionals in pharmaceuticals and medicine identify plant secondary metabolites in bamboo with application potential for their business.
While secondary plant metabolites such as anthocyanins, alkaloids, flavonoids, saponins, phenols, steroids, tannins and terpenoids are explored for herbal medicine, among other prospective commercial uses, primary metabolites comprise small molecules such as amino acids and carbohydrates.
Additionally, Executive Order 879, which created the Philippine Bamboo Industry Development Council (PBIDC), required that 25% of the Department of Education’s annual supply of school desks be constructed of bamboo.
According to a direction sent to the DENR’s Forest Management Bureau, Laguna Lake Development Authority, and Mines and Geosciences Bureau, bamboo should be planted in the agency’s own reforestation zones.
In addition to reducing typhoon flooding, DENR wants to employ bamboo as a strategy for mitigating climate change; bamboo is known to absorb five metric tonnes of carbon dioxide per hectare of plantation. Bamboo is being planted along the Bicol and Marikina rivers, which are typically inundated during typhoons. DENR is also advocating the use of engineered bamboo as a lumber replacement.
The phytochemical and morphological studies of bamboo species are the first in the province of Cagayan to consider the various habitats where bamboo grows. The study of bamboo species growth will focus on two volcanoes: Smith Volcano, also known as Mount Babuyan, politically located on Calayan Island, and Mount Cagua in Gonzaga.
Using DNA barcoding, the bamboo species will be researched across habitats including coastal locations, residential areas, grasslands, agroecosystems, areas next to water bodies, caverns, areas close to the volcanoes, rainforests, islands and protected regions.
The State Government is putting forward AU$ 1.2 million over four years to establish the State’s first Creative Technology Innovation Hub in Bunbury. This is aimed at boosting the regional creative enterprises of Western Australia.
The WA Creative Technology Innovation Hub (WACTIH) was announced by the region’s Innovation and ICT Minister in Bunbury and will operate in collaboration with the State Government, Edith Cowan University, the City of Bunbury and industry to stimulate and grow Western Australia’s emerging creative and immersive technology industry. The WACTIH aims to help businesses and creative enterprises grow by linking research, entrepreneurship and education in the use of digital and immersive technologies.
As the South-West is home to over 320 creative and digital businesses, the WACTIH will help support hundreds more businesses across the State with specialised advice and services. The hub is being established to aid the growth of a future-ready workforce, entrepreneurs, start-ups and innovators in WA and its regions. The focus of the hub will be on creative digital industries including gaming, experiential and immersive technology, software development, product design, advertising, film and media.
Funded through the McGowan Government’s AU$ 16.7 million New Industries Fund, the WACTIH will become part of the State’s established innovation hubs in life sciences, data science and cyber security to build capability and capacity to diversify the economy, leverage new commercial opportunities and create jobs.
The Innovation and ICT Minister of WA stated that the establishment of the Creative Technology Innovation Hub will not only boost creative tech enterprises across the State but is also a vote of confidence in regional innovators. The hub is expected to drive economic value in the regions through business and skills transformation for increased, long-term advantage.
The Government of Western Australia’s New Industries Fund supports and accelerates WA’s innovative start-ups, emerging businesses and small and medium enterprises to diversify local and regional economies and create jobs and industries. Through the new hub, the McGowan Government aims to expand the State’s presence in the global digital supply chain of services, content and code.
The Bunbury MLA stated that Bunbury is the gateway to the State’s pristine South-West region and is now proudly home to hundreds of digital and creative businesses and innovators. The MLA added that he looks forward to the establishment of the WA Creative Technology Innovation Hub and the significant opportunities and benefits it will offer to the Bunbury community.
About the New Industries Fund
The New Industries Fund is an AU$ 16.7 million initiative to support and accelerate new and emerging businesses to create local jobs over four years. The Fund is a key component of the Western Australian Government’s Plan for Jobs and its approach to diversifying the economy and creating jobs. The WA innovation hubs bring a critical mass of people together, with access to expertise and facilities, making better use of talent and technology, and creating local jobs.
While innovation takes place across Western Australia, a designated hub provides focus and acts as a beacon to attract start-ups and small and medium-sized enterprises (SMEs). They also foster community pride by helping to promote academic excellence and industry strength.
Researchers at the Massachusetts Institute of Technology (MIT) previously demonstrated a robotic arm that uses both optical information and RF waves to find hidden items carrying radio frequency identification (RFID) tags, which reflect signals sent from an antenna.
Building on that work, they have now developed a system that can efficiently retrieve any object buried in a pile. As long as some of the items in the pile have RFID tags, the target item itself does not need to be tagged for the system to recover it.
The system, known as FuseBot, runs algorithms that reason about the likely location and orientation of objects beneath the pile. FuseBot then determines the most efficient way to remove obstructing objects and extract the target item. In tests, FuseBot found more hidden items than a state-of-the-art robotics system, in half the time.
This speed could be particularly beneficial in an e-commerce warehouse. According to senior author Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science and director of the Signal Kinetics group in the Media Lab, a robot tasked with processing returns could find items in an unsorted pile more efficiently using the FuseBot system.
More than 90% of U.S. stores already utilise RFID tags, according to recent market analysis, but since the technology is not ubiquitous, only some items inside a pile may be tagged. This issue served as the impetus for the group’s research.
With FuseBot, a robotic arm retrieves an untagged target object from a jumbled pile using an attached video camera and RF antenna. To produce a 3D model of the surroundings, the system scans the pile with its camera.
It simultaneously transmits signals to find RFID tags from its antenna. Since most solid objects can be penetrated by these radio waves, the robot can “see” deep inside the pile. FuseBot is aware that the target object cannot be found in the exact same location as an RFID tag because it is not marked.
Since the robot is aware of the target item’s size and shape, algorithms combine this data to update the 3D model of the surroundings and suggest suitable spots for it. The system then uses the pile of things and the positions of the RFID tags to decide which item should be removed to locate the target item quickly.
The robot does not know how the items are arranged beneath the pile, or how a flimsy object might be deformed by bigger ones pressing against it. It overcomes this difficulty through probabilistic reasoning, leveraging its knowledge of an object’s size and shape, and the location of its RFID tag, to model the 3D space that object is most likely to occupy.
As it removes objects, it uses this reasoning to decide which item would be “best” to remove next. After removing one item, the robot scans the pile again and adjusts its plan in light of the new information.
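The probabilistic reasoning can be illustrated with a toy model. The real system reasons over full 3D geometry and object shapes; the 1D grid, prior probabilities and tag position below are invented purely to show the core idea, that detected tag locations rule out cells for the untagged target:

```python
# Toy FuseBot-style reasoning: a 1D "pile" of cells with a prior
# belief about where the untagged target might be.
TAG_CELLS = {2}                        # cells where an RFID tag was detected
PRIOR = [0.1, 0.2, 0.3, 0.25, 0.15]   # prior probability per cell

def posterior(prior, tag_cells):
    """The untagged target cannot sit exactly where a tag is,
    so zero those cells and renormalise the distribution."""
    masked = [0.0 if i in tag_cells else p for i, p in enumerate(prior)]
    total = sum(masked)
    return [p / total for p in masked]

def next_cell_to_dig(prior, tag_cells):
    """Excavate the cell where the target is now most likely to be."""
    post = posterior(prior, tag_cells)
    return max(range(len(post)), key=post.__getitem__)

print(next_cell_to_dig(PRIOR, TAG_CELLS))  # → 3
```

After each removal, the real robot rescans and repeats this update with fresh information, which is the replanning loop described above.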
FuseBot successfully extracted the target object 95 per cent of the time, compared to the other robotic system’s 84 per cent success rate. It identified and retrieved targeted items more than twice as quickly, with 40 per cent fewer moves.
The software that does the complicated reasoning for FuseBot may be built on any computer; it only needs to interact with a robotic arm that has a camera and an antenna.
The researchers hope that more intricate models, to be added soon, will improve FuseBot’s performance with deformable objects.
Advances in LED technology as a light source have made it possible to design lighting based on how it appears to the human eye. What was once only theory can now be validated in practice with a solid-state light source.
“To test lamps and luminaires, two tools called integrating-sphere and type C goniophotometer are used,” says Dr Revantino, ST, MT, Sub Coordinator of Electricity and Battery, Ministry of Industry of the Republic of Indonesia, and an alumnus of Bandung Institute of Technology (ITB).
Changing the drive strength of the individual colour chips in an LED light, known as “tuning,” changes the spectrum of the light. This gives users the freedom to choose the colour of the light to their taste, but it also changes how the colours of illuminated objects appear.
The Physical Engineering Vocational Body of the Indonesian Engineers Association (BKTF-PII) held a sharing session on the topic “Spectral-Based Lighting” to talk about the progress of LED technology.
If the spectral reflection properties of an object are known, the spectral interaction method can be used to simulate how the colour of an object will look under a certain LED light spectrum. This is useful in lighting situations where the colour of the object needs to be brought out so that it looks right. This is a step toward the Lighting 4.0 era, which is all about putting people at the centre of lighting.
Solid-state lighting is one type of lighting based on the colour of light. It is a kind of lamp that gets its light from light-emitting diodes, organic light-emitting diodes or polymer light-emitting diodes. It is called “solid-state” because it contains no gas or electric filament like most lamps. Solid-state lamps have a lot going for them: economically, their flexibility opens the door to validating lighting theory in practice, and over the last ten years the technology has grown very quickly.
To support Industry 4.0, technology is also improving in the lighting industry. This is shown by the emergence of spectral-based lighting, part of the Lighting 4.0 innovation: technology as an enabler, flexibility in light control, and human-centred lighting.
Data Engineering in Industry 4.0
“Data is the new oil” has become a popular saying in the world of technology. Data must be used to make decisions and predictions about the future of a company, or even an industry. Accordingly, many data-related disciplines and jobs are growing quickly, from Data Scientist to Data Analyst to Data Engineer.
Data Engineering is a subfield of software engineering that focuses on building data systems and infrastructure. Where data science focuses on how to quickly and correctly reach the data needed, data engineers are usually in charge of building the systems and infrastructure that handle large amounts of data so that data scientists can access it.
Data engineers play a role in the industrial world, helping solve the problems that arise there. Digitalising an industry proceeds in steps: system integration, visibility of operations, data analysis and operational optimisation.
Several real-world solutions have been created, spanning the different stages of this flow and the digital components that shape the digitalisation of industry: operational intelligence, operator assistance, digital maintenance, energy management, condition monitoring, a training portal, quality management and inventory management. These solutions, already in place, can help users learn to apply Industry 4.0 empowerment technology effectively in Indonesia.
With the rapid advancement of global technology development, the Hong Kong Applied Science and Technology Research Institute (ASTRI) is engaging more enterprises in the cooperation and common development of “industry, academia and research.” ASTRI has thus launched the “IPs and Service Offerings for Technology Start-ups and SMEs,” selecting 20 innovative technology companies across varying categories of entry services, including 8 hardware, 6 software and 6 consulting service companies, with entry prices of HK$50,000 to HK$150,000.
ASTRI focuses on transferring technology to the industry, transforming it into commodities, developing high-quality and affordable patents, information and communication technologies, and creating important and far-reaching influence. In cooperation with research institutions, enterprises and academia, ASTRI researches important technologies that the industry pays attention to, and assists enterprises to enhance their competitiveness.
The relevant scientific research projects selected have a wide range of content, mainly to solve company evaluation, technology and network security issues, writing, electronic technology and electricity issues and more. Private institutions in Hong Kong can contact relevant professionals and engineers at ASTRI for assistance and enquiries.
Hong Kong’s scientific research has developed over many years. Yet many start-ups, and even small and medium-sized enterprises long rooted in Hong Kong and striving to advance technologically, have had to pay high fees for solutions to technical problems.
Until now, no platform offered them cost-effective solutions, and their business needs went largely unaddressed. The support provided through the “IPs and Service Offerings for Technology Start-ups and SMEs” is therefore tailored to the needs of enterprises and is expected to help the industry overcome these difficulties.
Since its establishment 22 years ago, ASTRI has provided innovative software, hardware and technical support to various government departments, public organizations and many private enterprises in Hong Kong, smoothing and supporting their development. As technology is industrialised and industries become smarter, a new era of competition in Hong Kong will emerge.
Examples of selection options:
Cybersecurity awareness and benchmarking assessment
General cybersecurity awareness training for users of any skill level, from general IT users and technical employees to management-level IT professionals, aimed at reducing cyber risk at the human level. The offering also includes a general cybersecurity assessment and brief benchmarking covering web applications, mobile applications, networks, security architecture and cloud infrastructure, so that SMEs gain a comprehensive understanding of their cyber-defence maturity.
ESG compliance analytics
An industry-specific (e.g., financial, energy) ESG benchmarking report that lists the average or distribution of listed companies across different ESG metrics, as well as the top performers in each metric or category. Based on the SME’s own ESG metrics, the report also covers its performance and standing relative to peer companies, along with improvement suggestions. The analytics help SMEs generate reports automatically after filling in the minimum required information.
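The core of such a benchmarking report is comparing one company's metric against a peer distribution. The sketch below is illustrative only, not ASTRI's actual analytics; the metric, figures and function names are hypothetical.

```python
from statistics import mean

def benchmark(sme_value, peer_values, higher_is_better=True):
    """Return the SME's percentile rank against peers for one ESG metric.

    Counts how many peers the SME outperforms, then expresses that as a
    percentage of the peer group.
    """
    if higher_is_better:
        beaten = sum(1 for v in peer_values if v < sme_value)
    else:
        beaten = sum(1 for v in peer_values if v > sme_value)
    return 100.0 * beaten / len(peer_values)

# Hypothetical carbon-intensity figures (tCO2e per HK$1m revenue; lower is better)
peers = [42.0, 55.5, 38.2, 61.0, 47.3, 50.1]
sme = 45.0
pct = benchmark(sme, peers, higher_is_better=False)
print(f"Peer average: {mean(peers):.1f}, SME percentile: {pct:.0f}th")
```

A real report would repeat this per metric and per industry segment, then attach improvement suggestions to the weakest percentiles.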
Mixed language speech recognition and audio indexing
Using client-supplied audio recordings as training data, ASTRI helps train a preliminary mixed-language model supporting Cantonese, English and Mandarin for applications in specific industry domains such as insurance, media, telecom, banking and/or KOL content.
Other items include:
- Speech recognition & audio indexing
- Financial document analysis
- Smart OCR & document processing
- Behaviour and emotion analysis for driving safety
- Smart indoor and outdoor Geographic Information System
- IoT technologies and device communications
- Retired battery screening solutions
- Eco-friendly power system
- Safe energy storage solutions
- Analog IC design for medical devices
- 3D Integration power electronics modules
- ASTRI AR Glass
- Wearable technologies
- Gantry Free Electronic Road Pricing
- ESD protection design consultancy
- Digital document processing
- DC solutions for energy saving and protection
Researchers have developed a reusable, recyclable, washable, odourless, non-allergenic, anti-microbial N95 mask using 3D-printing technology. The multi-layer mask has a shelf life of more than five years, depending on use. Its outer layer is made of silicone.
Apart from its well-known use in preventing infections like COVID-19, the mask can also be worn in industries where workers are exposed to high volumes of dust, such as cement or cotton factories, brick kilns, and paint manufacturing. It can be modified to requirements by changing the filter configuration. As per a government press release, the mask can help prevent severe lung diseases such as silicosis. A trademark and a patent have also been filed for the mask, called Nano Breath.
The mask consists of a four-layer filtration mechanism. The outer (first) layer of the filter is coated with nanoparticles, the second layer is a high-efficiency particulate air (HEPA) filter, the third layer is a 100 µm filter, and the fourth layer is a moisture-absorbent filter.
A Zetasizer Nano ZS, a facility supported by the government’s Fund for Improvement of Science & Technology Infrastructure (FIST) project, was used to carry out the work. It is a high-performance, versatile system for measuring particle size, zeta potential, molecular weight, particle mobility, and micro-rheology.
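Dynamic light scattering instruments of this kind infer particle size from the measured diffusion coefficient via the Stokes-Einstein relation, d = kT / (3πηD). A minimal sketch of that conversion, with an illustrative diffusion coefficient (not a measurement from this work):

```python
import math

def hydrodynamic_diameter(D, T=298.15, eta=0.00089):
    """Stokes-Einstein relation: d = kT / (3*pi*eta*D).

    D   -- translational diffusion coefficient (m^2/s)
    T   -- absolute temperature (K), default 25 deg C
    eta -- dynamic viscosity of the dispersant (Pa*s), default water at 25 deg C
    Returns the hydrodynamic diameter in metres.
    """
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * T / (3 * math.pi * eta * D)

# An illustrative diffusion coefficient of 4.4e-12 m^2/s corresponds to a
# particle roughly 110 nm across in water at 25 deg C.
d = hydrodynamic_diameter(4.4e-12)
print(f"{d * 1e9:.0f} nm")
```

Smaller particles diffuse faster, so larger D maps to smaller reported diameter; the instrument extracts D from fluctuations in scattered-light intensity.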

Technology has played a significant role in the fight against the COVID-19 pandemic over the past two years, and Indian institutes have invested resources in developing tech-enabled solutions for the new normal. Earlier this year, researchers from the Indian Institute of Technology Jodhpur (IIT Jodhpur) created an artificial intelligence (AI) model that can detect COVID-19 by examining patients’ chest X-rays. The team proposed a deep learning-based algorithm called COMiT-Net, which learns the abnormalities present in chest X-ray images to differentiate between affected and non-affected lungs. It can also identify infected regions of the lungs.
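COMiT-Net's actual architecture is not reproduced here, but the building blocks such a classifier rests on, convolution to extract local features, a nonlinearity, pooling, and a probability output, can be shown in a toy NumPy sketch with random (untrained) weights; every name and value below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid 2-D convolution of a grayscale image with a single kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def predict(img, kernel, weight, bias):
    """Feature map -> ReLU -> global average pool -> sigmoid probability."""
    feat = np.maximum(conv2d(img, kernel), 0.0)  # ReLU keeps activated features
    pooled = feat.mean()                         # global average pooling
    logit = weight * pooled + bias
    return 1.0 / (1.0 + np.exp(-logit))          # probability of "affected lung"

xray = rng.random((64, 64))           # stand-in for a normalized chest X-ray
kernel = rng.standard_normal((3, 3))  # one untrained 3x3 filter
p = predict(xray, kernel, weight=1.5, bias=-0.2)
print(f"P(affected) = {p:.2f}")
```

A trained network like COMiT-Net stacks many such layers and learns the kernels from labelled X-rays; localising infected regions additionally requires the network to output per-pixel or per-region scores rather than one probability.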
In March, Bengaluru-based scientists from the Centre for Nano and Soft Matter Sciences (CeNS) and the Jawaharlal Nehru Centre for Advanced Scientific Research (JNCASR) developed an affordable printing technique for producing low-cost touch-cum-proximity sensors, popularly called touchless touch sensors. The scientists set up a semi-automated production plant to produce printing-aided patterned transparent electrodes with a resolution of around 300 micrometres. The technique has the potential to be utilised in advanced touchless screen technologies, such as self-service kiosks, ATMs, and vending machines.
As OpenGov Asia reported, the team fabricated a touch sensor that can sense a proximal or hover touch even from a distance of 9 centimetres from the device. The team also announced they would build several more prototypes using their patterned electrodes to prove their feasibility for other smart electronic applications. Industry players, research institutions and labs can access the technology on request and through collaborative projects. The patterned transparent electrodes could be used in advanced smart electronic devices like touchless screens and sensors.