Users need to address these trends if they are to leverage them for organizational advantage by operating, competing, and innovating based on data.
The problem holding them back is change management: CIOs, CDOs, and architects know they need to move some of their data operations to the cloud, but they fear the risks, costs, and potential disruption to daily business. Users facing these changes need a primer on cloud data integration to get them started.
This report is that primer. It offers several recommendations about how to conceptualize cloud data integration, what capabilities to look for, and how to align cloud data integration efforts with business goals and requirements.
Ramesh Chand, a member of India’s public policy think tank, the National Institution for Transforming India (NITI Aayog), formally introduced the Unified Portal for Agricultural Statistics (UPAg Portal). This marks a significant step in tackling the complex governance issues in India’s agricultural sector. The portal is designed to streamline and elevate data management within the agricultural sphere, contributing to a more efficient and responsive agricultural policy framework.
The portal standardises data related to prices, production, area, yield, and trade, consolidating it in a single location. This eliminates the necessity to compile data from multiple sources. The portal can also conduct advanced analytics, providing insights into production trends, trade correlations, and consumption patterns.
Furthermore, the portal will produce granular production estimates with increased frequency, improving the government’s capacity to respond swiftly to agricultural crises. Commodity profile reports will be generated using algorithms, reducing subjectivity and providing users with comprehensive insights. Users also have the flexibility to use the portal’s data for crafting their own reports, fostering a culture of data-driven decision-making.
The portal was developed by the Department of Agriculture and Farmers’ Welfare (DA&FW). During his speech, Chand hailed the platform as an investment and a monumental leap forward in the field of agricultural data management. He encouraged the audience to embrace a shift in mindset within agriculture, aimed at bringing about transformative changes. Research suggests that US$ 1 invested in data generated a US$ 32 impact, he said.
The portal empowers stakeholders with real-time, reliable, and standardised information, laying the foundation for more effective agricultural policies. He also asserted that when data is more objective, the room for subjective judgment in policy-making diminishes, resulting in more stable, transparent, and well-informed decisions. He advised that the portal should prioritise data credibility to maximise its effectiveness.
Secretary of the DA&FW, Manoj Ahuja, underscored the various ongoing initiatives by the department, such as the Krishi Decision Support System, the farmer registry, and crop surveys. He articulated that the UPAg Portal is envisioned as a public good, aiming to provide users with reduced search costs, minimised obstacles, and access to trustworthy, detailed, and impartial data. According to a press release, the UPAg portal tackles the following challenges:
Lack of Standardised Data: At present, agricultural data is scattered across multiple sources, often presented in diverse formats and units. The UPAg Portal’s objective is to centralise this data into a standardised format, making it easily accessible and understandable for users.
Lack of Verified Data: Reliable data is crucial for accurate policy decisions. UPAg Portal ensures that data from sources like Agmarknet is vetted and updated regularly, ensuring policymakers receive accurate information on agricultural prices.
Fragmented Data Sources: To construct a comprehensive understanding of any crop, it is necessary to consider multiple variables such as production, trade, and prices. The portal consolidates data from various sources, enabling a holistic assessment of agricultural commodities.
Inconsistent Update Frequencies: Data from different sources is updated at different times, causing delays and inefficiencies. The portal offers real-time connectivity with data sources, reducing the time and effort required for monitoring and analysis.
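The portal’s core standardisation step, mapping source-specific formats and units onto one common schema, can be illustrated with a small Python sketch. The field names, source labels, and unit factors below are hypothetical, not the UPAg Portal’s actual design:

```python
# Hypothetical sketch: normalising crop records from two sources into one schema.
# Source names, field names, and unit factors are illustrative only.

UNIT_TO_TONNES = {"tonnes": 1.0, "quintals": 0.1, "kg": 0.001}

def standardise(record: dict, source: str) -> dict:
    """Map a source-specific record onto a common schema with common units."""
    if source == "agency_a":   # hypothetical source reporting production in quintals
        qty = record["prod_qtl"] * UNIT_TO_TONNES["quintals"]
        return {"crop": record["crop"].lower(), "production_t": qty}
    if source == "agency_b":   # hypothetical source already reporting in tonnes
        return {"crop": record["commodity"].lower(), "production_t": record["tonnes"]}
    raise ValueError(f"unknown source: {source}")

# Once records share one schema, they can be consolidated in a single place.
records = [
    standardise({"crop": "Wheat", "prod_qtl": 500}, "agency_a"),
    standardise({"commodity": "wheat", "tonnes": 120.0}, "agency_b"),
]
total = sum(r["production_t"] for r in records)
```

Consolidating on a single schema like this is what removes the need for users to compile and reconcile figures from multiple sources themselves.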
The UPAg Portal is expected to play a pivotal role within the Digital Public Infrastructure for Agriculture, focusing on harnessing the diversity of the agriculture sector and leveraging data as a catalyst for growth.
The University of Wollongong (UOW) and the Indian Institute of Technology (IIT) Kanpur have joined forces in a pioneering effort funded by the Australia-India Cyber and Critical Technology Partnership. This grant, facilitated by the Department of Foreign Affairs and Trade (DFAT), is poised to advance the field of privacy in cloud computing, a critical domain in today’s rapidly evolving technological landscape.
Heading this ambitious project is Distinguished Professor Willy Susilo, an ARC Australian Laureate Fellow at UOW. His team, which includes experts such as Dr Khoa Nguyen, Dr Yannan Li (an ARC Discovery Early Career Researcher Award Fellow), and Dr Partha Sarathi Roy, will collaborate with Professor Manindra Agrawal from IIT Kanpur. Together, they are committed to exploring and developing practical cryptographic techniques that enhance privacy in cloud computing.
Cloud computing has become an indispensable component of contemporary life, permeating various aspects, from data storage and processing to document sharing and international collaboration. However, this ubiquity also introduces significant challenges when it comes to safeguarding sensitive information. While the art of digital cryptography offers essential tools for data protection in the cloud, conventional techniques may not be sufficiently equipped to address the unique privacy and security intricacies presented by modern cloud computing.
A key aspect of this initiative involves the standardisation of cryptographic techniques. Standardisation ensures that these methods are not only robust but also interoperable, capable of seamless integration across diverse cloud platforms, applications, and countries.
The primary objective of this project is to identify standardisation challenges within the realm of cloud computing privacy and security, both in Australia and India. Subsequently, these challenges will be met head-on with innovative privacy-enhancing cryptographic methodologies. The research team will evaluate the effectiveness of existing technologies, measuring their real-world impact, and validating whether they achieve the desired standardisation levels.
Professor Susilo underscores the strategic importance of this endeavour, stating that it will position UOW firmly within India through collaborative research efforts. UOW’s Institute of Cybersecurity and Cryptology has consistently been at the forefront of cybersecurity research in Australia. This project will leverage its strategic position and foster a robust partnership with its Indian counterpart, IIT Kanpur. Furthermore, it has the potential to strengthen Australia-India collaboration in the realm of cybersecurity standardisation, promising a future marked by enhanced digital security and privacy.
In addition to its academic partnerships, the team will leverage the support of industry giants. These corporate collaborations will provide invaluable resources and real-world insights to inform the development of privacy-enhancing cryptographic techniques. Simultaneously, the project will also engage with startups from UOW’s business incubator, iAccelerate, and Indian companies, fostering a collaborative ecosystem that spans the spectrum of technological innovation.
Recognising the complexity and multifaceted nature of privacy concerns in cloud computing, the project will also tap into the expertise of the UOW School of Law. Legal scholars will provide consultation and insights, ensuring that the research aligns seamlessly with privacy and legal standards, further enhancing its practicality and real-world applicability.
The significance of this collaborative effort transcends borders and disciplines. By addressing the critical challenges posed by cloud computing security and privacy, it not only advances the field of cryptography but also contributes to the broader goals of cybersecurity and technological innovation. As our reliance on cloud computing continues to grow, ensuring the privacy and security of sensitive data is paramount. This project promises to deliver tangible solutions that will not only benefit Australia and India but also have a ripple effect, influencing global standards in the field of cloud computing privacy and security.
The partnership between the University of Wollongong and the Indian Institute of Technology Kanpur, supported by the Australia-India Cyber and Critical Technology Partnership grant, represents a significant step forward in the quest to enhance privacy in cloud computing.
With a diverse team of experts, strong industry collaborations, and a commitment to standardisation, this initiative holds the promise of advancing both the academic and practical aspects of cybersecurity, contributing to a safer and more secure digital future for all.
GovTech’s Asynchronous Data Exchange (ADEX) has been recognised worldwide for enabling the safe transmission and reception of real-time, lightweight data for both Whole-of-Government (WOG) users and local businesses. ADEX sits alongside API Exchange (APEX) and Cloud File Transfer (CFT) as a key element of the Communications Pillar of the Singapore Government Tech Stack (SGTS).
ADEX was originally named the Sensor Data Exchange (SDX), which was later merged into the Smart Nation Sensor Platform (SNSP). It has since evolved to facilitate the sharing of both sensor and non-sensor data. In this context, non-sensor data can include processed sensor data, that is, raw sensor readings that have undergone processing and analysis.
ADEX makes it easier to find and share event data over the Internet and Intranet, including real-time status updates and event streams. Publishers have the freedom to choose who receives these updates, while subscribers receive them immediately when the events are published.
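The publish/subscribe model described above can be sketched minimally in Python. The class and method names here are illustrative; ADEX itself is a managed platform with its own interfaces:

```python
# Minimal in-process publish/subscribe sketch (illustrative only; not ADEX's API).
# Publishers choose a topic; subscribers on that topic receive events immediately.

from collections import defaultdict
from typing import Callable

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        """Register a handler to be invoked whenever an event is published on `topic`."""
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> int:
        """Deliver the event to every subscriber of the topic; return the delivery count."""
        handlers = self._subscribers.get(topic, [])
        for handler in handlers:
            handler(event)
        return len(handlers)

bus = EventBus()
received = []
bus.subscribe("env/rainfall", received.append)   # hypothetical topic name
bus.publish("env/rainfall", {"station": "S01", "mm": 12.5})
```

A real event exchange adds authentication, approval workflows, and durable delivery on top of this basic pattern, but the publisher/subscriber decoupling is the same.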
ADEX, deployed on the Government on Commercial Cloud (GCC), supports the sharing of data classified up to the “restricted” level, provided it is not sensitive. Through a self-service portal on the Internet and Intranet, it enables government agencies to publish and subscribe to real-time data.
The ADEX team has determined that the most frequent and pervasive security issues for an event streaming platform are insider threats and Distributed Denial-of-Service (DDoS) attacks. Insider threats arise when a user misuses the platform, whether deliberately or unintentionally.
The team put multi-factor authentication and role-based policies in place to reduce this risk. To guard against DDoS attacks disrupting the system, they have defined utilisation threshold limits and monitor them continually.
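Utilisation threshold limits of this kind are often enforced with a token-bucket rate limiter. A minimal sketch, with hypothetical parameters rather than ADEX’s actual configuration:

```python
# Illustrative token-bucket rate limiter for enforcing utilisation thresholds.
# Capacity and refill rate below are hypothetical, not ADEX's real settings.

class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)     # start full
        self.refill_per_sec = refill_per_sec
        self.last = 0.0                   # timestamp of the previous check

    def allow(self, now: float) -> bool:
        """Refill tokens for the elapsed time, then spend one token if available."""
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False      # request exceeds the threshold and is rejected

bucket = TokenBucket(capacity=3, refill_per_sec=1.0)
# A burst of 5 requests at t=0: only the first 3 pass the threshold.
results = [bucket.allow(0.0) for _ in range(5)]
```

A flood of requests drains the bucket and is throttled, while normal traffic is refilled fast enough never to notice the limit.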
To give users accurate control over data stream access, user profiles must be properly maintained.
The platform uses Transport Layer Security (TLS) channels for communication and is compatible with widely used data protocols.
The ADEX team highlights how important it is to examine the data itself, confirm its source, and understand where and how it is extracted, in addition to reviewing the metadata, such as descriptions and tags, provided by the data source. It is crucial to validate the data to ensure it is safe, and to stay watchful for problems when consuming it.
As with any other type of platform, it is crucial to use strong user passwords, multi-factor authentication, and reliable encryption for data in transit and at rest. Users must routinely update their software and systems with the most recent security patches to protect themselves from exploitable flaws. They should also follow accepted security procedures and implement as many security measures as their budgets allow.
The group also compared APEX and ADEX. They explained that APEX is an API platform created for centralised, secure access to government services, largely over HTTP REST.
ADEX, by contrast, is the government’s centralised event data exchange, giving government institutions more freedom in how they transmit real-time data. It makes real-time data from various sources easier to discover and subscribe to. Additionally, an approval procedure gives more precise control over data access by guaranteeing that only approved users may view data.
The team benefits from analytics: examining messaging usage numbers and patterns lets them continuously enhance the system’s functionality and better serve users. By building up knowledge of system performance and user needs, they can make informed decisions and gradually improve ADEX.
The National Environment Agency (NEA) can exchange vital environmental information with stakeholders through ADEX, which is now functioning and serving the needs of numerous government entities. By making updated environmental data like rainfall, temperature, wind speed, and wind direction available to the public, NEA can increase stakeholder participation and information distribution.
It takes persistence and time to create and foster a collaborative culture within government organisations. Although agencies are experts in their fields, it’s possible that they are not aware of the ways in which their data might benefit other agencies.
The team is hopeful that agencies will be open to sharing their data on ADEX, enabling other agencies to independently scout the market and find new prospects. Public organisations may improve their overall operations and service delivery, make better judgements, and respond to emergencies more skillfully with real-time access to reliable data.
Artificial intelligence (AI) is transforming industries at an astounding rate in the era of large-scale model technology advancements. Data has become a key asset as the foundation for AI development. A lack of high-quality training data, poor data governance, and ineffective data supply and demand processes are among the issues China’s AI industry faces. These difficulties limit the development and innovation of generative AI in the country.
The China Artificial Intelligence Industry Development Alliance (AIIA) has taken a proactive step to address the AI data shortage by forming a “Data Committee”, with an inauguration ceremony scheduled for mid-October 2023.
Once established, the committee intends to work closely with several organisations, including the China Communications Standards Association Big Data Technology Standards Promotion Committee (CCSA TC601) and the Key Laboratory of Artificial Intelligence Key Technology and Application Evaluation of the Ministry of Industry and Information Technology. Together they hope to further industry research, standardisation, technology use, and related efforts.
The group will serve as a catalyst for cooperation among many stakeholders, including data resource suppliers, data annotation specialists, and data consumers. Its main goal is to promote cooperation in collecting AI data resources, analysing demand, and processing data, and to enable frictionless exchanges between data suppliers and consumers. Through improved access to essential data resources, this cooperative initiative seeks to strengthen the AI data set sector.
The committee will take the lead in creating key elements for data production, augmentation, maintenance, labelling, governance, and synthesis, with a focus on cutting-edge technologies and tool platforms. Additionally, it will support emerging techniques such as data synthesis, encouraging collaborative research and promoting their practical application.
One of the committee’s crucial tasks is the creation of a thorough AI data governance standard system. It will cover the complete lifecycle of AI training data, from data collection and annotation to quality control and open sharing. The ultimate aim of enhancing training data quality is to ensure authenticity, correctness, diversity, and traceability.
The group will concentrate its efforts on sectors that rely on data substantially, including finance, retail, manufacturing, and education. Here, it will examine cutting-edge AI data application scenarios and encourage original methods for utilising data for AI-driven developments in various fields.
The committee will explore win-win business models while working closely with data owners, trainers, and processors. To produce policy recommendations suited to AI training data scenarios, it will document industry best practices and align its work with advances in the national data infrastructure.
The AIIA Data Committee cordially invites all team leader units, deputy leader units, and member units to participate in this revolutionary project. The initial group of founding units must register their participation as pioneers in the programme by September 30, 2023.
An important turning point has been reached in China’s quest to realise the full potential of artificial intelligence with the establishment of the AIIA Data Committee. By addressing significant data-related difficulties and encouraging collaboration among industry players, this project aims to accelerate AI development, boost innovation, and maintain China’s position as a worldwide leader in the field. Further updates can be expected as the committee sets off on its quest to influence the future of AI.
The use of data has become highly prevalent in various fields today. Data enables policymakers to make decisions objectively and efficiently. Similarly, in medicine, data simplifies the diagnosis of patients and the follow-up care based on patient history. Integrating data into healthcare will enhance personalised patient care and improve patient satisfaction.
Khon Kaen University recognises its crucial role in academic research and education. Recently, in collaboration with several departments, including the Faculty of Medicine, Faculty of Engineering, College of Computing, KKU Academy, KKU Library, and the Office of Teaching and Learning Innovation, along with a network of medical faculties from Ramathibodi Hospital, Mahidol University, clinical research partners, and a commercial bank in Thailand, the university organised a training programme on data comprehension, led by Professor Mengling ‘Mornin’ Feng.
According to Assoc Prof Sirapat Chiawchanwatana, PhD, who serves as the Dean of the College of Computing, Thailand’s healthcare system is considered a model for many nations. Thailand possesses valuable health data recorded within its healthcare service system and ongoing health research efforts. However, there are limitations regarding the data system’s comprehensiveness and utilisation to address healthcare system challenges. Therefore, this presents an opportunity to establish the Datathon Competition in Thailand.
“Khon Kaen University has previously worked together with Prof Mengling ‘Mornin’ Feng. This provides a valuable opportunity to establish a network of experts in health data and AI within the community, fostering collaboration in the implementation of Health AI initiatives for the maximum benefit of the population,” expressed Assoc Prof Sirapat Chiawchanwatana.
The efforts undertaken by Khon Kaen University represent a noteworthy contribution towards advancing and transforming the healthcare community within the academic sphere, fostering greater data literacy in Thailand. Additionally, embracing data integration in healthcare offers numerous advantages, and one of the most prominent ones is the precision it brings to the diagnostic process.
For instance, recent research conducted by a collaborative team comprising the National Metal and Materials Technology Centre (MTEC), the National Science and Technology Development Agency (NSTDA), the Department of Radiology at the Faculty of Medicine, Ramathibodi Hospital, and an engineering and technical service company has unveiled significant advancements in breast cancer detection and prediction.
Their groundbreaking achievement includes the development of a breast simulation platform meticulously designed to enhance the proficiency of medical professionals in conducting ultrasound-guided breast biopsies. The platform offers an immersive learning experience with realistic imaging data and needle procedures. It aims to empower healthcare practitioners to deliver more precise diagnoses while reducing their reliance on imported training equipment.
Its design emphasises reusability by erasing needle marks after each session, promoting sustainability and cost-effective training for medical professionals. It ensures repeated practice opportunities and reduces the need for excessive resources.
Integrating real-time feedback and adjustable difficulty levels further tailors the learning experience, accommodating learners at various stages of expertise. By bridging the gap between theoretical knowledge and hands-on proficiency, the breast simulation platform plays a pivotal role in shaping confident and competent practitioners in the field of breast diagnostics.
In light of the advancement of data utilisation in healthcare, the convergence of innovative solutions with the expertise of healthcare professionals will usher in a new era of medical excellence. This cutting-edge technology and the knowledge and skills of healthcare practitioners promise to redefine patient care and healthcare practices.
By embracing data integration, healthcare professionals can harness the power of data analytics and machine learning to enhance diagnostic precision and streamline patient care processes. Access to medical information has become more efficient, enabling quicker and more informed decision-making.
A recent study conducted by researchers at the University of South Australia (UniSA) has unveiled a spectrum of metabolic biomarkers that hold promise in predicting cancer risk. Employing advanced machine learning techniques to analyse data from 459,169 participants enrolled in the UK Biobank, the research identified 84 distinct features that could potentially serve as indicators of heightened cancer susceptibility.
Several of these identified markers were also associated with chronic kidney or liver diseases, raising intriguing questions about potential links between these ailments and cancer. Led by a team of experts including Dr Iqbal Madakkatel, Dr Amanda Lumsden, Dr Anwar Mulugeta, and Professor Elina Hyppönen from the University of South Australia, along with the University of Adelaide’s Professor Ian Olver, this groundbreaking study, titled “Hypothesis-free Discovery of Novel Cancer Predictors Using Machine Learning,” delved deep into the data.
Dr Madakkatel, one of the lead researchers, explained the methodology, stating that the team performed an exploratory analysis using artificial intelligence and statistical methods to pinpoint factors associated with cancer risk from a pool of more than 2,800 features.
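The idea of hypothesis-free screening, scoring thousands of candidate features against an outcome and keeping only the strongest, can be sketched as follows. The study’s actual pipeline combined machine learning with statistical follow-up; the data below is synthetic and the scoring function deliberately simple:

```python
# Toy sketch of hypothesis-free feature screening: score every candidate feature
# by its absolute correlation with the outcome and keep the top-ranked names.
# (The UniSA study used a more sophisticated ML pipeline; all data here is synthetic.)

import random
from statistics import mean, pstdev

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    sx, sy = pstdev(xs), pstdev(ys)
    if sx == 0 or sy == 0:
        return 0.0
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

def screen_features(feature_table, outcome, top_k=2):
    """Rank features by |correlation| with the outcome; return the top_k names."""
    scores = {name: abs(correlation(values, outcome))
              for name, values in feature_table.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

rng = random.Random(0)
outcome = [rng.choice([0, 1]) for _ in range(200)]      # synthetic binary outcome
features = {
    "signal":  [o + rng.gauss(0, 0.5) for o in outcome],  # tracks the outcome
    "noise_a": [rng.gauss(0, 1) for _ in outcome],        # pure noise
    "noise_b": [rng.gauss(0, 1) for _ in outcome],
}
top = screen_features(features, outcome, top_k=1)
```

With thousands of features, this kind of automated ranking surfaces candidate predictors without any prior hypothesis about which ones matter.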
The study’s outcomes were nothing short of remarkable, with over 40% of the features uncovered by the model proving to be biomarkers – biological molecules that can signify either sound health or underlying health issues, depending on their status. Significantly, some of these biomarkers exhibited dual associations, being linked not only to cancer risk but also to kidney or liver diseases.
Dr Lumsden elaborated on the implications of these findings, noting that the study offers valuable insights into potential mechanisms contributing to cancer risk. She stated that, after age, the most significant indicator of cancer risk was elevated levels of urinary microalbumin. Albumin is a vital serum protein essential for tissue repair and growth, but when it appears in urine (microalbumin), it serves as an indicator not only of kidney disease but also of an increased risk of cancer.
The study also identified other indicators of compromised kidney function, such as elevated blood levels of cystatin C, increased urinary creatinine (a waste product eliminated by the kidneys), and an overall reduction in total serum protein, all of which were linked to cancer risk.
Moreover, the research discovered a connection between an elevated red cell distribution width (RDW) – an indicator of the variation in the size of red blood cells – and an increased risk of cancer. Typically, red blood cells are relatively uniform in size, and deviations from this norm can signify higher inflammation and poorer renal function, as well as a heightened risk of cancer.
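RDW is commonly reported as a coefficient of variation: the standard deviation of red-cell volume divided by the mean cell volume, expressed as a percentage. A small sketch with illustrative values (not patient data):

```python
# RDW (coefficient-of-variation form): spread of red-cell volumes relative to
# their mean, in percent. The cell-volume lists below are illustrative only.

from statistics import mean, pstdev

def rdw_cv(cell_volumes_fl):
    """Red cell distribution width (CV form), in percent."""
    return 100.0 * pstdev(cell_volumes_fl) / mean(cell_volumes_fl)

uniform_cells = [88, 90, 92, 90, 90]     # tightly clustered volumes (fL)
varied_cells = [70, 95, 110, 80, 100]    # widely spread volumes (anisocytosis)

# A wider spread of cell sizes yields a higher RDW.
low, high = rdw_cv(uniform_cells), rdw_cv(varied_cells)
```

The study’s observation is that the second pattern, an unusually wide spread, is the one associated with inflammation, poorer renal function, and elevated cancer risk.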
In addition to these findings, the study highlighted that elevated levels of C-reactive protein, a marker of systemic inflammation, were associated with an increased risk of cancer, along with high levels of the enzyme gamma glutamyl transferase (GGT), a biomarker indicative of liver stress.
Professor Elina Hyppönen, the chief investigator and the director of the Australian Centre for Precision Health at UniSA, emphasised the strength of this study, which lies in the power of machine learning. Professor Hyppönen noted that the model, powered by artificial intelligence, has showcased its capability to assimilate and cross-reference a multitude of characteristics, thus uncovering pertinent risk factors that could otherwise remain hidden.
An intriguing aspect of this study was that, despite considering thousands of features, spanning clinical, behavioural, and social factors, a significant proportion were biomarkers reflecting the metabolic state before a cancer diagnosis. While the findings offer promise, Professor Hyppönen stressed the need for further research to confirm causality and clinical relevance.
The implications of this research are profound. With relatively simple blood tests, it might be possible to gain insights into one’s future risk of cancer. This potential early detection could enable proactive measures to be taken at a stage when cancer might still be preventable. The significance of these findings underscores the importance of ongoing research in the field of cancer risk prediction.
Polymers, macromolecules central to materials science and engineering, play a pervasive role in daily life. Their adaptability allows them to be customised for specific, desirable properties, such as flexibility, water resistance, or electrical conductivity.
This versatility finds application in an array of products we encounter, from the nonstick cookware that simplifies our culinary adventures to the construction materials that shape our built environment, where polymers like polytetrafluoroethylene and polyvinyl chloride prominently feature.
However, polymers have their challenges, particularly in identifying the ideal combinations of materials to engineer the most effective and tailored polymers. The space of potential material combinations is virtually boundless.
Fortunately, the landscape is evolving with the advent of cutting-edge technology. Machine learning has the potential to transform how scientists and manufacturers navigate the sprawling chemical space, allowing them to pinpoint and craft these pivotal polymers with greater precision and efficiency than ever before.
Engineer Rampi Ramprasad conceived and directed the project. The new tool, named polyBERT, aims to address the challenges of exploring the vast chemical space of polymers. It has undergone extensive training on a comprehensive dataset of 80 million polymer chemical structures and, as a result, has become proficient in deciphering the intricate language of polymers.
Ramprasad stated, “This represents a pioneering application of language models in the field of polymer informatics. While natural language models are commonly employed to extract materials data from literature, our objective here is to apply such capabilities to comprehend the intricate grammar and syntax governing the assembly of atoms in polymer formation.”
polyBERT approaches chemical structures and atomic connections as a specialised form of chemical communication, utilising cutting-edge techniques inspired by natural language processing to glean the most significant insights from these structures. Employing a Transformer architecture like that found in natural language models, it excels at capturing intricate patterns and relationships while mastering the grammar and syntax that govern atomic arrangements within polymer structures and beyond.
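The “chemical language” idea can be illustrated with a toy tokeniser that splits a SMILES-like polymer string into atom and bond symbols, much as a language model splits text into tokens. polyBERT’s real tokeniser and vocabulary are far more sophisticated; the token set below is a hypothetical fragment:

```python
# Toy sketch: tokenise a SMILES-like polymer string into symbols, then map the
# symbols to integer ids as a language model would. Illustrative token set only.

import re

# Two-character element symbols must be matched before single characters.
TOKEN_PATTERN = re.compile(r"Cl|Br|Si|[BCNOFPSI]|[cnos]|[=#/\\()\[\]*0-9]")

def tokenize(smiles: str):
    """Split a SMILES-like string into atom/bond/ring tokens."""
    return TOKEN_PATTERN.findall(smiles)

# "[*]" marks the repeat-unit endpoints in polymer SMILES notation.
tokens = tokenize("[*]CC([*])C")   # a polypropylene-like repeat unit

# Build a tiny vocabulary and encode the sequence as integer ids,
# the form a Transformer's embedding layer consumes.
vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
input_ids = [vocab[t] for t in tokens]
```

Once polymer structures are integer sequences like this, the same Transformer machinery that models word order in sentences can model atomic arrangement in repeat units.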
An impressive attribute of polyBERT is its speed: it outpaces traditional methodologies by more than two orders of magnitude in processing velocity. This rapid processing positions polyBERT as an invaluable tool within high-throughput polymer informatics pipelines, enabling swift and efficient screening of extensive polymer landscapes and offering researchers a powerful means to explore and analyse vast datasets. This speed and efficiency hold the potential to significantly accelerate advancements in polymer research and development.
The researchers anticipate that the computation time required for polyBERT fingerprints will see further enhancements due to advancements in graphics processing unit (GPU) technology.
Debora Rodrigues, a programme director within NSF’s Directorate for Technology, Innovation, and Partnerships, explained that the researchers are developing a novel artificial intelligence tool designed to address the challenge of identifying the most effective polymer combinations. The tool is trained on an extensive dataset of 80 million polymer chemical structures, enabling the swift screening of diverse polymers without labour-intensive laboratory experiments.