
Projects keep failing, so what’s the problem?
Projects are about delivering an outcome that fixes a business need. Others suggest projects exist to take advantage of an opportunity, and those opportunities are usually about fixing a perceived problem. Either way, a project is still needed to implement the solution, and that project still needs to be justified.
Projects fail for many reasons, but building on shaky foundations will almost always end in failure. The foundation of any project is a well-defined and clearly explained problem. Problems need to be clearly identified, with a statement of how the proposed solution will fix them and a value proposition for the organisation, the customer, or both: a quantifiable, demonstrable benefit. Most businesses exist to make money or provide better services; if a project delivers neither, why would you be doing it?
Projects need to argue the logic for investing both time and money and, more importantly, what the payback will be. Business is about making money, providing a service, or both. Funds are usually limited, and there is rarely a lack of opportunities to spend them on the many challenges facing organisations.
What is the Business Case for this project?
First, an organisation wants to understand the why of a project; close behind come the how, and the how much. There needs to be a compelling reason for carrying out the project. Often, projects put the cart before the horse: they start with a new or updated product that promises to future-proof the organisation, addressing many perceived issues through the many new features on offer, all presented with sound-seeming arguments. In a world of unlimited resources and funds that would not be a problem, but that is not the case. Money and resources are a factor for every business, and they are never limitless.
Projects that fail are usually proposed with the best of intentions, and the arguments for the new features all sound good. The biggest issue is that no one hears the same benefits, so different stakeholders end up with different expectations. As the project progresses, it becomes a feature fest: more is better, right? But time elapses, costs increase, and ill-defined expectations leave no one happy. Time and money run out, results are not achieved, and the project grinds to a halt. Does this sound familiar?
Questions are raised
Why are these projects failing? What went wrong? We had all the governance in place and it seemed to be working fine, then it all went south. I just don’t understand what happened.
The problem is that no real problem was being addressed, or that the problem was perceived rather than correctly identified. Problems form the foundation of a business case; they need to be clearly identified, quantified, and expressed in a manner all parties can agree on. A business case needs to clearly explain the problem it intends to address, identify the consequences of not addressing it, and describe the benefits that would result from fixing it and, more importantly, how those benefits will be proved.
When defining the problems and proposed benefits, there needs to be an understanding of the following:
- Why invest? – Describe how this investment will benefit the organisation.
- The rationale – What is the logic for this investment, and how will it be tested and proved that it has delivered the expected results?
- Who feels that this is a problem? – Gather views from all appropriate stakeholders within the organisation through discussion with the subject matter experts.
Defining the problem
Often, what is perceived as a problem is not the real problem. To find the real problem you will need to carry out root cause analysis. A good example is the technique commonly referred to as the “5 Whys”: an iterative, interrogative technique used to explore the cause-and-effect relationships underlying a problem.
What sort of questions should you be asking?
A famous quote attributed to Einstein:
“If
I had an hour to solve a problem and my life depended on the solution, I would
spend the first 55 minutes determining the proper question to ask…
for once I know the proper question, I could solve the problem in less than
five minutes.”
What stakeholders say is a problem does not necessarily reflect the root cause that created it. Every stakeholder will potentially have different issues that they consider a problem. Your job is to dig and find the root cause. You need to identify both the cause and the consequence of any issue raised as a potential problem. A simple test is to ask “so what?” – similar to the “5 Whys”, which is covered a little later.
Is there any evidence that confirms the cause and effect of the identified problem? What is the priority – does it need to be addressed now, or could it wait? Is the issue specific to what you are looking at, or should the perspective be broader?
An example of the 5 Whys
“.. The finance director could not understand why maintenance costs were increasing on the factory floor. He had sent a directive to the department to cut costs, and decided to venture down to the factory floor to speak with the manager and better understand these increases. (His perceived problem.)
.. As the finance director was walking through the factory, he noticed a pool of water on the floor. He called over a maintenance staff member to inquire about the water.
- Why is there a pool of water here on the floor? The staff member pointed out that one of the pipes above was faulty and leaking. (Maintenance’s perceived problem.) The director then asked for the manager:
- Why is that pipe leaking? The manager pointed out that the replacement washer had not sealed properly. Again, the director asked:
- Why did the washer not seal properly? The manager suggested the washer had possibly failed. The director then asked:
- Why did it fail? The manager suggested the washers were cheap and tended not to last long. Again, the director asked:
- Why were we using cheap washers? The manager replied that he was following the budget directive to cut maintenance costs, and had sourced cheaper alternatives because the previous washers were too expensive.
The director had found the root cause. In this case there were several perceived problems: the director had a problem with his maintenance costs, the staff member had a faulty pipe, and the manager had issues with cheap washers. At first, replacing the pipe might have appeared to fix the problem; but as the pipe was not the root cause, that would have been an expensive fix, and the pipe would likely have leaked again because of the washer. The root cause of the leaking pipe was the use of a cheaper alternative, and it highlighted that the increase in maintenance costs had indirectly been caused by a cost-cutting directive.”
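The chain of questions above can be sketched as a simple data structure. This is an illustrative Python sketch, not part of any formal method; the wording of the answers is paraphrased from the example.

```python
# A sketch of the "5 Whys" chain from the factory example above.
# Each entry pairs a "why" question with the answer uncovered at that step;
# the exact wording is paraphrased for illustration.

five_whys = [
    ("Why is there a pool of water on the floor?",
     "A pipe above is faulty and leaking."),
    ("Why is the pipe leaking?",
     "The replacement washer did not seal properly."),
    ("Why did the washer not seal?",
     "The washer failed."),
    ("Why did the washer fail?",
     "Cheap washers were used, and they do not last long."),
    ("Why were cheap washers used?",
     "A budget directive forced maintenance cost cuts."),
]

def root_cause(chain):
    """The answer to the final 'why' is treated as the root cause."""
    return chain[-1][1]

print(root_cause(five_whys))  # → A budget directive forced maintenance cost cuts.
```

The point of writing it down like this is that each answer must supply the subject of the next question; if it doesn’t, you have jumped a step.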
To define a problem, you will need to consider the downstream effects of what you and other stakeholders consider to be the problem, and what it means to your organisation. There are two parts to a problem: what caused it, and what are its consequences? Understanding the causes will help you choose how to respond, while the consequences will help you identify the relevant benefits. These benefits show how the investment works towards your objectives, and those objectives will later provide an opportunity to identify alternatives.
Problems that are not well-defined make life harder for decision makers, reducing the chance of success. The result is projects that are less than fit for purpose: either too little in the way of funds and resources, or too many directed at low priorities. The worst case is a lack of resources to solve a major challenge.
Success comes from clearly understanding the problem and its benefits from the beginning. This puts everyone on the same page, agreeing to the same expectations and results, aligning those results with the organisation’s priorities, and effectively addressing the right problem. A well-understood problem will often also highlight opportunities for better results.
Explaining the problem
This is the elevator pitch: if you were trapped in the lift with the key stakeholder who approves the funds you need, how would you relate the issue in 90 seconds? That pitch needs to clearly identify the issue, provide the evidence that supports your statement, and present the solution with a definable measure of success.
Problems that are ill-defined can result in benefits that do not align, undermining your entire argument for the case. Businesses want to understand how much of a problem it really is; the goal is a call to action. The pitch should present both cause and consequence, with the answers to each “Why?” logically linked. A great starting point is identifying the consequences of doing nothing.
Your pitch will never be perfect; it will potentially change as more information is gathered. It will be tested against evidence and morph from its original state, so be prepared for change. The challenge is to go into this exercise without any preconceived solutions. As further evidence is presented it will develop your understanding, leading to a better result and a stronger foundation on which to build your case.
Mistakes in identifying problems
Many people go into identifying problems already sure of the solution, especially when it comes to technology. In the technology space, providers and IT specialists believe their solutions will provide the answers to any problem; it’s just a matter of shoe-horning the problems into the solution.
Avoid simply identifying the problem as a system failure; this tends to drive results that do not align with the facts and the issue. Again, go back to “so what?”. What evidence gives you confidence that a problem exists? You must present that evidence to explain your rationale.
Always note where you found the evidence as you develop your pitch; it is always harder to retrofit a problem with evidence later. One of the best tests I have used is the “Mum Test”: find someone unrelated to the case to read the pitch and benefits, and ask them, “Does this make sense?”. When I was an interface designer, I would ask my mother to carry out a specific task using that interface. With no instructions, I would watch what she did to achieve the result. The idea is to remove the element of assumption: we don’t always know who the audience will be, so we need to make sure the case is clear without anyone having to be there to explain it.
Benefits, what are they?
When you understand the problem and its consequences, most people will understand the benefit of doing something about it. A benefit is a measurable improvement that shows the value gained. The consequences of a problem help to identify the relevant benefits that lead to your objective.
Benefits should clearly align with the problem and link to the results your organisation is looking to achieve. Explain the impact credited to the solution, and justify the cost in both money and effort with demonstrable returns.
Measuring those returns
The best way to show a return is with a measure based on current and future states. Everyone will have a different measure of value, so there need to be some more good questions.
This is the old “WIIFM” (What’s In It For Me?). How are you going to show the value you are declaring?
- What will be the return to the organisation or its customers?
- How will you measure and prove that benefit?
- How will you show the connection between the benefit and the results?
These are just a few points to consider when defining and showing benefits in a project. These measures should be defined with your stakeholders, as they are the people who will confirm the return on investment (ROI). They need to be identifiable, measurable and provable.
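As a minimal sketch of what a current-state versus future-state measure might look like, the following compares hypothetical annual costs before and after an investment. All figures are invented placeholders, not taken from any real case.

```python
# A hedged sketch of a current-state vs future-state return measure.
# All figures are hypothetical placeholders, not from any real case.

def roi(current_annual_cost, future_annual_cost, investment, years):
    """Return on investment as a fraction: (total savings - investment) / investment."""
    savings = (current_annual_cost - future_annual_cost) * years
    return (savings - investment) / investment

# e.g. cutting annual costs from 500k to 350k with a 300k investment, over 3 years:
print(roi(500_000, 350_000, 300_000, 3))  # → 0.5, i.e. a 50% return
```

The useful part is not the arithmetic but the discipline: each input has to be agreed with stakeholders before the project starts, or the return can never be proved afterwards.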
Prioritisation
As you define your problems and benefits, you need to prioritise each of them. This is not an exercise in deciding how much investment is directed at fixing each problem; it is about enabling better decisions between the available alternatives, making sure you get the best bang for your buck. This will let you focus and direct both funds and effort in future and, more importantly, control scope.
These priorities will enable better, more directed decisions when you do not get all the funds you expect. A small problem with significant results for an organisation or its customers, compared to a large problem with limited impact, signals where to invest. But it also raises questions about the larger problem: has it been clearly identified, and are its consequences fully understood?
All of these are good questions and will need further examination. This is not an exact science, but it is a major step in the right direction.
What tool can help with this process?
A technique used to ensure robust discussion and thinking are carried out up-front in a project is Investment Logic Mapping (ILM). It is a great tool to use before a solution is identified and before any investment decision is made.
ILM provides a way of identifying the problems that need to be addressed, the benefits hoped for and, more importantly, how the project will confirm the rationale by showing the realisation of those benefits. The ILM tool is designed for complex investments but is recommended for any project, and it lets you communicate that information on a single page.
Should you use ILM?
Many organisations trigger this process based on the size of the investment. It is not compulsory, but it is recommended, especially for complex, high-risk or multiparty proposals.
In practice it should form the start of all projects, as the output forms the foundation of your entire business case. The degree to which you engage is determined by the size, complexity and value of the project, but the format and principles will always be useful for defining your problems and how the benefits will address them.
As the project progresses it will increase focus and clarity, helping to define an agreed scope and result, which will save debate and discussion later in the project. It will become a powerful tool that gives you leverage in justifying your expenditure of both funds and effort.
How does it work?
Using a facilitator, key stakeholders will work through a couple of workshops to discover:
- your problems and their consequences, then
- the outcomes and benefits.
These workshops will build alignment on the purpose of the investment. They may not necessarily lead to agreement, but they will be a start.
What can you expect from an ILM workshop?
You should expect a single-page flowchart written in plain English. It will define the problems to be addressed, the potential benefits of your investment, and how you will confirm those benefits. It becomes the underpinning logic of your project investment.
ILM workshops are a series of time-limited engagements of up to two hours each, bringing together the stakeholders accountable for benefits realisation. They should be low-cost, low-effort sessions that produce new insight by bringing together all the available information, enabling a better understanding and leading to better results.
Problem owners need to prepare by checking their evidence, identifying the right stakeholders for the workshops, and offering their opinion and expertise. The right stakeholders are those who have identified and understood the business problems, can provide the evidence that the problems are real, and are responsible for delivering the benefits. Other stakeholders are the people responsible for giving advice on the investment to the project. Choosing them well increases the value of the workshops and avoids the risk of having to start again; most will already have been engaged from the start.
The problem owner needs to drive the effort. Talking with the right stakeholders, and securing their willingness to contribute, will lead to the right pitch when presenting your case.

Researchers from the Brookhaven National Laboratory of the US Department of Energy have developed a novel machine learning (ML) framework that can pinpoint which phases of a multistep chemical reaction can be adjusted to boost productivity.
The method could aid in the development of catalysts, which are chemical “dealmakers” that speed up reactions. It was created to investigate the conversion of carbon monoxide (CO) to methanol using a copper-based catalyst.
The reaction is made up of seven relatively simple elementary steps and was used as an example by the researchers in their ML framework method. The goal was to determine which elementary step or subset of steps in the reaction network controls the catalytic activity.
Traditionally, researchers attempting to improve such a reaction would calculate how changing each activation barrier one at a time might affect the overall production rate. This type of analysis could determine which steps were “rate-limiting” and which steps determined reaction selectivity—that is, whether the reactants proceeded to the desired product or via an alternate pathway to an undesirable by-product.
The new machine learning framework is intended to improve these estimates so that scientists can more accurately predict how catalysts will affect reaction mechanisms and chemical output. The scientists began by collecting data to train their machine learning model. The data set was created using “density functional theory” (DFT) calculations of the activation energy required to transform one atom arrangement to the next over the course of the reaction’s seven steps.
The scientists then used computer simulations to see what would happen if they changed all seven activation barriers at the same time – some going up, some going down, some individually, and some in pairs.
They generated a comprehensive dataset of 500 data points by simulating variations in 28 “descriptors,” which included the activation energies for the seven steps as well as pairs of steps changing two at a time. This dataset forecasted how individual and pairwise tweaks would affect methanol production. The model then ranked the 28 descriptors in terms of their significance in driving methanol output.
The scientists retrained the ML model using only the six “active” descriptors after identifying the important descriptors. Based solely on DFT calculations for those six parameters, this improved ML model was able to predict catalytic activity.
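The rank-then-retrain idea can be illustrated with a toy sketch: fit a model on many descriptors, rank them by importance, and refit using only the top few. Everything below is synthetic – the data and the plain least-squares model stand in for the DFT-derived dataset and the actual ML model the team used.

```python
# A toy, numpy-only sketch of the rank-then-retrain workflow: fit on many
# descriptors, rank them by importance, refit on the top few. Synthetic data
# and a linear model stand in for the team's DFT dataset and ML model.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_descriptors = 500, 28  # mirrors the 500-point, 28-descriptor dataset

X = rng.normal(size=(n_samples, n_descriptors))
# Synthetic "production rate": only descriptors 0-5 actually matter here.
true_weights = np.zeros(n_descriptors)
true_weights[:6] = [3.0, -2.5, 2.0, -1.5, 1.0, 0.8]
y = X @ true_weights + rng.normal(scale=0.1, size=n_samples)

# Fit on all 28 descriptors and rank them by absolute coefficient.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
ranked = np.argsort(-np.abs(coef))
active = sorted(int(i) for i in ranked[:6])
print("active descriptors:", active)  # recovers descriptors 0-5

# Retrain a smaller model using only the active descriptors.
coef_small, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
mse = float(np.mean((X[:, active] @ coef_small - y) ** 2))
print("refit mean squared error:", round(mse, 4))
```

The sketch shows why the reduced model is attractive: once the six active descriptors are known, predicting activity only requires computing those six quantities rather than all 28.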
The model, according to the team, can also be used to screen catalysts: it predicts the maximum methanol production rate achievable if a catalyst can be designed that improves the values of the six active descriptors.
When the researchers compared their model predictions to the experimental performance of their catalyst—as well as the performance of alloys of various metals with copper—the predictions matched the experimental findings. Comparisons of the ML approach to the previous method for predicting alloy performance revealed that the ML method was far superior.
The data also revealed a lot about how changes in energy barriers might affect the reaction mechanism. The interaction of the various steps of the reaction was of particular interest and importance. For example, the data showed that lowering the energy barrier in the rate-limiting step alone would not improve methanol production in some cases. However, methanol output could be increased by adjusting the energy barrier of a step earlier in the reaction network while keeping the activation energy of the rate-limiting step within an ideal range.
Earlier this week, two memorandums of understanding (MoUs) were renewed at the 7th India-Canada Joint Science and Technology Cooperation Committee (JSTCC) meeting. The MoUs were signed by the Indian Ministry of Science and Technology with the Natural Sciences and Engineering Research Council of Canada (NSERC) and National Research Council Canada (NRC), respectively, under the 2005 Agreement for Scientific and Technological Cooperation.
According to a press release, the focus areas of the collaboration include national missions, quantum computing, artificial intelligence (AI), and cyber-physical systems, among others. An official at the event pointed out that a large number of Indian students are studying in Canadian universities and the renewal of the MoUs would help intensify the exchange of ideas and expertise between the two countries. Representatives from several ministries and research institutions from both countries attended the meeting.
India and Canada benefit from strong bilateral relations and are committed to deepening ties, with science, technology, and innovation being key pillars of the relationship, the release noted. Under the terms of the agreement made in 2005, the JSTCC meets every two years to review ongoing collaborations between Canadian and Indian researchers and set priorities for the next period in fields like agriculture and food security, healthcare and healthtech, clean technologies and environmental research, marine and polar research, quantum tech and AI, and human capacity development and researcher mobility. Both countries agreed to continue monitoring progress on key priorities in bilateral science, technology, and innovation projects during the 2022-2024 period.
India plays an active role in the global technology research and development ecosystem by facilitating academic and scientific relationships with other countries. In March, India and Finland worked out a detailed plan to establish an Indo-Finnish Virtual Network Centre on Quantum Computing, for which India has already identified the three institutes that will work with Finnish counterpart institutions. Last month, India and Israel held a two-day workshop that explored photonics-based quantum computing, sensing, encryption, quantum magnetometry, atomic clocks, and free-space quantum communication.
At the beginning of May, the Indo-German Science and Technology Centre (IGSTC) proposed setting up a joint AI initiative for start-ups, research, and applications in healthcare and sustainability. The two sides are already collaborating on electric mobility, cyber-physical system, quantum technologies, future manufacturing, green hydrogen fuel, and deep ocean research.
Most recently, India and Japan held a working group meeting to discuss enhancing cooperation in 5G, Open Radio Access Networks (O-RAN), telecom network security, submarine cable systems, massive MIMO (multiple-input, multiple-output), connected cars, quantum communications, and 6G innovation.
OpenGov Asia reported that the countries recognised the need to nurture cooperation to grow the digital economy through joint digital transformation projects in areas like the Internet of Things (IoT) and AI. They also discussed providing opportunities for Indian IT professionals to work with Japanese firms. The year 2022 marks the 70th anniversary of India-Japan diplomatic relations. As a key driver of development, ICT provides opportunities for the two countries to jointly build a robust digital foundation for the present and the future. The 7th JWG agreed to enhance cooperation in ICT areas under a memorandum of cooperation (MoC).
Researchers from the California Institute of Technology (Caltech) have developed a deep-learning technique, known as Neural-Fly, that could assist flying robots known as “drones” in adapting to changing weather conditions.
Drones are now flown under controlled conditions, without wind, or by people using software or remote controls. The flying robots have been trained to take off in formation in the open air, although these flights are typically undertaken under perfect conditions.
However, for drones to autonomously perform important but mundane duties, such as package delivery or airlifting injured drivers from traffic accidents, they must be able to adapt to real-time wind conditions.
With this, a team of Caltech engineers has created Neural-Fly, a deep-learning technology that enables drones to adapt to new and unexpected wind conditions in real-time by merely adjusting a few essential parameters. Neural-Fly is discussed in newly published research titled “Neural-Fly Enables Rapid Learning for Agile Flight in Strong Winds” in Science Robotics.
The issue is that the direct and specific effect of various wind conditions on aircraft dynamics, performance, and stability cannot be accurately characterised as a simple mathematical model.
– Soon-Jo Chung, Bren Professor of Aerospace and Control and Dynamical Systems and Jet Propulsion Laboratory Research Scientist
Chung added that they employ a combined approach of deep learning and adaptive control that enables the aircraft to learn from past experiences and adapt to new conditions on the fly, with stability and robustness guarantees, as opposed to attempting to qualify and quantify each effect of the turbulent and unpredictable wind conditions they frequently encounter when flying.
Neural-Fly was evaluated at Caltech’s Center for Autonomous Systems and Technologies (CAST) utilising its Real Weather Wind Tunnel, a 10-foot-by-10-foot array of more than 1,200 tiny computer-controlled fans that enables engineers to mimic everything from a mild breeze to a gale.
Numerous models derived from fluid mechanics are available to researchers but getting the appropriate model quality and tweaking that model for each vehicle, wind condition, and operating mode is difficult.
Existing machine learning methods, on the other hand, demand massive amounts of data for training, but cannot match the flying performance attained by classical physics-based methods. Adapting a complete deep neural network in real-time is a monumental, if not impossible, undertaking.
According to the researchers, Neural-Fly addresses these challenges by utilising a technique known as separation, which requires only a few parameters of the neural network to be altered in real-time. This is accomplished using their innovative meta-learning technique, which pre-trains the neural network so that only these critical parameters need to be changed in order to successfully capture the changing environment.
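The “few adapted parameters” idea can be sketched in miniature: keep a fixed feature basis (standing in for the pre-trained network) and adapt only a small set of linear output weights online, here with a textbook recursive least squares (RLS) update. The basis, dynamics and numbers below are all illustrative assumptions, not the paper’s actual architecture.

```python
# A miniature, numpy-only sketch of the "separation" idea: a fixed feature
# basis (stand-in for the pre-trained network) with only a few linear output
# parameters adapted online via a textbook recursive-least-squares update.
import numpy as np

def basis(wind_speed):
    """Fixed 'pre-trained' features of the wind condition (illustrative)."""
    return np.array([1.0, wind_speed, wind_speed ** 2])

rng = np.random.default_rng(1)
true_params = np.array([0.5, -1.2, 0.3])  # unknown aerodynamic effect

P = np.eye(3) * 100.0  # parameter covariance (large = uncertain prior)
theta = np.zeros(3)    # the only parameters updated in "flight"
for _ in range(200):
    w = rng.uniform(0.0, 5.0)                        # sampled wind speed
    phi = basis(w)
    y = phi @ true_params + rng.normal(scale=0.05)   # noisy measured residual
    # Standard RLS update: correct theta by the prediction error.
    k = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + k * (y - phi @ theta)
    P = P - np.outer(k, phi @ P)

print(np.round(theta, 2))  # converges towards true_params
```

The design point this illustrates is speed: because only three numbers change per measurement, the update is cheap enough to run inside a flight-control loop, whereas retraining a full network in real time would not be.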
After only 12 minutes of flying data, autonomous quadrotor drones outfitted with Neural-Fly learn how to respond to severe winds so well that their performance improves dramatically as judged by their ability to precisely follow a flight route.
When compared to drones equipped with current state-of-the-art adaptive control algorithms that identify and respond to aerodynamic effects but lack deep neural networks, the error rate following that flight path is between 2.5 to 4 times lower.
Landing may appear more difficult than flight; however, unlike previous systems, Neural-Fly learns in real-time. As a result, it can react on the fly to wind variations and does not require post-processing.
In-flight tests were done outside of the CAST facility; Neural-Fly functioned just as well as it did in the wind tunnel. Additionally, the researchers showed that flight data collected by one drone can be transferred to another, establishing a knowledge pool for autonomous cars.
The drones were outfitted with a typical, off-the-shelf flight control computer utilised by the drone research and enthusiast communities. Neural-Fly was built into an onboard Raspberry Pi 4 computer, which is the size of a credit card and costs roughly $20.
The Council for Indian School Certificate Examinations (CISCE) and the Indian Institute of Technology of Delhi (IIT- Delhi) announced they will jointly design a curriculum for schools that include robotics, artificial intelligence, machine learning, and data science. The curriculum is for grades 9 to 12 in schools affiliated with the CISCE board.
IIT-Delhi’s technology innovation hub, I-Hub Foundation for Cobotics (IHFC), and CISCE signed a memorandum to carry out the project. According to a report by the government’s AI portal, IHFC would help CISCE cut the syllabus to “reinforce 21st-century skills” and achieve targets set out in the National Education Policy (NEP) 2020. Moreover, officials stated that they plan to upgrade the current STEM courses in line with NEP 2020.
A representative from IHFC stressed the need to strengthen the country’s capacity to master emerging technologies. As IHFC develops the curriculum, it will reflect the principles of experiential learning and aspects of theory. IHFC could play an important role in carrying out the project in about 2,700 schools affiliated with CISCE by providing guidance and technical expertise. Prime aspects of the project’s vision, according to the project director of IHFC, are nurturing teamwork, innovation, and knowledge to bridge the gaps between young engineering students and potential future robot enthusiasts.
To bolster the rate of digital literacy in the country, state governments have been urged to offer courses and initiatives in AI and other emerging technologies. Earlier this year, the Indian Institute of Technology of Madras (IIT-Madras)’s Robert Bosch Centre for Data Science and Artificial Intelligence (RBC-DSAI) invited students for the national-level ‘Summer Internships 2022’ programme. The goal was to help students gain hands-on experience working on cutting-edge discoveries, with some of the country’s leading experts in data science and AI.
In March, IIT-Madras announced an 18-month web-enabled, user-oriented Master of Technology (MTech) programme in Industrial AI. The course was released in collaboration with a private player to upskill working professionals and encourage the use of AI to address industrial problems. The programme used labs and included theoretical courses in fundamental mathematical techniques required to understand data science algorithms, time series analyses, multivariate data analyses, machine learning, deep learning, and reinforcement learning. Applied courses described the implementation of AI solutions for industrial problems in a case study format. Put together, these courses provided strong theoretical foundations and useful application perspectives.
Furthermore, in April, the Madhya Pradesh Chief Minister announced a 240-hour course on AI for students from Grade 8, scheduled to begin in July. The decision on the course's commencement was taken during the state cabinet's two-day brainstorming session held in Pachmarhi. The course will initially be unveiled in 53 schools, with more expected to be added to the list later on, as OpenGov Asia reported. The government also said it would provide 40 computers to each of the selected schools.
Apart from education, the government is using AI in several fields, including managing traffic flows, improving digital exchange systems, and speeding up criminal investigations. In industry, AI is being deployed for operations such as inventory tracking and management, data sharing and perception, enhanced customer experience, improved hiring processes, data mining, and optimisation. The AI market in India is expected to grow at a five-year compound annual growth rate (CAGR) of 20.2% and reach US$7.8 billion in total revenues by 2025.
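As a quick sanity check of the figures above, a 20.2% five-year CAGR ending at US$7.8 billion in 2025 implies a base-year market size of roughly US$3.1 billion. A minimal sketch of the arithmetic (the 2020 base value is derived, not stated in the article):

```python
# Back-of-envelope check of the reported market figures.
revenue_2025 = 7.8e9   # US$7.8 billion in 2025 (from the article)
cagr = 0.202           # 20.2% five-year CAGR (from the article)

# Invert the compound-growth formula: base = final / (1 + r)**n
implied_2020_base = revenue_2025 / (1 + cagr) ** 5
print(round(implied_2020_base / 1e9, 2))  # prints 3.11 (US$ billions)
```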
Researchers from the University of Missouri used artificial intelligence (AI) to analyse the publicly available data from about 16,000 participants enrolled in the T1D Exchange Registry database to learn more about people with Type 1 diabetes.
The team acquired the information using health informatics in order to gain a better understanding of the disease; the work was sponsored in part by a grant from the National Science Foundation in the United States.
According to Chi-Ren Shyu, one of the researchers, they delegated the task of connecting millions of dots in the data to the computer so that it could identify major contrasting patterns between people who had a family history of Type 1 diabetes and those who did not. They also had the computer perform the statistical testing to verify the accuracy of their findings.
“We let the computer do the statistical testing to make sure we are confident in our results,” said Shyu.
The investigation yielded some surprising results. For example, the researchers discovered that people who had an immediate family member with Type 1 diabetes were more likely to be diagnosed with hypertension, in addition to diabetes-related nerve disease, eye disease, and kidney disease.
This information was gleaned from the registry of people with diabetes. Erin Tallon, the study's principal author, said the findings demonstrate the utility of real-world data and artificial intelligence.
Using AI, the researchers are tackling the fact that people with Type 1 diabetes manifest symptoms in different ways; the disease does not present identically in everyone who has it. By analysing data gathered from the real world, they can gain a better understanding of the risk factors that may put a person at higher risk of poor health outcomes.
The researchers hope their discoveries can help address a more widespread issue through more extensive, population-based data sets. They also plan to construct larger patient cohorts, conduct additional data analysis, and use these AI algorithms to assist them.
An algorithm for mining contrast patterns can identify statistically significant deviations in the distribution of attribute frequencies between two patient categories. The validated approach was utilised by the researchers to find individual and co-occurring traits that were observed considerably more frequently in familial Type 1 diabetes compared to sporadic cases of Type 1 diabetes.
The researchers refer to these traits as "patterns" or "feature patterns." While the method returned feature patterns with one, two, or three elements, such patterns can in principle contain any number of elements. The terms "individual elements" and "individual characteristics" are used interchangeably.
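The contrast-pattern idea described above can be sketched in a few lines: enumerate feature combinations of up to three elements and keep those whose frequency differs significantly between the two cohorts. This is a minimal illustration, not the study's published algorithm; the statistical test here is a stdlib-only two-proportion z-test, and all function and feature names are hypothetical:

```python
import math
from itertools import combinations

def two_proportion_pvalue(k1, n1, k2, n2):
    """Two-sided p-value for a difference in proportions
    (normal approximation to the two-proportion z-test)."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0  # no variation at all: nothing to contrast
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))

def contrast_patterns(cohort_a, cohort_b, features, max_len=3, alpha=0.05):
    """Return feature patterns (1..max_len elements) whose frequency
    differs significantly between two cohorts of boolean records."""
    results = []
    for r in range(1, max_len + 1):
        for pattern in combinations(features, r):
            # Count records in each cohort exhibiting ALL features in the pattern
            k_a = sum(all(rec[f] for f in pattern) for rec in cohort_a)
            k_b = sum(all(rec[f] for f in pattern) for rec in cohort_b)
            p_value = two_proportion_pvalue(k_a, len(cohort_a), k_b, len(cohort_b))
            if p_value < alpha:
                results.append((pattern, k_a / len(cohort_a), k_b / len(cohort_b), p_value))
    return results
```

A real analysis at registry scale would also need multiple-comparison corrections and far more careful statistics; the sketch only conveys the frequency-contrast structure of the approach.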
This study is the largest one done to date to compare the health outcomes of patients with familial and sporadic types of Type 1 diabetes over a longer period of time. It was conducted using data from the T1D Exchange Clinic Registry with thousands of participants.
The current findings need to be validated in a larger population-based cohort through additional research. The approach could potentially be adapted to help create individualised treatment options for diabetic patients.
The Chief Minister of the state of Tamil Nadu recently launched an artificial intelligence (AI)-enabled panic button and CCTV surveillance project to make travel safer. The initiative will be implemented in phases, aiming to cover 2,500 buses in Chennai city. Under the first phase, 500 buses in the metro city have each been fitted with four panic buttons, an AI-enabled mobile network video recorder (MNVR), and three cameras.
According to a report by the government’s AI portal, the MNVR will be connected to a cloud-based control centre via a 4G GSM SIM card. In case of inconvenience, discomfort, or threat, passengers will be able to press the panic button and record the incident. At the same time, an alarm will go off at the control centre along with a video recording of the incident on the bus. The operator at the control centre will be able to monitor the situation and facilitate, in real-time, the next course of action, the report said.
The control centre has been linked to the distress response centre of the city police and Greater Chennai Corporation. The state government has noted that 31 bus depots and 35 bus terminuses of the Metropolitan Transport Corporation (MTC) have been brought under surveillance. The project will also help detect missing persons, identify criminals, and support other work of the GCC, the transport department, and the police.
Other states across the country have also deployed technology-enabled solutions to better monitor traffic. In April, the Ministry of Electronics and Information Technology developed and launched an indigenous onboard driver assistance and warning system (ODAWS), a bus signal priority system, and Common Smart IoT Connectiv (CoSMiC) software to improve road safety.
ODAWS uses sensors to monitor driver propensity and vehicle surroundings and send out acoustic and visual alerts. The ODAWS algorithm is used to interpret sensor data and offer real-time notifications to the driver. The bus signal priority system is an operational strategy that modifies normal traffic signal operations to better accommodate in-service public buses at signal-controlled intersections.
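One common form of the signal-priority strategy described above is green extension: if an in-service bus will arrive just after the green phase ends, the controller holds the green a few extra seconds so the bus can clear. The sketch below is a hypothetical illustration of that rule only; the function name, parameters, and logic are assumptions, not details of the deployed system:

```python
def adjusted_green(remaining_green_s, bus_eta_s, max_extension_s=10):
    """Extend the current green phase just enough for an approaching
    in-service bus to clear the intersection, capped at max_extension_s."""
    if remaining_green_s < bus_eta_s <= remaining_green_s + max_extension_s:
        return bus_eta_s        # hold the green until the bus arrives
    return remaining_green_s    # bus clears anyway, or is too far out

print(adjusted_green(5, 12))  # prints 12: green held 7 extra seconds
print(adjusted_green(5, 30))  # prints 5: bus too far away, normal cycle
```

Real deployments also weigh pedestrian phases, cross-traffic queues, and schedule adherence before granting priority.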
CoSMiC is middleware software that provides the standard-based deployment of the Internet of things (IoT), which follows the oneM2M-based global standard. It facilitates users and application service providers in vertical domains to use application-agnostic open standards and interfaces for end-to-end communication with well-defined common service functionalities. The CoSMiC common service layer is used to interface with any vendor-specific standards and to increase interoperability with smart city dashboards.
More recently, the Kerala state government announced it would deploy AI-based cameras on traffic-heavy roads in a bid to reduce accidents by half within the next two years. As OpenGov Asia reported, the government expects traffic rules to be more effectively enforced after the software is put in place, as it automates detecting road violations and issuing fines. Once the system captures the breach of the road rules, the footage will be sent to the central government’s server. The vehicle owner will receive an SMS regarding the fine, and, at the same time, the information will directly be sent to court. This will reduce corruption as it limits local authorities from waiving the penalty.
The United States’ National Science Foundation backs researchers from the Massachusetts Institute of Technology (MIT) who used AI and machine learning to make a robotic mini-cheetah run rapidly. The robot cheetah broke the record for the quickest run by adapting to terrain variations through simulation.
The scientists trained the robot cheetah using a “learn by experience” technique. Humans have created robots that can walk, lift, and jump, but quick and efficient running had remained out of reach until now. Running requires robots to respond rapidly to changes in the environment and terrain.
The team taught the robot cheetah how to adapt to changes in its environment while in motion using the learn by experience paradigm, artificial intelligence, and machine learning. Using simulated scenarios, the robot can quickly experience and learn from varied terrains.
According to the researchers, manually training robots to adapt is a time-consuming, labour-intensive, and tiresome process. The experts believe that teaching robots to teach themselves could solve the scalability problem and allow robots to develop a broader set of abilities and tasks. They have now begun to apply their method to a broader range of robotic systems.
Researchers from MIT’s Improbable AI Lab, part of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and directed by MIT Assistant Professor Pulkit Agrawal, collaborated on the work with the Institute of AI and Fundamental Interactions (IAIFI). MIT PhD student Gabriel Margolis and IAIFI postdoc Ge Yang demonstrated the cheetah’s speed.
Fast running necessitates pushing the hardware to its limitations, such as operating near the maximum torque output of motors. In such cases, the robot dynamics are difficult to represent analytically. The robot must react swiftly to changes in the environment, such as when it comes into contact with ice while running on grass.
When a robot walks, it moves slowly, and the presence of snow is usually not an issue; consider how you could negotiate practically any terrain by walking slowly but carefully. Today’s robots confront a similar dilemma: traversing every terrain as if it were walking on ice is wasteful, yet that is how most robots operate. Humans, by contrast, adapt by running swiftly on grass and slowing down on ice.
Giving robots similar adaptability necessitates rapid detection of terrain changes and rapid adaptation to prevent the robot from falling over. In general, high-speed running is more difficult than walking because it is hard to create analytical (human-designed) models of all potential terrains in advance, and the robot’s dynamics become more complex at high speeds.
Programming a robot’s actions is tough, according to researchers. Human engineers must manually alter the robot’s controller if it fails on a particular terrain. However, humans no longer need to programme robots’ every move if the robot can explore many terrains and improve with practice.
Researchers added that modern simulation tools allow the robot to gain 100 days of experience in just three hours. They also developed a method by which a robot’s behaviour improves from simulated experience and can be successfully deployed in the real world. The robot’s running skills work in the real world because some of the simulator’s environments teach it real-world skills, and the controller identifies and executes the essential ones in real time.
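The reported figures imply a striking compression of experience: 100 days of simulated practice gathered in three hours of wall-clock time works out to roughly an 800x speedup. The arithmetic:

```python
# Figures from the article: ~100 days of simulated experience
# collected in about three hours of wall-clock time.
sim_experience_hours = 100 * 24  # 100 days expressed in hours
wall_clock_hours = 3

speedup = sim_experience_hours / wall_clock_hours
print(speedup)  # prints 800.0, i.e. roughly an 800x compression
```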
Artificial intelligence research balances what humans must develop with what machines can learn on their own. Humans tell robots what to do and how to accomplish it. Such a system isn’t scalable because it would take significant human engineering effort to manually design a robot to operate in numerous contexts.