
WEF paper proposes principles to prevent discriminatory outcomes in machine learning

The World Economic Forum (WEF)’s Global Future Council on Human Rights recently issued a white paper to provide a framework for developers to prevent discrimination in the development and application of machine learning (ML). The paper is based on research and interviews with industry experts, academics, human rights professionals and others working at the intersection of machine learning and human rights.

The paper proposes a framework based on four guiding
principles – active inclusion, fairness, right to understanding, and access to
redress – for developers and businesses looking to use machine learning.

Artificial intelligence systems based on machine learning are already being used to make decisions that have significant, life-altering impact on people, such as hiring job applicants, granting loans and releasing prisoners on parole. Machine learning systems can help to eliminate human bias in decision-making, but they can also end up reinforcing and perpetuating systemic bias and discrimination.

Concerns around opacity, data, algorithm design

Earlier algorithmic decision-making systems relied on rules-based, “if/then” reasoning. ML systems, by contrast, build more complex models in which it is difficult to understand why and how a given decision was made.
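
As a rough illustration of this contrast, here is a minimal Python sketch; the loan rule, thresholds and training data are all hypothetical:

    # Rules-based system: the decision logic is explicit and auditable.
    def rule_based_loan_decision(income, debt):
        # Every branch can be read, questioned and traced by a reviewer.
        if income > 50_000 and debt < 10_000:
            return "approve"
        return "reject"

    # ML system: the logic lives in learned parameters, not readable rules.
    from sklearn.ensemble import RandomForestClassifier

    X_train = [[60_000, 5_000], [20_000, 15_000], [80_000, 2_000], [30_000, 12_000]]
    y_train = ["approve", "reject", "approve", "reject"]

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    # The prediction works, but the "why" is spread across many trees and
    # cannot be audited branch by branch the way the if/then rule can.
    print(model.predict([[45_000, 9_000]]))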

ML systems are opaque, due both to their complexity and to the proprietary nature of their algorithms. Moreover, ML systems today are almost entirely developed by small, homogeneous teams, most often of men. The massive data sets required to train these systems are often proprietary and require large-scale resources to collect or purchase. This effectively excludes many companies, public bodies and civil society organisations from the machine learning market. Though open data is increasingly available, companies that own massive proprietary datasets continue to enjoy a clear advantage.

Training data may exclude classes of individuals who do not generate much data, such as those living in rural areas of low-income countries, or those who have opted out of sharing their data. The paper presents an example of how this might lead to discrimination: if an application’s training data suggests that people who have influential social networks, or who are active on them, make “good” employees, it might filter out people from lower-income backgrounds, those who attended less prestigious schools, or those who are more cautious about posting on social media.

Similarly, loan applicants from rural backgrounds, with less
digital infrastructure, could be unfairly excluded by algorithms trained on
data points captured from more urban populations.

Data may also be biased or error-ridden. For instance, training on historical data might lead an ML system to judge women to be worse hires than men, simply because women have historically been promoted less often, when the actual reason is that workplaces have historically been biased against promoting them.
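
A toy Python sketch, using fabricated data purely for illustration, shows how such label bias propagates: a model trained on biased promotion records reproduces the bias for otherwise identical candidates:

    from sklearn.linear_model import LogisticRegression

    # Fabricated records: [years_experience, is_woman] -> promoted in the past.
    # Candidates are equally experienced; only the historical outcomes differ.
    X = [[5, 0], [6, 0], [4, 0], [5, 1], [6, 1], [4, 1]]
    y = [1, 1, 1, 0, 0, 0]

    model = LogisticRegression().fit(X, y)

    # Two candidates identical in every respect except the protected attribute:
    print(model.predict([[5, 0], [5, 1]]))  # the model replays the historical bias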

Even if the data is good, the paper identifies five ways in which
design or deployment of ML algorithms could encode discrimination: choosing the
wrong model (or the wrong data); building a model with inadvertently
discriminatory features; absence of human oversight and involvement;
unpredictable and inscrutable systems; or unchecked and intentional
discrimination.

The authors cite examples of systems that disproportionately
identify people of colour as being at “higher risk” for committing a crime or
re-offending or which systematically exclude people with mental disabilities
from being hired. The risks are higher in low- and middle-income countries,
where existing inequalities run deeper, availability of training data is
limited, and government regulation and oversight are weaker.

Four proposed principles for businesses

The paper notes that governments and international organisations
have a role to play but regulation tends to lag technological development. However,
even in the absence of regulation, the paper says that businesses need to
integrate principles of non-discrimination and empathy into their human rights
due diligence.

As part of ‘Active Inclusion’, the paper recommends that the development and design of ML applications actively seek a diversity of input, especially regarding the norms and values of the specific populations affected by the output of AI systems.

The second principle proposed is ‘Fairness’. People involved
in conceptualising, developing, and implementing ML systems should consider
which definition of fairness best applies to their context and application, and
prioritize it in the architecture of the machine learning system and its
evaluation metrics.  
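
To illustrate why that choice matters, the short Python sketch below (with hypothetical predictions and outcomes for two demographic groups) computes two common, often mutually incompatible definitions: demographic parity and equal opportunity:

    def selection_rate(preds):
        # Demographic parity compares how often each group is selected.
        return sum(preds) / len(preds)

    def true_positive_rate(preds, labels):
        # Equal opportunity compares selection rates among qualified candidates.
        qualified = [p for p, l in zip(preds, labels) if l == 1]
        return sum(qualified) / len(qualified)

    # Hypothetical hiring-model outputs (1 = selected / qualified).
    preds_a, labels_a = [1, 1, 0, 1, 0], [1, 1, 0, 0, 0]
    preds_b, labels_b = [1, 0, 0, 0, 0], [1, 1, 0, 0, 0]

    print(selection_rate(preds_a), selection_rate(preds_b))  # 0.6 vs 0.2
    print(true_positive_rate(preds_a, labels_a),
          true_positive_rate(preds_b, labels_b))             # 1.0 vs 0.5

A system can satisfy one of these definitions while violating the other, which is why the paper asks teams to choose and prioritise a definition explicitly.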

To ensure the ‘Right to Understanding’, the involvement of ML systems in decision-making that affects individual rights must be disclosed, and the systems must be able to provide an explanation of their decision-making that is understandable to end users and reviewable by a competent human authority. Where that is impossible and human rights are at stake, the paper states that leaders in the design, deployment and regulation of ML technology must question whether it should be used at all.
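
As a crude sketch of what such an explanation could look like in practice, the hypothetical example below reports each feature’s additive contribution to a linear model’s decision; the model, features and data are all made up for illustration:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Fabricated loan data: [years_employed, debt_ratio] -> approved.
    X = np.array([[5, 0.2], [2, 0.8], [7, 0.1], [1, 0.9]])
    y = np.array([1, 0, 1, 0])
    feature_names = ["years_employed", "debt_ratio"]

    model = LogisticRegression().fit(X, y)

    # For a linear model, each feature's contribution to the log-odds of
    # approval is its weight times its value -- a form a human can review.
    applicant = np.array([3, 0.5])
    for name, c in zip(feature_names, model.coef_[0] * applicant):
        print(f"{name}: {c:+.2f} (contribution to log-odds of approval)")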

Finally, under ‘Access to Redress’, the paper proposes that leaders, designers and developers of ML systems must provide visible avenues for redress for those affected by disparate impacts, and establish processes for the timely remedy of any discriminatory outputs.

To help companies adopt these principles, the paper recommends that they identify human rights risks linked to their business operations. Common standards for assessing the adequacy of training data and its potential bias could be established and adopted through a multistakeholder approach.
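
One simple element of such a standard could be a representation audit of the training data. The Python sketch below is purely illustrative; the dataset and the 30% threshold are invented for the example:

    from collections import Counter

    # Fabricated training records with a single demographic attribute.
    records = [
        {"region": "urban"}, {"region": "urban"}, {"region": "urban"},
        {"region": "urban"}, {"region": "rural"},
    ]

    counts = Counter(r["region"] for r in records)
    total = len(records)
    for group, n in counts.items():
        share = n / total
        flag = "  <-- under-represented" if share < 0.30 else ""
        print(f"{group}: {n}/{total} ({share:.0%}){flag}")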

The paper also proposes that companies work on concrete ways to enhance corporate governance, establishing new mechanisms and models for ethical compliance or augmenting existing ones. Additionally, companies should monitor their machine learning applications and report findings, working with certified third-party auditing bodies. The results of audits should be made public, together with the company’s responses. The authors say that large multinational companies should set an example by taking the lead in this.

The authors express hope that this report will advance
internal corporate discussions of these topics as well as contribute to the
larger public debate.

“We encourage companies working with machine learning to
prioritize non-discrimination along with accuracy and efficiency to comply with
human rights standards and uphold the social contract,” said Erica Kochi,
Co-Chair of the Global Future Council for Human Rights and Co-Founder of UNICEF
Innovation.

Nicholas Davis, Head of Society and Innovation, Member of
the Executive Committee, World Economic Forum, said, “One of the most important
challenges we face today is ensuring we design positive values into systems
that use machine learning. This means deeply understanding how and where we
bias systems and creating innovative ways to protect people from being
discriminated against.”  

Read the paper here.
