
U.S. DOE creates AI Advancement Council

The Department of Energy (DOE) aims to be a leader in the future of AI, and to get the most out of the technology it needs to form partnerships with businesses, universities, and other government agencies. U.S. Deputy Energy Secretary David Turk recently approved a charter establishing the department's first Artificial Intelligence Advancement Council (AIAC) – a group that will oversee AI governance, innovation, and ethics.

The AIAC's job is to coordinate AI activities and set AI priorities for the DOE in order to improve national and economic security and competitiveness. The Artificial Intelligence and Technology Office (AITO), which oversees the broader DOE AI strategy, will look to the AIAC for advice on AI strategies and plans for putting them into action.

The AIAC will be made up of the Under Secretary for Science and Innovation (Co-chair), the Under Secretary for Nuclear Security and Administrator of the National Nuclear Security Administration (Co-chair), the Director of the Artificial Intelligence and Technology Office (AIAC Executive Secretariat), the General Counsel of the Office of the General Counsel, and the Director of the Office of Intelligence and Counterintelligence.

Trustworthy AI systems initiatives

The United States has long stood up for and defended core values, including the development and use of trustworthy AI systems in the public and private sectors. To be trustworthy, AI technology must embody qualities such as explainability and interpretability, privacy, reliability, robustness, safety, security and resilience to attacks, and the mitigation of harmful bias. Fairness and transparency should be considered during implementation and use.

AI also has broader effects on society, such as changes in the workplace, that must be considered. For AI to have a positive social impact consistent with core American values, it must be built and used in an ethical way that reduces bias, promotes fairness, and protects privacy.

Making AI more trustworthy requires a multifaceted approach. This includes R&D investments to solve key technical problems; the development of metrics, standards, and assessment tools to measure and evaluate AI's trustworthiness; participation in the development of AI technical standards; governance approaches for the use of AI in the public and private sectors; and preparation of a diverse and inclusive workforce for future jobs. It also requires international partnerships and collaborations.

Furthermore, AI innovation can draw on a limited range of ideas and perspectives, which can produce systems that perpetuate biases and other systemic inequities. Concerns like these informed the National AI Initiative Act of 2020, which led to the creation of the National AI Research Resource (NAIRR) Task Force in June 2021 to address a growing divide in resources that could harm the AI research ecosystem.

The NAIRR Task Force held its fifth virtual, public meeting in the first quarter of 2022. Its goal is to create a vision and implementation plan for a national cyberinfrastructure that would connect American researchers of all backgrounds and regions to the computational, data, and testing resources needed for AI research.

The NAIRR could help build AI capabilities across the country and support its responsible development. This would lead to new discoveries and innovations in fields like healthcare, city planning, education, and more, and would keep the U.S. competitive in the global arena.
