(a) Mission

The Institute shall—

(1) advance collaborative frameworks, standards, guidelines, and associated methods and techniques for artificial intelligence;

(2) support the development of a risk-mitigation framework for deploying artificial intelligence systems;

(3) support the development of technical standards and guidelines that promote trustworthy artificial intelligence systems; and

(4) support the development of technical standards and guidelines by which to test for bias in artificial intelligence training data and applications.

(b) Supporting activities

The Director of the National Institute of Standards and Technology may—

(1) support measurement research and development of best practices and voluntary standards for trustworthy artificial intelligence systems, which may include—

(A) privacy and security, including for datasets used to train or test artificial intelligence systems and software and hardware used in artificial intelligence systems;

(B) advanced computer chips and hardware designed for artificial intelligence systems;

(C) data management and techniques to increase the usability of data, including strategies to systematically clean, label, and standardize data into forms useful for training artificial intelligence systems and the use of common, open licenses;

(D) safety and robustness of artificial intelligence systems, including assurance, verification, validation, security, control, and the ability for artificial intelligence systems to withstand unexpected inputs and adversarial attacks;

(E) auditing mechanisms and benchmarks for accuracy, transparency, verifiability, and safety assurance for artificial intelligence systems;

(F) applications of machine learning and artificial intelligence systems to improve other scientific fields and engineering;

(G) model documentation, including performance metrics and constraints, measures of fairness, training and testing processes, and results;

(H) system documentation, including connections and dependencies within and between systems, and complications that may arise from such connections; and

(I) all other areas deemed by the Director to be critical to the development and deployment of trustworthy artificial intelligence;


(2) produce curated, standardized, representative, high-value, secure, aggregate, and privacy protected data sets for artificial intelligence research, development, and use;

(3) support one or more institutes as described in section 9431(b) of this title for the purpose of advancing measurement science, voluntary consensus standards, and guidelines for trustworthy artificial intelligence systems;

(4) support and strategically engage in the development of voluntary consensus standards, including international standards, through open, transparent, and consensus-based processes; and

(5) enter into and perform such contracts, including cooperative research and development arrangements and grants and cooperative agreements or other transactions, as may be necessary in the conduct of the work of the National Institute of Standards and Technology and on such terms as the Director considers appropriate, in furtherance of the purposes of this division.

(c) Risk management framework

Not later than 2 years after January 1, 2021, the Director shall work to develop, and periodically update, in collaboration with other public and private sector organizations, including the National Science Foundation and the Department of Energy, a voluntary risk management framework for trustworthy artificial intelligence systems. The framework shall—

(1) identify and provide standards, guidelines, best practices, methodologies, procedures, and processes for—

(A) developing trustworthy artificial intelligence systems;

(B) assessing the trustworthiness of artificial intelligence systems; and

(C) mitigating risks from artificial intelligence systems;


(2) establish common definitions and characterizations for aspects of trustworthiness, including explainability, transparency, safety, privacy, security, robustness, fairness, bias, ethics, validation, verification, interpretability, and other properties related to artificial intelligence systems that are common across all sectors;

(3) provide case studies of framework implementation;

(4) align with international standards, as appropriate;

(5) incorporate voluntary consensus standards and industry best practices; and

(6) not prescribe or otherwise require the use of specific information or communications technology products or services.

(d) Participation in standard setting organizations

(1) Requirement

The Institute shall participate in the development of standards and specifications for artificial intelligence.

(2) Purpose

The purpose of this participation shall be to ensure—

(A) that standards promote artificial intelligence systems that are trustworthy; and

(B) that standards relating to artificial intelligence reflect the state of technology and are fit-for-purpose and developed in transparent and consensus-based processes that are open to all stakeholders.

(e) Data sharing best practices

Not later than 1 year after January 1, 2021, the Director shall, in collaboration with other public and private sector organizations, develop guidance to facilitate the creation of voluntary data sharing arrangements between industry, federally funded research centers, and Federal agencies for the purpose of advancing artificial intelligence research and technologies, including options for partnership models between government entities, industry, universities, and nonprofits that incentivize each party to share the data they collected.

(f) Best practices for documentation of data sets

Not later than 1 year after January 1, 2021, the Director shall, in collaboration with other public and private sector organizations, develop best practices for datasets used to train artificial intelligence systems, including—

(1) standards for metadata that describe the properties of datasets, including—

(A) the origins of the data;

(B) the intent behind the creation of the data;

(C) authorized uses of the data;

(D) descriptive characteristics of the data, including what populations are included and excluded from the datasets; and

(E) any other properties as determined by the Director; and


(2) standards for privacy and security of datasets with human characteristics.
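[Editorial illustration, not part of the statute: the metadata properties enumerated in paragraph (f)(1) could be captured in a machine-readable record along the following lines. All field names and values below are hypothetical; the statute prescribes no particular format.]

```python
# Illustrative sketch only: one possible machine-readable record for the
# dataset metadata properties listed in subsection (f)(1). Field names and
# values are hypothetical, not prescribed by the statute.
import json

dataset_metadata = {
    "origins": "Collected from public weather-station sensors, 2018-2020",
    "intent": "Benchmark dataset for time-series forecasting research",
    "authorized_uses": ["research", "education"],
    "descriptive_characteristics": {
        "populations_included": ["stations in the continental United States"],
        "populations_excluded": ["stations in territories and possessions"],
    },
}

# Serialize so the record can travel alongside the dataset it describes.
print(json.dumps(dataset_metadata, indent=2))
```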

(g) Testbeds

In coordination with other Federal agencies as appropriate, the private sector, and institutions of higher education (as such term is defined in section 1001 of title 20), the Director may establish testbeds, including in virtual environments, to support the development of robust and trustworthy artificial intelligence and machine learning systems, including testbeds that examine the vulnerabilities and conditions that may lead to failure in, malfunction of, or attacks on such systems.
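[Editorial illustration, not part of the statute: at its simplest, a virtual testbed of the kind subsection (g) contemplates might probe whether a system's output remains stable under small, unexpected perturbations of its input. The toy classifier and probe below are hypothetical stand-ins, not any testbed specified by the statute.]

```python
# Illustrative sketch only: a toy "virtual testbed" that checks whether a
# model's decision is stable under small input perturbations, in the spirit
# of testbeds examining robustness to unexpected inputs.
import random

def toy_classifier(x: float) -> int:
    """A stand-in model: labels a reading 1 if it exceeds 0.5, else 0."""
    return 1 if x > 0.5 else 0

def robustness_probe(model, x: float, epsilon: float = 0.01, trials: int = 100) -> bool:
    """Return True if the model's label is unchanged for all sampled
    perturbations of x within +/- epsilon."""
    baseline = model(x)
    for _ in range(trials):
        perturbed = x + random.uniform(-epsilon, epsilon)
        if model(perturbed) != baseline:
            return False  # a nearby input flips the decision
    return True

# Inputs far from the decision boundary should be stable; inputs very close
# to the boundary may not be.
print(robustness_probe(toy_classifier, 0.9))
print(robustness_probe(toy_classifier, 0.501))
```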

(h) Authorization of appropriations

There are authorized to be appropriated to the National Institute of Standards and Technology to carry out this section—

(1) $64,000,000 for fiscal year 2021;

(2) $70,400,000 for fiscal year 2022;

(3) $77,440,000 for fiscal year 2023;

(4) $85,180,000 for fiscal year 2024; and

(5) $93,700,000 for fiscal year 2025.
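[Editorial observation, not stated in the statute: the authorization schedule corresponds to roughly 10 percent year-over-year increases from the fiscal year 2021 baseline, with the fiscal year 2024 and 2025 figures rounded to the nearest $10,000. A short calculation confirms this:]

```python
# Illustrative check: the amounts authorized in subsection (h) track 10%
# annual growth compounded from the fiscal year 2021 baseline, with later
# years rounded to the nearest $10,000 in the statutory text.
authorized = {
    2021: 64_000_000,
    2022: 70_400_000,
    2023: 77_440_000,
    2024: 85_180_000,
    2025: 93_700_000,
}

amount = authorized[2021]
for year in range(2022, 2026):
    amount *= 1.10  # 10% nominal increase over the prior fiscal year
    # Each statutory figure is within $10,000 of the compounded amount.
    assert abs(authorized[year] - amount) <= 10_000, year
print("All amounts within $10,000 of 10% annual growth")
```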