The White House Office of Science and Technology Policy (WHOSTP) is currently developing a bill of rights for artificial intelligence (AI). WHOSTP Science Advisor Eric Lander, along with other academics, government officials and professionals, is working on the initiative. Despite some opposition, an AI bill of rights could reduce algorithmic discrimination by establishing a national set of guidelines.


In particular, WHOSTP has issued a public request for information on technologies used to identify people and infer attributes, such as facial and voice recognition. The research for the bill starts here because these technologies are already widely used by individuals and companies. In addition, WHOSTP recently announced that it will launch a series of listening sessions and events next week to engage the American public in the process of developing a bill of rights.


Although AI has earned a bad reputation for misidentifying Black individuals and wrongly tying them to criminal databases, advocates have identified bias in AI and believe the technology can be used to improve racial equity.


The Good AI, a company creating the next generation of AI tech, highlighted Black women working to foster AI equity. Latanya Sweeney, Ph.D., a professor of government and technology in residence at Harvard University, launched a newly emerging area of study known as algorithmic fairness, the research of understanding and correcting bias. Ruha Benjamin, an associate professor of African American studies at Princeton University, founded the Just Data Lab, a lab rethinking and retooling the relationship between stories, technology and justice.


According to a VentureBeat report, other women are working to overturn historical patterns of abuse. Amazon Web Services (AWS) executives pushed back against AI researchers after a 2019 research paper revealed that AI facial analysis more readily misidentified dark-skinned women. More than 70 AI researchers defended the validity of the study, which found AI bias in the industry.


According to a Wired report, Black-led organizations are refusing funding from Google, citing the harm the company has inflicted in recent months on the communities those organizations serve. In a joint statement released in March, Black in AI, Queer in AI, and Widening NLP protested Google's treatment of its former Ethical AI team leaders, Timnit Gebru and Margaret Mitchell, and former HBCU recruiter April Christina Curley.


Companies are also taking their own measures, and holding Congress accountable, to minimize AI-generated bias affecting online users and employment.


According to a Guardian report, Meta, formerly known as Facebook, is the latest company to shut down its facial recognition system, deleting more than 1 billion faceprints used for photo tagging and citing concerns with the technology. The company discontinued the tool because it lacked the privacy and transparency controls needed to limit how users' faces are used. It is also stopping face-based automatic tagging suggestions and photo descriptions for the visually impaired. Meta's decision adds to the list of privacy issues the company is currently dealing with.


After last year's racial unrest, other companies such as Amazon and Microsoft stopped selling facial recognition technology to police, and say they will not resume until there is a national law governing its use. International Business Machines Corporation (IBM) decided to leave the facial recognition business until Congress creates better AI policies.


The U.S. is actively investing more in AI while taking longer to create laws that govern it.


Federal spending on AI rose to $1 billion in 2020, according to a Data Integration report. This is a signal that federal agencies favor AI advancement, which is projected to yield $13 trillion in economic benefits by 2030.


In September, the U.S. Department of Commerce announced that it would create a committee, the National Artificial Intelligence Advisory Committee, to advise federal agencies on AI research and development. The committee will focus on many issues around AI, such as the current state of U.S. competitiveness and how AI can expand opportunities for different geographic regions.


A Government Accountability Office report examines how many government agencies are broadening their use of facial recognition. However, the report cautions that agencies must improve trust in AI and develop equitable standards for using the technology.

Sponsored Series: This reporting is made possible by The Ewing Marion Kauffman Foundation


The Ewing Marion Kauffman Foundation is a private, nonpartisan foundation based in Kansas City, Mo., that seeks to build inclusive prosperity through a prepared workforce and entrepreneur-focused economic development. The Foundation uses its $3 billion in assets to change conditions, address root causes, and break down systemic barriers so that all people, regardless of race, gender, or geography, have the opportunity to achieve economic stability, mobility, and prosperity. For more information, visit www.kauffman.org and connect with us at www.twitter.com/kauffmanfdn and www.facebook.com/kauffmanfdn.