AI regulation is arguably lagging behind the advancement of technology. But many jurisdictions are developing laws and guidance that will affect how automated systems—which can include anything from HR software to fraud-screening products—are developed and used.

The US Office of Science and Technology Policy published a “Blueprint for an AI Bill of Rights” yesterday, setting out five principles to reduce the harms associated with AI systems. Anyone working on or using automated systems should familiarise themselves with this document. 

Like the UK’s recently proposed AI principles, the “Blueprint” is guidance—not binding law (yet). But these five principles represent good practice in AI development and governance and could help organisations avoid discriminatory or unsafe practices, privacy violations, and—perhaps—lawsuits.

US ‘AI Bill of Rights’: Why You Should Care About This Non-Binding Guidance

Principle 1: Safe and Effective Systems

According to the principle of “safe and effective systems”, automated systems should:

  • Be developed with consultation from diverse communities, stakeholders, and domain experts

  • Undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring

  • Not be designed with the intent or reasonably foreseeable possibility of endangering people

  • Be designed to proactively protect people from harms stemming from unintended yet foreseeable uses or impacts

  • Avoid inappropriate or irrelevant use of data in their design, development, and deployment

  • Be subject to independent and, where possible, public evaluation and reporting confirming the system is safe and effective

The Blueprint provides some illustrative examples of automated systems that have fallen foul of the “safe and effective” principle:

  • An AI model that overstated the likelihood of hospital patients having sepsis, causing “alert fatigue” among staff

  • Social media algorithms that punish “counter-speech” by victims of racist abuse

  • A device designed to help people track lost items being used for stalking
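
The Blueprint does not prescribe any particular tooling, but the requirements for pre-deployment testing and ongoing monitoring can be made concrete in a release process. The sketch below is purely illustrative Python: the metrics, thresholds, and function names are assumptions, not anything the Blueprint specifies.

    # Illustrative sketch: a pre-deployment gate plus an ongoing drift check.
    # All metrics and thresholds here are hypothetical assumptions.
    from dataclasses import dataclass

    @dataclass
    class EvaluationReport:
        accuracy: float            # measured on a held-out pre-deployment test set
        false_positive_rate: float
        alert_rate: float          # share of cases that trigger an alert (watch for "alert fatigue")

    def passes_pre_deployment_gate(report: EvaluationReport) -> bool:
        """Block deployment unless the system meets pre-agreed safety thresholds."""
        return (
            report.accuracy >= 0.90
            and report.false_positive_rate <= 0.05
            and report.alert_rate <= 0.20
        )

    def needs_review(baseline: EvaluationReport, live: EvaluationReport, tolerance: float = 0.05) -> bool:
        """Flag the system for human review if live behaviour drifts from the evaluated baseline."""
        return (
            abs(live.accuracy - baseline.accuracy) > tolerance
            or abs(live.alert_rate - baseline.alert_rate) > tolerance
        )

    if __name__ == "__main__":
        baseline = EvaluationReport(accuracy=0.93, false_positive_rate=0.04, alert_rate=0.12)
        live = EvaluationReport(accuracy=0.88, false_positive_rate=0.07, alert_rate=0.31)
        print("Deploy?", passes_pre_deployment_gate(baseline))
        print("Escalate for review?", needs_review(baseline, live))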

Principle 2: Algorithmic Discrimination Protections

Under the principle of “algorithmic discrimination protections”, designers, developers and deployers of automated systems should:

  • Take steps in the design, development and deployment of automated systems to protect people and communities from algorithmic discrimination

  • Use automated systems in an equitable way

  • Include proactive equality assessments as part of the system design

  • Ensure accessibility for people with disabilities throughout all stages of the product lifecycle

  • Carry out independent evaluation and provide plain-language reporting in the form of an algorithmic impact assessment, which should be made public wherever possible

The Blueprint provides some examples of discriminatory automated systems:

  • A credit screening algorithm that discriminated against loan applicants from historically black colleges and universities

  • An HR algorithm that downgraded resumes containing the word “women’s” (e.g. “women’s chess club captain”)

  • A model designed to predict whether students would drop out of school disproportionately identified Black students, resulting in those students being steered away from taking certain qualifications
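
A proactive equity assessment often starts with something as simple as comparing outcomes across groups. The sketch below is illustrative only: the sample data, group labels, and the “four-fifths” rule of thumb are assumptions, not requirements drawn from the Blueprint.

    # Illustrative sketch: a basic disparity check that could feed an algorithmic impact assessment.
    from collections import defaultdict

    def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
        """Compute the share of positive outcomes per group from (group, selected) pairs."""
        totals, selected = defaultdict(int), defaultdict(int)
        for group, was_selected in decisions:
            totals[group] += 1
            selected[group] += int(was_selected)
        return {g: selected[g] / totals[g] for g in totals}

    def disparate_impact_ratio(rates: dict[str, float]) -> float:
        """Ratio of the lowest to highest group selection rate; values below ~0.8 warrant investigation."""
        return min(rates.values()) / max(rates.values())

    if __name__ == "__main__":
        outcomes = (
            [("group_a", True)] * 45 + [("group_a", False)] * 55
            + [("group_b", True)] * 25 + [("group_b", False)] * 75
        )
        rates = selection_rates(outcomes)
        print(rates, "ratio:", round(disparate_impact_ratio(rates), 2))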

Principle 3: Data Privacy

The principle of “data privacy” suggests that:

  • Automated systems should implement privacy protections by design and enable them by default 

  • Only personal data that is “strictly necessary” for the specific context of the system should be collected or used

  • Designers, developers and deployers of automated systems should, “to the greatest extent possible”, seek permission and respect people’s decisions before collecting, using, transferring or deleting personal data

  • Where seeking permission to use data is not possible, alternative privacy by design methods should be used

  • Systems should not “obfuscate user choice or burden users with defaults that are privacy-invasive”

  • Consent requests should be brief, granular and easily understood, and should only be used to justify data collection if it can be “appropriately and meaningfully given”

  • Continuous surveillance and monitoring should not be deployed in certain domains, such as education, work or housing

  • Wherever possible, automated systems should provide access to reporting that confirms people’s choices about their data and provide an algorithmic impact assessment

  • Extra privacy protections should be employed in “sensitive domains”, such as health, employment, education, criminal justice and personal finance

The Blueprint provides some general examples of where data privacy might be an issue in automated systems:

  • An insurer collecting data from social media when deciding on life insurance rates

  • A data broker harvesting vast amounts of data and then suffering a data breach

  • A local housing association installing facial recognition to enable access to property and then disclosing data to the police
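
For engineering teams, “privacy by design and by default” usually begins with data minimisation and opt-in defaults. The sketch below is a hypothetical example: the field names, settings, and the list of “strictly necessary” fields are assumptions for a single imagined context, not anything the Blueprint defines.

    # Illustrative sketch: data minimisation plus privacy-invasive options disabled by default.
    from dataclasses import dataclass

    STRICTLY_NECESSARY_FIELDS = {"claim_id", "claim_amount", "policy_number"}

    @dataclass
    class PrivacySettings:
        # Privacy-invasive options stay off unless the person actively opts in.
        share_with_partners: bool = False
        use_for_profiling: bool = False
        retain_beyond_claim: bool = False

    def minimise(record: dict) -> dict:
        """Drop any field that is not strictly necessary for the stated purpose."""
        return {k: v for k, v in record.items() if k in STRICTLY_NECESSARY_FIELDS}

    if __name__ == "__main__":
        submitted = {
            "claim_id": "C-1042",
            "claim_amount": 1800,
            "policy_number": "P-77",
            "social_media_handle": "@example",  # irrelevant to the claim; discarded
        }
        print(minimise(submitted))
        print(PrivacySettings())  # all invasive options default to off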

Principle 4: Notice and Explanation

Some of the practical implications of the “notice and explanation” principle include:

  • Designers, developers and deployers of automated systems should provide generally accessible, plain language documentation explaining:

    • The overall system functioning

    • The role of automation in the system

    • The fact that automated systems are in use

    • The individual or organisation responsible for the system

    • The system’s possible outcomes

  • Notices should be kept up-to-date

  • Summary reports about how automated systems function should be publicly available wherever possible

The Blueprint provides some examples of where the “notice and explanation” principle has not been properly implemented:

  • A lawyer representing elderly people was not told (until they went to court) that a new AI system was the reason a client had been cut off from a healthcare programme

  • A parent was subject to a child welfare investigation as the result of an AI system—but hadn’t been informed that their data was being collected for this purpose

  • A system for providing benefits changed its criteria without notice and started rejecting recipients based on system errors—without explanation
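
One lightweight way to act on this principle is to generate a plain-language notice alongside every automated decision. The following is a hypothetical sketch: the organisation name, wording, and fields are assumptions rather than anything the Blueprint prescribes.

    # Illustrative sketch: a plain-language notice attached to each automated decision.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class AutomatedDecisionNotice:
        responsible_party: str
        system_purpose: str
        role_of_automation: str
        possible_outcomes: list[str]
        last_updated: date

        def plain_language(self) -> str:
            outcomes = ", ".join(self.possible_outcomes)
            return (
                f"An automated system operated by {self.responsible_party} was used to "
                f"{self.system_purpose}. {self.role_of_automation} "
                f"Possible outcomes are: {outcomes}. Notice last updated {self.last_updated:%d %B %Y}."
            )

    if __name__ == "__main__":
        notice = AutomatedDecisionNotice(
            responsible_party="Example Benefits Agency",
            system_purpose="check your benefits application against eligibility rules",
            role_of_automation="A caseworker reviews every refusal before it is final.",
            possible_outcomes=["approved", "referred to a caseworker", "refused"],
            last_updated=date(2022, 10, 4),
        )
        print(notice.plain_language())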

Principle 5: Human Alternatives, Consideration, and Fallback

The principle of “human alternatives, consideration and fallback” states that:

  • Automated systems should enable people to opt out, wherever appropriate, and provide access to human intervention

  • “Appropriateness” should be based on people’s reasonable expectations in a given context with a focus on protection from harm

  • Automated systems should include “fallback and escalation” processes in case the system fails, produces an error, or the subject of automated decision-making wishes to appeal

  • Access to human alternatives and fallback processes should be timely, effective, maintained, and not unreasonably burdensome

  • Automated systems in sensitive domains should be accompanied by human consideration in high-risk situations

  • Reports on human governance of automated systems should be made publicly available wherever possible

The Blueprint provides some examples of where human alternatives, consideration and fallback have not been properly implemented:

  • An automated employment benefits system required applicants to use a smartphone and did not provide a human alternative, resulting in people without smartphones being denied benefits

  • A fraud detection system incorrectly flagged insurance claims as fraudulent without providing for human intervention

  • A corporation used an automated system to manage HR processes, resulting in some workers being fired without the opportunity to appeal to a human
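
In practice, fallback and escalation can be as simple as routing low-confidence, appealed, or high-risk decisions to a human review queue. The sketch below is illustrative: the confidence threshold, queue, and decision structure are assumptions, not Blueprint requirements.

    # Illustrative sketch: a fallback and escalation path around an automated decision.
    from dataclasses import dataclass

    @dataclass
    class Decision:
        case_id: str
        outcome: str        # e.g. "approve" / "deny"
        confidence: float   # the model's own confidence estimate
        appealed: bool = False

    HUMAN_REVIEW_QUEUE: list[Decision] = []  # cases awaiting human consideration

    def route(decision: Decision, sensitive_domain: bool = False) -> str:
        """Send the case to a human if confidence is low, the person appeals,
        or an adverse decision falls in a sensitive, high-risk domain."""
        if decision.appealed or decision.confidence < 0.85 or (sensitive_domain and decision.outcome == "deny"):
            HUMAN_REVIEW_QUEUE.append(decision)
            return "escalated_to_human"
        return "automated_outcome_applied"

    if __name__ == "__main__":
        print(route(Decision("A-1", "approve", confidence=0.97)))
        print(route(Decision("A-2", "deny", confidence=0.91), sensitive_domain=True))
        print("Cases awaiting human review:", [d.case_id for d in HUMAN_REVIEW_QUEUE])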

Too Little, Too Late?

As the public becomes increasingly concerned about the impacts of automated systems—no-one wants to get fired or discriminated against by a robot—governments worldwide are keen to act to rein in some of the more harmful AI systems and use cases.

The UK and the EU already offer some protections against automated decision-making via the General Data Protection Regulation (GDPR). 

While the EU is set to build on the GDPR’s rules via the upcoming AI Act, the UK is considering rowing back these protections—and has proposed a “light touch” principles-based approach for regulating AI.

But whatever the regulatory developments, it’s clear that good AI governance is an increasingly important part of organisations’ ethics and compliance programmes—and voluntarily adopting principles such as those espoused in the Blueprint for an AI Bill of Rights is just a good idea.