High-Risk AI Systems
High-risk AI systems require extra scrutiny during the procurement process.
High-Risk Systems
AI systems that can negatively affect a person's safety or fundamental rights are considered high risk. Examples include:
- Healthcare: Services, devices, workforce planning, supply distribution, data privacy, etc.
- Financial: Creditworthiness, background checks, interest rate determinations, etc.
- Education: Acceptance criteria, curricula development, student monitoring, data privacy, etc.
- Housing: Quality of living, eligibility, background checking, rent escalation determination, etc.
- Employment: Hiring, pay determination, succession planning, disciplinary action, etc.
- Government Benefits: Welfare eligibility, Medicare benefits determination, Head Start eligibility, etc.
- Law Enforcement: Facial recognition, license plate readers, etc.
- Public Services: Service area level-of-service and staffing determination, equipment determination, etc.
- Biometric: Data privacy protection of sensitive personal data (e.g., retina scans, fingerprints, voice prints)
- Justice / Legal: Algorithmic sentencing, recidivism determinations, legal defense generation, etc.
- Utilities: Algorithmic regulation of harmful agents without proper safety features, etc.
- Safety Components: Autonomous operation of AI-powered machinery without proper safety features, etc.
- Immigration: Facial recognition, GPS tracking, eligibility determinations, etc.
- Critical Infrastructure: Service area level-of-service and staffing determination, equipment determination, etc.
Potential Harms from High-Risk Systems
High-risk systems have the potential to impact a person's safety, fundamental human rights, and dignity. While these systems can deliver significant advantages in efficiency and consistency of decision-making, AI researchers have identified a variety of harms that require attention when deploying them. Unfortunately, many high-risk systems incorporate systemic bias that algorithmic computation can scale in ways a single biased human cannot. Without proper governance and risk-mitigation practices, this computational power can produce rights-impacting and other life-altering outcomes for people, as noted below.
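To make the scaling concern concrete, the hypothetical sketch below shows how a single biased rule, here a penalty tied to a proxy variable such as ZIP code, produces a population-level disparity once it is applied automatically to every applicant. All names, weights, thresholds, and data are invented for illustration and do not represent any real system or dataset.

```python
# Illustrative sketch only: a hypothetical eligibility score that relies on a
# proxy variable (ZIP code), showing how one biased rule, applied
# algorithmically, reaches every applicant it touches.

import random

random.seed(0)  # reproducible synthetic data

# Hypothetical applicant records: income is the legitimate signal,
# zip_code is a proxy that happens to correlate with a protected group.
applicants = [
    {"id": i,
     "income": random.randint(15_000, 60_000),
     "zip_code": random.choice(["A", "B"])}
    for i in range(10_000)
]

def eligibility_score(applicant):
    """Hypothetical scoring rule. The ZIP-code penalty is the embedded bias:
    it lowers the score for everyone in ZIP 'B' regardless of need."""
    score = applicant["income"] / 60_000          # legitimate-looking factor
    if applicant["zip_code"] == "B":              # proxy-variable penalty
        score -= 0.15
    return score

THRESHOLD = 0.5  # arbitrary cutoff for this illustration

decisions = {"A": {"approved": 0, "denied": 0},
             "B": {"approved": 0, "denied": 0}}

for a in applicants:
    outcome = "approved" if eligibility_score(a) >= THRESHOLD else "denied"
    decisions[a["zip_code"]][outcome] += 1

# The disparity appears at population scale, not one case at a time.
for zone, counts in decisions.items():
    total = counts["approved"] + counts["denied"]
    print(f"ZIP {zone}: {counts['approved']}/{total} approved "
          f"({counts['approved'] / total:.0%})")
```

Running the sketch shows a noticeably lower approval rate for applicants in the penalized ZIP code, even though the rule never references a protected attribute directly, which is the mechanism behind several of the harms listed below (for example, "reduction in benefits due to proxy variables").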
Health / Physical
- Injury / death
- Exposure to unhealthy agents
- AI-facilitated violence
- Medical misdiagnosis by AI
- Overreliance on AI safety monitors / features
- Inadequate fail-safes
- AI error / damage to critical infrastructure
Economic Loss
- Reduction in benefits due to proxy variables
- Interest rate discrimination
- Price discrimination
- Devaluation of individual occupation(s)
- Atrophy / degradation of human skills in favor of AI skills, resulting in lower wages
- Job simplification resulting in lower wages
Emotional / Psych
- Discrimination
- Overreliance on automation
- Loss of autonomy
- Loss of agency
- Sentiment analysis
- Distortion of reality
- Identity misclassification
- Attention hijacking
- AI as a human agent to carry out human-to-AI interactions in high-risk domains
Loss of Privacy
- Unauthorized monitoring
- False accusations
- Misattribution
- Social control
- Social scoring
- Homogeneity
- Loss of effective remedy
Loss of Opportunity
- Medical / healthcare
- Employment
- Housing
- Education
- Credit
- Insurance
- Benefits
- Utilities
- Critical services
Loss of Liberty
- Privacy violation
- Loss of dignity
- Loss of anonymity
- Required participation / forced association
- Disproportionality of information retained in a permanent record