The US Regulatory Landscape for Cognitive Systems
The regulatory environment governing cognitive systems in the United States is fragmented across federal agencies, sector-specific statutes, and emerging executive actions, reflecting the absence of a single omnibus AI law comparable to the European Union's AI Act. This page maps the principal regulatory bodies, applicable legal frameworks, classification schemes, and decision points that determine how cognitive and AI-based systems are governed across healthcare, finance, transportation, and other critical sectors. The stakes are material: enforcement actions under existing statutes — including the Federal Trade Commission Act and the Equal Credit Opportunity Act — already reach cognitive system deployments, even without AI-specific legislation.
Definition and scope
For regulatory purposes, cognitive systems occupy an ill-defined but consequential space. The National Institute of Standards and Technology (NIST) defines artificial intelligence broadly in the AI Risk Management Framework (AI RMF 1.0, January 2023) as engineered or machine-based systems that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. Cognitive systems — encompassing machine learning models, natural language processing pipelines, reasoning engines, and decision-support architectures — fall squarely within that definition when deployed in consequential contexts.
Regulatory scope hinges on three axes:
- Sector of deployment — Healthcare systems are subject to FDA oversight; credit-scoring systems fall under the Consumer Financial Protection Bureau (CFPB) and Federal Reserve; autonomous vehicle perception systems face NHTSA jurisdiction.
- Consequentiality of output — Systems that produce "consequential decisions" affecting housing, employment, credit, or public safety attract heightened scrutiny under existing civil rights and consumer protection statutes.
- Data handling — Systems that process protected health information trigger HIPAA; those collecting data from children under 13 trigger COPPA enforcement by the FTC.
This three-axis framework is not codified in a single statute but is reconstructed from agency guidance documents, consent decrees, and enforcement actions across multiple federal bodies. The broader landscape of cognitive systems standards and frameworks provides parallel treatment of voluntary technical standards that often inform regulatory interpretations.
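The three-axis framework can be made concrete with a small sketch. The mapping below is illustrative only: the `Deployment` fields and the regulator lists are assumptions drawn from this page's examples, not a codified legal test.

```python
from dataclasses import dataclass

# Assumed sector-to-regulator mapping, taken from the examples above.
SECTOR_REGULATORS = {
    "healthcare": ["FDA"],
    "credit": ["CFPB", "Federal Reserve"],
    "autonomous_vehicles": ["NHTSA"],
}

@dataclass
class Deployment:
    sector: str
    consequential_decision: bool  # affects housing, employment, credit, or public safety
    handles_phi: bool             # processes protected health information
    collects_child_data: bool     # collects data from children under 13

def applicable_regimes(d: Deployment) -> list[str]:
    """Return the regulatory regimes suggested by the three axes."""
    regimes = list(SECTOR_REGULATORS.get(d.sector, []))
    if d.consequential_decision:
        regimes.append("civil rights / consumer protection statutes")
    if d.handles_phi:
        regimes.append("HIPAA")
    if d.collects_child_data:
        regimes.append("COPPA (FTC)")
    return regimes
```

For a diagnostic tool handling patient records, `applicable_regimes(Deployment("healthcare", True, True, False))` would surface FDA oversight, consequential-decision scrutiny, and HIPAA, matching how the three axes compound in practice.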
How it works
The US regulatory model for cognitive systems operates through layered jurisdiction rather than unified licensing. Four primary mechanisms shape compliance obligations:
- Existing statute application — Agencies apply pre-AI statutes to cognitive system outputs. The FTC has invoked Section 5 of the FTC Act (15 U.S.C. § 45) against deceptive or unfair algorithmic practices. The Equal Employment Opportunity Commission (EEOC) applies Title VII of the Civil Rights Act of 1964 to AI-assisted hiring tools that produce disparate impact.
- Sector-specific agency rulemaking — The FDA's 2021 action plan for AI/ML-based Software as a Medical Device (SaMD) created a predetermined change control plan (PCCP) pathway. The CFPB issued guidance in 2022 asserting that adverse action notices under the Equal Credit Opportunity Act (ECOA) and Regulation B must explain algorithmic credit decisions in specific, not generic, terms.
- Executive Orders and presidential directives — Executive Order 14110 (October 2023), "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," directed more than a dozen federal agencies to issue guidance, conduct risk assessments, and report on AI use within specified timeframes. The NIST AI RMF was explicitly referenced as a foundational voluntary standard.
- State-level regulation — Illinois enacted the Artificial Intelligence Video Interview Act (820 ILCS 42) requiring employer disclosure when AI analyzes video interviews. Colorado's SB 21-169 restricts insurer use of external data sources that function as proxies for protected classes. At least five states had enacted AI-specific legislation affecting employment or insurance by the end of 2023, with dozens of bills pending in additional state legislatures.
Explainability requirements and ethical deployment standards are increasingly embedded in agency guidance and, in some sectors, binding rulemaking.
Common scenarios
Regulatory exposure for cognitive systems clusters in four recurring deployment contexts:
- Healthcare diagnostics — An AI model assisting radiologists in flagging anomalies qualifies as a medical device under 21 U.S.C. § 321(h) if it meets the intended use criteria. FDA clearance via the 510(k) pathway or De Novo classification is required before commercial deployment. As of 2023, FDA had authorized more than 520 AI/ML-enabled medical devices (FDA AI/ML Action Plan).
- Consumer credit decisions — Automated underwriting systems using machine learning must produce adverse action notices that satisfy FCRA § 615(a) and ECOA/Reg B specificity requirements. The CFPB's 2022 circular explicitly rejected the use of "complex algorithm" as a permissible reason code.
- Hiring and workforce management — AI resume-screening and scheduling tools face EEOC enforcement if they select applicants of a protected race, sex, or national origin at less than four-fifths (80%) of the selection rate of the highest-selected group, the adverse-impact threshold of the Uniform Guidelines on Employee Selection Procedures (29 C.F.R. Part 1607).
- Critical infrastructure and national security — Systems embedded in energy grids, water systems, or financial market infrastructure face oversight from sector-specific regulators (FERC, CISA, SEC) and may trigger mandatory incident reporting under the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA).
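The four-fifths rule in the hiring scenario reduces to simple arithmetic: compare each group's selection rate to the rate of the group selected most often, and flag any ratio below 0.8. A minimal sketch (the function name and dict-based interface are illustrative, not from the Uniform Guidelines themselves):

```python
def four_fifths_check(selection_rates: dict[str, float]) -> tuple[float, bool]:
    """Screen selection rates for adverse impact under the 4/5ths rule.

    Under the Uniform Guidelines (29 C.F.R. Part 1607), a selection rate
    for any group that is less than 4/5 (80%) of the rate for the group
    with the highest rate is generally regarded as evidence of adverse
    (disparate) impact.

    Returns the worst-case ratio and whether it falls below the threshold.
    """
    highest = max(selection_rates.values())
    worst_ratio = min(rate / highest for rate in selection_rates.values())
    return worst_ratio, worst_ratio < 0.8  # True => potential adverse impact

# Example: group A selected at 60%, group B at 40%.
# Ratio = 0.40 / 0.60 = 2/3, below 0.8, so the tool is flagged.
ratio, flagged = four_fifths_check({"A": 0.60, "B": 0.40})
```

Note that the Guidelines treat this as a screening rule of thumb, not a conclusive legal determination; statistical significance and sample size also matter in enforcement practice.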
Decision boundaries
Determining which regulatory regime applies requires resolving three classification questions:
Software as a Medical Device vs. Clinical Decision Support — FDA's 2022 Clinical Decision Support guidance distinguishes regulated SaMD from non-device CDS using a four-factor test centered on whether clinician review is integral to the decision. Systems relying on reasoning and inference engines to produce autonomous diagnostic outputs are more likely to be classified as regulated devices.
Automated decision-making vs. human-in-the-loop — CFPB and EEOC guidance both distinguish between systems that make final determinations and those that surface scored recommendations for human review. The former carry heavier explainability and audit obligations. This boundary directly intersects with human-cognitive system interaction design choices.
Consumer-facing vs. internal operational systems — FTC enforcement has primarily targeted consumer-facing systems, while internal procurement, logistics, or fraud-detection systems face lighter direct regulatory pressure, though employment law and data protection obligations still apply.
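The three boundary questions above can be expressed as a rough triage sketch. The boolean flags and the output labels are simplifications of the cited guidance, assumed here purely for illustration, and carry no legal weight:

```python
def triage(clinician_review_integral: bool,
           human_in_the_loop: bool,
           consumer_facing: bool) -> dict[str, str]:
    """Map the three classification questions to rough regulatory postures.

    Illustrative only: real classification under FDA's CDS guidance, CFPB/EEOC
    guidance, and FTC enforcement practice turns on far more detailed criteria.
    """
    return {
        # SaMD vs. CDS: clinician review being integral points toward non-device CDS.
        "fda_status": ("non-device CDS" if clinician_review_integral
                       else "likely regulated SaMD"),
        # Final determinations carry heavier explainability and audit obligations.
        "decision_role": ("scored recommendation for human review"
                          if human_in_the_loop
                          else "automated final determination"),
        # FTC enforcement has primarily targeted consumer-facing systems.
        "ftc_exposure": ("higher (consumer-facing)" if consumer_facing
                         else "lower direct pressure (internal system)"),
    }
```

An autonomous diagnostic engine sold directly to patients, `triage(False, False, True)`, would land on the heavier side of all three boundaries.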
The complete reference index for cognitive systems topics, including foundational definitions and sector-specific applications, is maintained at the Cognitive Systems Authority index.
References
- NIST AI Risk Management Framework (AI RMF 1.0) — National Institute of Standards and Technology, January 2023
- FDA Artificial Intelligence and Machine Learning in Software as a Medical Device — U.S. Food and Drug Administration
- Executive Order 14110 on Safe, Secure, and Trustworthy AI — White House, October 2023
- CFPB Circular 2022-03: Adverse Action Notification Requirements — Consumer Financial Protection Bureau
- EEOC Uniform Guidelines on Employee Selection Procedures, 29 C.F.R. Part 1607 — Equal Employment Opportunity Commission
- FTC Act Section 5, 15 U.S.C. § 45 — Federal Trade Commission
- NIST AI Resource Center — National Institute of Standards and Technology