The US Regulatory Landscape for Cognitive Systems

The regulatory environment governing cognitive systems in the United States is fragmented across federal agencies, sector-specific statutes, and emerging executive actions, reflecting the absence of a single omnibus AI law comparable to the European Union's AI Act. This page maps the principal regulatory bodies, applicable legal frameworks, classification schemes, and decision points that determine how cognitive and AI-based systems are governed across healthcare, finance, transportation, and other critical sectors. The stakes are material: enforcement actions under existing statutes — including the Federal Trade Commission Act and the Equal Credit Opportunity Act — already reach cognitive system deployments, even without AI-specific legislation.


Definition and scope

For regulatory purposes, cognitive systems occupy an ill-defined but consequential space. The National Institute of Standards and Technology (NIST) defines artificial intelligence broadly in the AI Risk Management Framework (AI RMF 1.0, January 2023): a machine-based system that, for a given set of objectives, generates outputs such as predictions, recommendations, or decisions influencing real or virtual environments. Cognitive systems — encompassing machine learning models, natural language processing pipelines, reasoning engines, and decision-support architectures — fall squarely within that definition when deployed in consequential contexts.

Regulatory scope hinges on three axes:

  1. Sector of deployment — Healthcare systems are subject to FDA oversight; credit-scoring systems fall under the Consumer Financial Protection Bureau (CFPB) and Federal Reserve; autonomous vehicle perception systems face NHTSA jurisdiction.
  2. Consequentiality of output — Systems that produce "consequential decisions" affecting housing, employment, credit, or public safety attract heightened scrutiny under existing civil rights and consumer protection statutes.
  3. Data handling — Systems that process protected health information trigger HIPAA; those collecting data from children under 13 trigger COPPA enforcement by the FTC.

This three-axis framework is not codified in a single statute but is reconstructed from agency guidance documents, consent decrees, and enforcement actions across multiple federal bodies. The broader landscape of cognitive systems standards and frameworks provides parallel treatment of voluntary technical standards that often inform regulatory interpretations.
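
The framework lends itself to a simple triage sketch. The mapping below is assembled only from the examples on this page; the axis values, regime labels, and function names are illustrative assumptions, not an authoritative compliance tool.

    from dataclasses import dataclass, field

    # Illustrative triage over the three axes described above. Regime
    # labels and axis values are assumptions drawn from this page only.

    @dataclass
    class Deployment:
        sector: str                # e.g. "healthcare", "credit", "transportation"
        consequential: bool        # affects housing, employment, credit, or safety
        data_categories: set[str] = field(default_factory=set)

    SECTOR_REGIMES = {
        "healthcare": ["FDA (SaMD oversight)"],
        "credit": ["CFPB / ECOA / FCRA"],
        "transportation": ["NHTSA"],
    }

    def applicable_regimes(d: Deployment) -> list[str]:
        """Collect the regimes implicated along each axis."""
        regimes = list(SECTOR_REGIMES.get(d.sector, []))
        if d.consequential:
            regimes.append("civil rights / consumer protection statutes")
        if "PHI" in d.data_categories:          # protected health information
            regimes.append("HIPAA")
        if "child_under_13" in d.data_categories:
            regimes.append("COPPA (FTC enforcement)")
        return regimes

    # A consequential credit model that handles no special data categories:
    print(applicable_regimes(Deployment("credit", consequential=True)))
    # ['CFPB / ECOA / FCRA', 'civil rights / consumer protection statutes']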


How it works

The US regulatory model for cognitive systems operates through layered jurisdiction rather than unified licensing. Four primary mechanisms shape compliance obligations:

  1. Existing statute application — Agencies apply pre-AI statutes to cognitive system outputs. The FTC has invoked Section 5 of the FTC Act (15 U.S.C. § 45) against deceptive or unfair algorithmic practices. The Equal Employment Opportunity Commission (EEOC) applies Title VII of the Civil Rights Act of 1964 to AI-assisted hiring tools that produce disparate impact.

  2. Sector-specific agency rulemaking — The FDA's 2021 action plan for AI/ML-based Software as a Medical Device (SaMD) created a predetermined change control plan (PCCP) pathway. The CFPB's 2022 guidance (Circular 2022-03) asserts that adverse action notices under the Equal Credit Opportunity Act (ECOA) and Regulation B must explain algorithmic credit decisions in specific, not generic, terms (see the sketch after this list).

  3. Executive Orders and presidential directives — Executive Order 14110 (October 2023), "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," directed more than a dozen federal agencies to issue guidance, conduct risk assessments, and report on AI use within specified timeframes. The NIST AI RMF was explicitly referenced as a foundational voluntary standard.

  4. State-level regulation — Illinois enacted the Artificial Intelligence Video Interview Act (820 ILCS 42) requiring employer disclosure when AI analyzes video interviews. Colorado's SB 21-169 restricts insurer use of external data sources that function as proxies for protected classes. At least five states had enacted AI-specific legislation affecting employment or insurance by the end of 2023, with dozens of bills pending in additional state legislatures.
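
For item 2, a minimal sketch of what "specific, not generic" can mean in engineering terms: deriving reason statements from a model's signed per-feature score contributions. All feature names, contribution values, and reason phrasings below are hypothetical.

    # Hypothetical sketch for item 2: turning a model's signed per-feature
    # contributions into specific adverse-action reasons rather than
    # boilerplate. Feature names and reason text are invented for
    # illustration, not taken from any regulator's materials.

    REASON_TEXT = {
        "utilization_ratio": "Proportion of revolving credit in use is too high",
        "delinquency_count": "Number of recent delinquent accounts",
        "account_age_months": "Length of credit history is insufficient",
    }

    def adverse_action_reasons(contributions: dict[str, float],
                               top_n: int = 2) -> list[str]:
        """Return reasons for the top-N features that lowered the score.

        `contributions` maps feature name -> signed contribution, where a
        negative value pushed the applicant's score toward denial.
        """
        negative = [(name, c) for name, c in contributions.items() if c < 0]
        negative.sort(key=lambda pair: pair[1])   # most negative first
        return [REASON_TEXT.get(name, name) for name, _ in negative[:top_n]]

    print(adverse_action_reasons({
        "utilization_ratio": -0.31,
        "account_age_months": -0.12,
        "delinquency_count": 0.05,
    }))
    # ['Proportion of revolving credit in use is too high',
    #  'Length of credit history is insufficient']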

Explainability requirements and ethical deployment standards are increasingly embedded in agency guidance and, in some sectors, binding rulemaking.


Common scenarios

Regulatory exposure for cognitive systems clusters in four recurring deployment contexts:

  1. Clinical decision support and diagnostics: AI/ML-based software in healthcare faces FDA device classification and, where it processes protected health information, HIPAA obligations.
  2. Credit underwriting and scoring: algorithmic credit decisions carry adverse-action and explainability duties under ECOA and the FCRA, with the CFPB as the primary enforcer.
  3. Hiring and employment screening: AI-assisted selection tools draw EEOC scrutiny under Title VII and state disclosure requirements such as Illinois's Artificial Intelligence Video Interview Act.
  4. Autonomous vehicle perception and control: driving-automation systems fall under NHTSA's vehicle safety jurisdiction.


Decision boundaries

Determining which regulatory regime applies requires resolving three classification questions:

Software as a Medical Device vs. Clinical Decision Support — FDA's 2022 Clinical Decision Support guidance distinguishes regulated SaMD from non-device CDS using the four statutory criteria of section 520(o)(1)(E) of the FD&C Act; software avoids device classification only if it meets all four, including that the clinician can independently review the basis for its recommendations rather than rely primarily on them. Systems relying on reasoning and inference engines to produce autonomous diagnostic outputs are therefore more likely to be classified as regulated devices.
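
Because all four criteria must hold, the screen reduces to a conjunction. The boolean encoding below is a sketch of that structure under obvious simplifications; real determinations turn on intended use and labeling, not flags.

    # Sketch of the non-device CDS screen: all four statutory criteria
    # (FD&C Act sec. 520(o)(1)(E), paraphrased) must hold. Real
    # determinations turn on intended use and labeling, not booleans.

    from dataclasses import dataclass

    @dataclass
    class CDSFunction:
        analyzes_signals_or_images: bool      # criterion 1 fails if True
        displays_medical_information: bool    # criterion 2
        recommends_to_clinician: bool         # criterion 3
        basis_independently_reviewable: bool  # criterion 4

    def is_non_device_cds(f: CDSFunction) -> bool:
        """True only when all four criteria are satisfied."""
        return (not f.analyzes_signals_or_images
                and f.displays_medical_information
                and f.recommends_to_clinician
                and f.basis_independently_reviewable)

    # An autonomous diagnostic engine whose basis the clinician cannot
    # independently review fails criterion 4:
    print(is_non_device_cds(CDSFunction(False, True, True, False)))  # False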

Automated decision-making vs. human-in-the-loop — CFPB and EEOC guidance both distinguish between systems that make final determinations and those that surface scored recommendations for human review. The former carry heavier explainability and audit obligations. This boundary directly intersects with human-cognitive system interaction design choices.
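
One way to make this boundary operational is to record the decision mode explicitly and attach obligations to it. The enum and obligation lists below are a hypothetical summary of this page's description, not agency-issued requirements.

    from enum import Enum

    # Hypothetical encoding of the decision-mode boundary; the obligation
    # lists summarize this page's description, not any agency checklist.

    class DecisionMode(Enum):
        AUTOMATED_FINAL = "system issues the final determination"
        HUMAN_IN_THE_LOOP = "system scores; a human makes the final call"

    def review_obligations(mode: DecisionMode) -> list[str]:
        if mode is DecisionMode.AUTOMATED_FINAL:
            return ["specific explanation of each decision", "full audit trail"]
        return ["documented human review", "periodic disparate-impact testing"]

    print(review_obligations(DecisionMode.AUTOMATED_FINAL))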

Consumer-facing vs. internal operational systems — FTC enforcement has primarily targeted consumer-facing systems, while internal procurement, logistics, or fraud-detection systems face lighter direct regulatory pressure, though employment law and data protection obligations still apply.

The complete reference index for cognitive systems topics, including foundational definitions and sector-specific applications, is maintained at the Cognitive Systems Authority index.

