Executive Order 14281: Restoring Equality of Opportunity and Meritocracy
Effective date
Penalty
No direct private penalties — executive order changes federal enforcement posture, not private obligations. Strategic risk: private Title VII litigation and state AI law enforcement remain active.
Obligations mapped
4 obligations
Overview
Directs federal agencies to deprioritize disparate-impact enforcement across civil rights statutes (Title VII, Title VI, ECOA, Fair Housing Act). Affects AI-driven hiring, lending, housing, and insurance decisions. The Attorney General is directed to assess and potentially preempt state laws imposing disparate-impact liability (Section 7(a)). Companies remain exposed to private Title VII litigation and to state AI laws (CO AI Act, IL HRA AI, NYC LL 144) that codify disparate-impact standards.
This is federal enforcement guidance.
Who this applies to
This regulation applies to the following roles:
- Developers of covered AI systems
- Deployers and users of covered AI systems
Jurisdiction: United States federal law
This regulation applies both to companies that build AI products and to companies that use AI tools from other vendors.
AI categories covered
- Employment and hiring
- Consumer-facing AI
- Healthcare AI
- Financial services AI
- General purpose AI
What this requires you to do
4 obligations identified from statutory analysis. Source provisions:
- Sections 5(b)(ii), 7(a)
- Implicit from Sections 4 and 7; derived from practitioner consensus
- Sections 2, 4
- Section 7(a); supplemented by EO 14365 Sections 5, 6, 7, and 8
Regulation summaries are simplified for readability and may not capture every nuance of the underlying statute. Verify important details against primary sources linked on this page.
Enforcement and penalties
No direct private penalties — executive order changes federal enforcement posture, not private obligations. Strategic risk: private Title VII litigation and state AI law enforcement remain active.
Penalty amounts are based on statutory text and may be subject to adjustment, judicial interpretation, or enforcement discretion.
Legislative history
- Signed: EO 14365, extending the preemption framework with an AI Litigation Task Force and a Commerce Department state-law evaluation
- Guidance issued: EEOC Disparate Impact Rule directive closing pending disparate-impact-only charges by Sep 30, 2025
- Signed: EO 14281, signed by President Trump, directing federal agencies to deprioritize disparate-impact enforcement
Related regulations
- In Effect (Federal)
EEOC Guidance on AI in Employment Selection
EEOC technical assistance documents explain how existing Title VII and ADA obligations apply to AI and algorithmic employment tools. Not binding regulation, but signals enforcement priorities. Employers are liable for adverse impact from AI tools even when tools are designed by third-party vendors. Requires adverse impact analysis per UGESP four-fifths rule. ADA prohibits AI tools that screen out individuals with disabilities or make pre-offer disability inquiries.
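The UGESP four-fifths rule flags potential adverse impact when a group's selection rate is less than 80% of the highest group's rate. The sketch below is illustrative only (the function name and group labels are hypothetical); a real adverse-impact analysis also involves statistical significance testing and legal review:

```python
def four_fifths_check(selected: dict, applicants: dict) -> dict:
    """Illustrative UGESP four-fifths (80%) rule check.

    selected and applicants map group label -> counts.
    """
    # Selection rate per group: selected / applicants
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())  # highest-rate group is the benchmark
    return {
        g: {
            "rate": r,
            "impact_ratio": r / top,          # ratio to benchmark group
            "adverse_impact": r / top < 0.8,  # four-fifths threshold
        }
        for g, r in rates.items()
    }

result = four_fifths_check(
    selected={"group_a": 50, "group_b": 20},
    applicants={"group_a": 100, "group_b": 80},
)
# group_a rate 0.50, group_b rate 0.25 -> group_b impact ratio 0.5, flagged
```

Under these hypothetical numbers, group_b's selection rate (0.25) is half the benchmark rate (0.50), so the 0.8 threshold flags it for further review.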
- In Effect (Federal)
FTC Enforcement Policy on AI and Algorithmic Fairness
FTC enforces Section 5 of the FTC Act against deceptive and unfair AI practices. Key areas: unsubstantiated AI marketing claims, AI products harmful to children, discriminatory AI outcomes, and deceptive AI-powered services. Operation AI Comply (September 2024) targeted five companies simultaneously. Algorithmic disgorgement remedy requires deletion of AI models trained on improperly collected data. Administration change in 2025 narrowed speculative risk enforcement but maintained fraud and misrepresentation focus.
- Revoked (Federal)
Executive Order 14110 on AI (Revoked)
Established federal policy priorities for AI safety, security, and rights protections across agencies. Directed agencies to issue additional standards, procurement rules, and risk controls. Revoked by Executive Order 14148 on January 20, 2025. Listed for historical reference. Key provisions revoked include NIST AI safety testing requirements, reporting requirements for dual-use foundation models, and watermarking mandates. However, NIST work products developed under EO 14110 (AI RMF, GenAI Profile) persist as voluntary frameworks.
- In Effect (Federal)
DOJ AI Litigation Task Force
Coordinates federal civil litigation strategy on AI-related matters across the Department of Justice. Executive orders cannot preempt state law; only Congress or the courts can. The Task Force is authorized to file lawsuits challenging state laws but, as of April 2026, has not filed any. Congress has rejected federal preemption twice: a 99-1 Senate vote in July 2025, and preemption language dropped from the NDAA in December 2025.
- In Effect (Federal Guidance)
SEC AI Guidance in Financial Services
SEC enforces existing fiduciary duties and disclosure requirements as applied to AI. Pursuing AI washing enforcement against companies overstating AI capabilities in securities filings. Proposed rule on predictive data analytics (2023-17958) unlikely to be finalized under current administration.
- In Effect (Federal Guidance)
FDA AI/ML Medical Device Framework
FDA requires pre-market review (510(k), De Novo, PMA) for AI/ML-based software that meets the definition of a medical device. Over 1,000 AI/ML-enabled devices have been authorized as of 2025. Includes a predetermined change control plan pathway for adaptive AI/ML devices. This is the most mature federal AI regulatory framework: sector-specific, and operating for years.
- In Effect (Federal Guidance)
HUD AI Guidance in Housing
Fair Housing Act disparate impact standard applies to AI-driven tenant screening, lending algorithms, and property valuations. HUD 2023 disparate impact rule (reinstated) allows challenges to facially neutral AI practices with discriminatory effects. Meta 2022 settlement over AI ad targeting in housing is a key precedent. Disparate impact rule status under Trump administration should be monitored.
- In Effect (Federal Guidance)
DOL AI in Workplace Guidance
Non-binding principles for AI in the workplace covering transparency, human oversight, informed consent, data protection, non-discrimination, worker voice, and compliance with existing labor law. Issued under Biden DOL. Status under Trump administration uncertain. However, underlying labor law obligations persist.
- Upcoming (Federal)
TAKE IT DOWN Act (S. 146)
Requires covered online platforms to remove reported nonconsensual intimate imagery, including AI-generated deepfakes, within a short deadline after a valid notice. Dual effective dates: criminal provisions effective May 19, 2025 (date signed into law). Platform compliance deadline: May 19, 2026 (one year after signing). First federal law limiting the use of AI in ways harmful to individuals. Covers both authentic NCII and AI-generated deepfakes. Does not preempt state laws. FTC jurisdiction extended to nonprofit entities. First and only enacted federal AI-specific law signed by the Trump administration. Bipartisan 409-2 House vote, unanimous Senate passage.
- In Effect (Framework)
NIST AI Risk Management Framework (AI RMF 1.0)
NIST AI RMF is a voluntary framework used as a practical benchmark by regulators and lawmakers. NIST released AI RMF 2.0 in February 2024, building on early adoption experiences and adapting to generative AI paradigms. Companion documents include the AI RMF Playbook and Generative AI Profile (NIST AI 600-1), developed under EO 14110, which persists as a voluntary framework even though EO 14110 was revoked. State laws that reference NIST as a safe harbor or affirmative defense include Texas TRAIGA (HB 149), Tennessee TIPA, and Montana Right to Compute Act (SB 212). Colorado SB24-205 NIST-aligned controls remain useful historical and reusable governance evidence after SB26-189, but they should not be described as the current Colorado ADMT minimum-law safe harbor without legal review. Alignment with NIST AI RMF increasingly affects legal exposure under these state laws.