New York Responsible AI Safety and Education Act (RAISE Act, A 9449)
Effective January 1, 2027
Overview
Requires developers of frontier AI models operating in New York to implement safety protocols, conduct impact assessments, and provide transparency disclosures. Modeled in part on California SB 53 with additional education and public awareness duties. Signed into law in late 2025. Takes effect January 1, 2027.
This is an AI-specific state law.
Who this applies to
This law applies to companies that develop or sell AI tools, models, or systems. If your company creates AI products that other businesses or consumers use, this law may apply to you.
AI categories covered
- General purpose AI
What this requires you to do
Risk management program required
Implement a risk management program. Maintain ongoing processes to identify, assess, and mitigate AI-related risks.
Impact assessment required
Complete an impact assessment. Document the potential risks and effects of your AI system on affected people.
Transparency notice required
Provide transparency notices. Inform affected individuals that AI is being used and how it influences decisions.
Record-keeping required
Maintain records. Keep documentation of your AI systems, decisions made, and compliance activities.
Enforcement and penalties
Enforced by the New York Attorney General. Penalties are assessed under New York consumer protection law.
Source
Read the full text
https://www.nysenate.gov/legislation/bills/2025/A9449
Always verify current language and amendments at the official source.