CODY KELLER

The AI governance conversation has been running in the background for most organizations — something to monitor, something to address eventually, something for legal to sort out. That posture has an expiration date, and for many businesses, it’s August 2026.

The EU AI Act’s major provisions go fully into effect on August 2, 2026. Organizations that are not compliant by then face penalties of up to 35 million euros or 7% of global annual turnover, whichever is higher (Metricstream). For context, that’s a penalty structure designed to sting at the enterprise level, not a regulatory slap on the wrist.
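
To make the “whichever is higher” structure concrete, here is a minimal sketch in Python. The function name and the example turnover figure are illustrative assumptions; only the 35-million-euro and 7% ceilings come from the Act.

```python
def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    """EU AI Act ceiling for the most serious violations:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a company with EUR 2 billion in global turnover, the 7% prong dominates:
print(f"EUR {max_penalty_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```

The 7% prong overtakes the flat 35-million-euro floor once global turnover exceeds 500 million euros, which is why the structure bites hardest at enterprise scale.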

If your organization touches EU markets in any capacity and you haven’t started your AI compliance work, you are already behind.

The “It Doesn’t Apply to Us” Assumption Is Probably Wrong

The most common mistake US-based organizations make with the EU AI Act is assuming it doesn’t apply because they’re not headquartered in Europe.

Just like GDPR, the EU AI Act applies to organizations based in the US and other non-EU countries if AI systems or outputs are used within the EU. This includes AI services hosted in the US but accessible to EU users, and systems where automated outputs are used in the EU (Bond, Schoeneck & King).

If your organization uses AI to screen candidates, score credit applications, make hiring recommendations, triage customer service requests, or support clinical decisions — and any of that activity touches EU residents — you are in scope. The test isn’t where your servers are. It’s where the output lands.

For US companies where EU revenue is a significant portion of total revenue, the prospect of being blocked from selling to 450 million EU consumers creates a stronger compliance incentive than the penalty calculations alone (Digital Applied).

The Deadline Timeline Is Already In Motion

The EU AI Act entered into force on August 1, 2024. Prohibited AI practices and AI literacy obligations applied from February 2, 2025. Rules for general-purpose AI models became applicable on August 2, 2025. The majority of remaining provisions, including those governing high-risk AI systems, become fully applicable on August 2, 2026 (European Commission).

This isn’t a future deadline. Two of the four major milestones have already passed. If your organization has been in a “watch and wait” posture, you may have been out of compliance with some provisions for over a year without knowing it.

Technical documentation packages for complex AI systems typically take three to six months to prepare properly. Companies that wait until mid-2026 to begin will not have enough time to complete conformity assessments before enforcement starts on August 2 (Digital Applied). The preparation window is now.
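
As a rough back-of-the-envelope check on that lead time, here is a sketch assuming 30-day months; the three-to-six-month range comes from the estimate above.

```python
from datetime import date, timedelta

ENFORCEMENT = date(2026, 8, 2)  # high-risk provisions fully applicable

def latest_start(prep_months: int) -> date:
    """Approximate latest kickoff for a documentation package
    that takes prep_months months to prepare."""
    return ENFORCEMENT - timedelta(days=30 * prep_months)

for months in (3, 6):
    print(f"{months}-month package: start by {latest_start(months)}")
# 3-month package: start by 2026-05-04
# 6-month package: start by 2026-02-03
```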

The US Regulatory Landscape Isn’t Waiting Either

The EU AI Act gets most of the attention, but US-based organizations are navigating a rapidly expanding patchwork of domestic requirements simultaneously.

Colorado’s AI Act takes effect on June 30, 2026, pushed back from its original February 2026 date, and targets high-risk automated decision-making systems affecting education, employment, financial services, healthcare, housing, insurance, and legal services. California now has 18 different AI bills signed into law. The Trump administration’s Executive Order 14179 signals a federal preference for innovation over regulation, but it does nothing to preempt state-level requirements, which continue to proliferate (Smith Anderson).

The result is a multi-jurisdictional compliance problem with no unified federal baseline to anchor it. Organizations operating across multiple US states while also serving EU markets are managing overlapping, sometimes conflicting requirements with different definitions of what “AI” even means.

In 2024 alone, US federal agencies introduced 59 AI-related regulations, more than double the year before, while legislative mentions of AI rose across 75 countries (Credo). The direction of travel is clear regardless of which political winds are blowing federally.

What GRC Professionals Need to Do Right Now

This is fundamentally a GRC problem before it’s a technology problem. You can’t govern what you haven’t inventoried, and most organizations haven’t inventoried their AI systems with regulatory compliance in mind.

Here’s where to start:

  • Build your AI system inventory. Document every AI tool in use across the organization — not just what IT deployed, but what business units are using independently. SaaS tools with AI features count. Vendor-embedded AI counts. Shadow AI is your biggest gap.
  • Classify by risk tier. The EU AI Act uses four tiers: unacceptable risk (prohibited), high risk (requires conformity assessment), limited risk (transparency obligations), and minimal risk (no specific obligations). Your inventory needs a risk classification against this framework; a minimal sketch of what such a record could look like follows this list.
  • Check for prohibited practices immediately. Certain AI uses have been banned since February 2025. If any of your systems use social scoring, certain biometric identification methods, or subliminal manipulation techniques, those need to stop — not by August, but now.
  • Assign ownership. AI governance without a named owner doesn’t get done. Someone needs accountability for the compliance program, with authority to engage legal, IT, and business unit leaders.
  • Map to NIST AI RMF. The NIST AI Risk Management Framework has evolved into the de facto US standard for AI governance (Sombra) and maps reasonably well to EU AI Act requirements. Building your internal framework around the NIST AI RMF gives you a foundation that works across jurisdictions.
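
As a starting point, here is a minimal sketch of what an inventory record and risk classification might look like in Python. All field names, example systems, and tier assignments are hypothetical illustrations, not a compliance determination.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four tiers, as summarized above."""
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment required"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

@dataclass
class AISystem:
    name: str
    business_owner: str          # a named owner, per the accountability point above
    vendor_embedded: bool        # SaaS and vendor AI features count too
    touches_eu_residents: bool   # the scope test: where the output lands
    tier: RiskTier

inventory = [
    AISystem("resume-screener", "HR Director", True, True, RiskTier.HIGH),
    AISystem("support-chatbot", "CX Lead", True, False, RiskTier.LIMITED),
]

# Prohibited practices must stop now, not by August:
must_stop_now = [s for s in inventory if s.tier is RiskTier.UNACCEPTABLE]
# High-risk systems touching EU residents need conformity assessments:
needs_assessment = [s for s in inventory
                    if s.tier is RiskTier.HIGH and s.touches_eu_residents]
```

Even a spreadsheet with these five columns beats no inventory; the point is a single source of truth that legal, IT, and business units all update.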

The organizations that will struggle most in 2026 and beyond are those that treated AI governance as a legal checkbox rather than an operational discipline. The ones that are building inventory, assigning ownership, and running risk classifications now are the ones that will have options when enforcement activity escalates — and it will escalate.


Discussion Questions

  1. Does your organization have a complete inventory of AI systems in use — including vendor-embedded AI and business-unit-deployed SaaS tools? If not, what’s blocking that work?
  2. If your organization touches EU markets, have you completed a risk classification of your AI systems under the EU AI Act’s four-tier framework? Do you know which of your systems would be considered high-risk?
  3. Who in your organization owns AI governance? If the answer is unclear, what does that tell you about your current compliance posture?
