California Finalizes CCPA Risk Assessment and AI Regulations

California just did what California does best: it dropped another giant rulebook on the tech and data world.
This time, it’s not just about cookies or privacy notices – it’s about how companies use artificial intelligence,
automated decision-making tools, and high-risk data practices, and how they prove those uses are actually responsible.

After years of drafts, public meetings, and more acronyms than anyone asked for, the California Privacy Protection
Agency (CPPA) has finalized major regulations under the California Consumer Privacy Act (CCPA). These rules cover
automated decision-making technology (ADMT), detailed privacy risk assessments, and mandatory cybersecurity audits.
They were approved by the Office of Administrative Law on September 23, 2025, and begin taking effect on
January 1, 2026, with phased compliance dates that stretch into the next few years.

If your organization touches California consumer data (and many do, whether they like it or not), these CCPA AI
regulations are about to reshape how you build products, run analytics, deploy AI, and document risk. Let’s break
down what’s actually in these rules, what counts as “automated decision-making,” and what businesses need to do now
to avoid turning their compliance program into a full-time fire drill.

How We Got Here: From CCPA to Full-Blown AI Governance

The original CCPA, passed in 2018, focused on giving consumers transparency and control over how businesses collect,
sell, and share their data. Then came the California Privacy Rights Act (CPRA), a 2020 ballot measure that amended
the CCPA, created the CPPA, and explicitly authorized regulations on automated decision-making, risk assessments,
and cybersecurity audits.

The CPPA spent years drafting and revising these rules. On July 24, 2025, the agency adopted a comprehensive package
covering:

  • Automated decision-making technology (ADMT)
  • Privacy risk assessments for high-risk processing
  • Mandatory cybersecurity audits for certain businesses
  • Updates to existing CCPA rules, including opt-out signals and service provider obligations

The package was then approved by the Office of Administrative Law on September 23, 2025, making the rules final.

The big picture: California now has one of the first broadly scoped, consumer-facing AI governance frameworks in the
United States, with rules that will influence how AI and data governance are implemented far beyond state lines.

What the New CCPA AI and Risk Assessment Rules Actually Do

Defining Automated Decision-Making Technology (ADMT)

At the heart of the regulations is a deceptively simple question: what counts as automated decision-making technology?

The final CCPA regulations define ADMT as technology that processes personal information and uses computation to
replace or substantially replace human decision-making. If a business relies on the tool’s output to make
a decision without meaningful human involvement, that’s ADMT.

ADMT includes “profiling,” such as tools that evaluate a person’s behavior, interests, or location to predict or
influence outcomes. Not all AI is ADMT, but a lot of practical, real-world AI use cases fall squarely into this
bucket – especially when those tools are used to make important calls about people’s lives.

Under the final rules, businesses are generally subject to ADMT obligations when the technology is used to make
“significant decisions,” including those affecting access to:

  • Employment opportunities and promotions
  • Financial services and credit
  • Housing
  • Healthcare and insurance
  • Education and key services

During the rulemaking process, some lower-risk uses, such as certain first-party advertising and basic analytics,
were scaled back or removed from the most rigorous requirements.

Risk Assessments: When and Why They’re Required

The new regulations create a formal privacy risk assessment requirement for certain types of processing that present
a “significant risk” to consumer privacy or data security. Businesses must perform these risk assessments not just
once but on an ongoing basis, and in particular before starting any new high-risk activity.

Examples of processing activities that may trigger a CCPA risk assessment include:

  • Using ADMT to make significant decisions about consumers
  • Large-scale profiling or behavioral tracking
  • Processing sensitive personal information, such as health or precise location data
  • Data uses that pose heightened risks to children and teens

The regulations require businesses to document, at a minimum: the purpose of the processing, categories of data,
collection methods, retention periods, number of consumers affected, safeguards in place, and the potential
benefits and risks to consumers and to the business.
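
To make that concrete, here is a minimal sketch of what such a record could look like in code. The `RiskAssessment`
type and all of its field names are illustrative assumptions, not terminology taken from the regulations themselves:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class RiskAssessment:
        """One hypothetical record covering the minimum documentation
        elements described above (field names are illustrative only)."""
        processing_purpose: str           # why the data is processed
        data_categories: list[str]        # e.g., ["health", "precise location"]
        collection_methods: list[str]     # how the data is gathered
        retention_period_days: int        # how long the data is kept
        consumers_affected: int           # approximate number of consumers
        safeguards: list[str]             # technical and organizational protections
        consumer_benefits: list[str]      # potential benefits to consumers
        consumer_risks: list[str]         # potential harms to consumers
        business_benefits: list[str]      # potential benefits to the business
        completed_on: date = field(default_factory=date.today)

Keeping these records structured, rather than scattered across memos and slide decks, makes the later attestation
and summary obligations far easier to meet.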

Starting January 1, 2026, covered businesses must conduct risk assessments for qualifying activities and eventually
submit attestations and summaries to the CPPA. Some reporting obligations will phase in later, with certain deadlines
currently set as far out as 2028.

Cybersecurity Audits: Proving You’re Not a Soft Target

On top of AI-related obligations, the regulations introduce mandatory cybersecurity audits for businesses that meet
specific thresholds (for example, based on revenue or volume of sensitive personal information processed). These
annual audits are designed to evaluate whether a business maintains “reasonable security” appropriate to the nature
of the data and the risks.

Audits must be conducted by qualified, independent professionals and must address:

  • Technical safeguards (access controls, encryption, vulnerability management)
  • Organizational measures (policies, training, incident response)
  • Risk management processes and remediation of identified issues

Businesses subject to the audit requirements must certify completion to the CPPA and may need to address identified
gaps as part of their ongoing compliance program. For companies that have treated “security” as something handled by
whoever remembers the Wi-Fi password, this represents a major cultural shift.

New Rights for Consumers Around AI and Automated Decisions

One of the most impactful parts of the new regulations is the expansion of consumer rights when ADMT is used to make
significant decisions.

Under the finalized rules, businesses must generally provide:

Pre-Use Notices

Before using ADMT in certain contexts, businesses must provide clear, understandable notices that explain:

  • That ADMT will be used to make (or substantially influence) a decision
  • The types of decisions being made (for example, hiring, credit approval, or insurance eligibility)
  • The categories of personal information used by the system
  • How consumers can exercise their rights, including the right to opt out where applicable
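
One way to keep those disclosures accurate and consistent across the website, the privacy policy, and internal
records is to treat the notice content as structured data. Here is a minimal sketch under that assumption; the
`PreUseNotice` type and its fields are hypothetical, not drawn from the regulations:

    from dataclasses import dataclass

    @dataclass
    class PreUseNotice:
        """Hypothetical structure for a pre-use ADMT notice."""
        decision_types: list[str]    # e.g., ["hiring", "credit approval"]
        data_categories: list[str]   # personal information the system uses
        opt_out_url: str             # where consumers can exercise their rights

        def render(self) -> str:
            # Plain-language summary to show before the ADMT is used.
            return (
                "We use automated decision-making technology to make or "
                f"substantially influence decisions about: {', '.join(self.decision_types)}. "
                f"It relies on: {', '.join(self.data_categories)}. "
                f"To learn more or opt out, visit {self.opt_out_url}."
            )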

These disclosures aim to move AI from the shadows into something consumers can actually see and react to.

Access and “Logic” Disclosures

Consumers gain new rights to access information about how ADMT affects them. When a covered business uses ADMT for
significant decisions, consumers may be entitled to:

  • A description of the decision-making process in reasonably understandable terms
  • Key factors or data elements that materially influenced the outcome
  • Information about how to contest or appeal a decision, if such mechanisms exist

The regulations don’t require companies to hand over source code, but they do require meaningful explanations instead
of “a mysterious algorithm did it.”
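
What a “key factors” disclosure looks like in practice will vary by system, but one common approach is to rank
feature contributions by magnitude and surface the top few in plain language. Here is a rough sketch; how the
contribution scores are computed (for example, via SHAP values) is a separate modeling question, and the function
below is purely illustrative:

    def top_factors(feature_contributions: dict[str, float], k: int = 3) -> list[str]:
        """Return the k features with the largest absolute contribution
        to a decision (illustrative helper, not a regulatory standard)."""
        ranked = sorted(feature_contributions.items(),
                        key=lambda item: abs(item[1]),
                        reverse=True)
        return [name for name, _ in ranked[:k]]

    # Hypothetical credit decision: payment history mattered most.
    print(top_factors({"payment_history": -0.42, "utilization": 0.31, "tenure": 0.05}))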

Opt-Out Rights for Certain AI Uses

In specific scenarios, consumers may have the right to opt out of the use of ADMT for significant decisions.
Businesses will need to:

  • Offer user-friendly ways to exercise opt-out rights (online and, when appropriate, offline)
  • Honor opt-out requests within required time frames
  • Ensure that opt-out preferences are reflected across relevant systems and vendors

To make this work at scale, many organizations will need new workflows, governance processes, and technical
integrations – especially if they’ve historically plugged AI tools directly into decision flows without much
transparency.
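
As a rough sketch of that integration work, one pattern is to record the preference once and fan it out to every
downstream system that uses ADMT. Everything here, from the registry to the handler signature, is an assumption
for illustration:

    from typing import Callable

    # Hypothetical registry of downstream systems that must honor an opt-out:
    # the scoring service, ad platform connectors, vendor API clients, etc.
    OPT_OUT_HANDLERS: list[Callable[[str], None]] = []

    def register_handler(handler: Callable[[str], None]) -> None:
        OPT_OUT_HANDLERS.append(handler)

    def record_opt_out(consumer_id: str) -> None:
        """Durably record the preference, then propagate it everywhere."""
        # (persistence step omitted; store the preference before fanning out)
        for handler in OPT_OUT_HANDLERS:
            handler(consumer_id)  # each handler suppresses ADMT use downstream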

Who’s in Scope – and Who Gets a Breather?

The new rules primarily apply to businesses that are subject to the CCPA and that engage in certain categories of
high-risk processing or use ADMT for significant decisions.

In employment, for example, mid- to large-sized California employers that use automated tools for hiring, promotion,
or other key HR decisions may be covered. These employers must complete risk assessments, provide pre-use notices,
and honor opt-out or access rights where applicable.

At the same time, the CPPA scaled back earlier drafts that would have pulled in more routine advertising and analytics
activities. Under the revised rules, obligations generally kick in when ADMT is used in higher-stakes decision-making,
such as finance, healthcare, employment, insurance, and education. Lower-risk use cases may fall outside ADMT-specific
duties, though they can still be subject to other parts of the CCPA.

The big lesson: do not assume you’re out of scope just because you don’t “sell” data in the traditional sense. If your
product or operations rely on AI-driven scoring, ranking, or eligibility decisions involving Californians, you probably
need to take a closer look.

Practical Steps Businesses Should Take Now

These regulations aren’t a “flip a switch on New Year’s Day” situation. They require a methodical, multi-year approach.
Here’s how many organizations are starting to get ready.

1. Map Your AI and High-Risk Processing

You can’t comply with AI and risk assessment rules if you don’t know where your AI actually lives. Start by:

  • Creating an inventory of ADMT and AI tools used across the organization
  • Documenting what decisions they influence (e.g., hiring, underwriting, fraud detection)
  • Linking each use case to the categories of personal information involved

This mapping becomes the backbone of your risk assessments and your ADMT compliance strategy.
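
Even a lightweight, code-level inventory beats a stale spreadsheet. Here is a minimal sketch of what one entry might
capture; the `ADMTInventoryEntry` type and its fields are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class ADMTInventoryEntry:
        """One row of a hypothetical ADMT/AI inventory."""
        tool_name: str                    # e.g., "resume-ranker-v2"
        owner_team: str                   # who is accountable for the tool
        decisions_influenced: list[str]   # e.g., ["hiring", "promotion"]
        data_categories: list[str]        # personal information involved
        significant_decision: bool        # makes or substantially influences one?

    inventory = [
        ADMTInventoryEntry(
            tool_name="resume-ranker-v2",
            owner_team="HR Engineering",
            decisions_influenced=["hiring"],
            data_categories=["employment history", "education"],
            significant_decision=True,
        ),
    ]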

2. Build a CCPA-Ready Risk Assessment Program

Next, organizations need a repeatable process to perform, review, and update risk assessments. A practical framework
usually includes:

  • Templates that capture all the elements required under the regulations
  • A cross-functional review process involving legal, privacy, security, and business owners
  • Documentation of benefits, risks, and mitigations for each processing activity
  • Triggers for when to update assessments (for example, new data types, new model, or new use case)

Think of it as your “privacy design review” for high-risk data and AI projects – not a checkbox exercise you discover
three days before a deadline.
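
The triggers, in particular, are easy to encode so they don't depend on anyone's memory. A minimal sketch, assuming
the trigger conditions listed above and an arbitrary one-year staleness threshold:

    def needs_reassessment(assessment_age_days: int,
                           new_data_types: bool,
                           new_model_version: bool,
                           new_use_case: bool,
                           max_age_days: int = 365) -> bool:
        """Reassess when the activity changes materially or the last
        assessment is stale (the 365-day threshold is an assumption,
        not a number taken from the regulations)."""
        return (new_data_types
                or new_model_version
                or new_use_case
                or assessment_age_days > max_age_days)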

3. Upgrade Your Notices, Policies, and UX

The days of burying automated decision-making in a 4,000-word privacy policy are over. Businesses should:

  • Update privacy policies to describe ADMT uses in plain language
  • Design pre-use notices that are short, prominent, and understandable
  • Ensure opt-out links and mechanisms are easy to find and actually work

This is a great moment to collaborate with UX teams. A well-designed notice can build trust instead of just checking
a compliance box.

4. Coordinate with Vendors and Service Providers

Many organizations rely on third-party tools for scoring, ranking, or automated evaluations (think background checks,
fraud scoring, or ad platforms). The new rules make it even more important to:

  • Update contracts to reflect ADMT, audit, and risk assessment obligations
  • Clarify which party provides notices and handles consumer requests
  • Ensure vendors can support data access, opt-out, and security requirements

“Our vendor handles that” is not going to be a satisfying answer if the CPPA comes knocking.

5. Align Governance, Training, and Culture

Finally, the regulations push organizations toward more mature AI and privacy governance. That often means:

  • Creating or updating AI governance committees or review boards
  • Training engineers, data scientists, and product managers on CCPA AI rules
  • Integrating risk assessments and ADMT checks into product development lifecycles
  • Using readiness assessments ahead of mandated audits to avoid surprises

Done well, this can turn compliance into a competitive advantage: customers, partners, and regulators increasingly
look for evidence that AI is being used responsibly.

How These Rules Fit into the Bigger AI Regulation Story

California’s new regulations didn’t appear in a vacuum. Policymakers are balancing two competing narratives:
“Don’t let AI trample civil rights” and “Don’t chase AI companies out of the state.”

Earlier in 2025, Governor Gavin Newsom publicly warned that overly strict AI regulations could impose billions of
dollars in compliance costs and threaten California’s role as a tech leader, echoing concerns from industry groups.
At the same time, privacy and civil rights advocates argued that strong safeguards are necessary as automated tools
influence hiring, lending, healthcare, and more.

The final CCPA risk assessment and AI regulations reflect that tension: they’re narrower and more targeted than some
early drafts, but still among the most robust AI governance rules in the U.S. They’re also likely to influence other
states and federal regulators, much like the original CCPA did for privacy laws nationwide.

Real-World Experiences: What Implementation Looks Like

So what does all of this feel like on the ground? Let’s walk through some realistic experiences businesses are
already reporting or preparing for as California finalizes these CCPA AI regulations.

A Fintech Company Rebuilds Its Credit Scoring Pipeline

Imagine a mid-size fintech that uses machine-learning models to assess creditworthiness for California consumers.
Under the new rules, its credit-scoring models clearly qualify as ADMT used for significant decisions. The company
has to:

  • Perform a risk assessment documenting the model’s data sources, logic, benefits, and potential harms
  • Explain in plain language how automated tools affect approval decisions
  • Offer consumers specific rights to understand and, in some situations, opt out of automated processing

During this process, the fintech discovers that some features used by the model correlate strongly with sensitive
demographic factors. The risk assessment forces a serious conversation between data science, legal, and compliance
about whether those features are appropriate and how to monitor disparate impact. That conversation probably should
have happened years ago – but the regulations finally make it unavoidable.

An Employer Rethinks AI-Driven Hiring Tools

Now consider a large employer using automated résumé-screening software to rank job applicants. Under the CCPA rules,
that tool is ADMT when it substantially replaces human review in hiring decisions about California candidates.

The HR team suddenly has homework:

  • Work with legal to draft pre-use notices explaining that automated tools help screen applicants
  • Run risk assessments on the tool’s scoring criteria and training data
  • Develop a process so applicants can request more information and, where applicable, challenge or appeal decisions

In practice, some employers find that combining automated screening with meaningful human review leads to better
outcomes anyway. The regulations nudge them toward that hybrid model, reducing reliance on “black box” automation.

A Health-Tech Startup Learns to Love Documentation

A health-tech startup using AI to prioritize patient outreach for chronic care management has a different experience.
Its product uses ADMT to decide which patients receive proactive outreach, reminders, or follow-ups. Under the new
rules, that’s a significant decision involving health-related data.

Initially, the engineering team is frustrated: they feel like they’re spending more time writing documentation than
code. But as they work through the risk assessment template, they surface practical questions:

  • Are we over-indexing on recent visits, and under-serving patients who haven’t been able to come in?
  • Do we have safeguards if the model fails or goes off the rails?
  • Are our security controls strong enough for the sensitivity of this data?

Those questions lead the team to tweak their models, improve monitoring, and strengthen access controls. Patients
never see that behind-the-scenes work, but the net result is a more robust system – and a stronger story for
regulators, partners, and customers.

Common Lessons from Early Adopters

Across industries, organizations that start early are noticing a few recurring themes:

  • Inventory is everything. You can’t manage what you haven’t mapped. AI and “clever scripts” pop
    up in surprising places.
  • Cross-functional teams work better. Privacy, security, product, and data science each see
    different parts of the risk picture.
  • Plain-language explanations are hard but valuable. If you can explain your AI to consumers,
    it’s a good sign you actually understand it internally.
  • Governance scales better than heroics. Having clear processes beats scrambling every time a
    new feature launches or a regulator asks a question.

None of this is effortless. But for organizations that embrace the new CCPA risk assessment and AI regulations as
an opportunity to modernize their data and AI practices, the upside can include fewer surprises, stronger trust,
and a more durable foundation for innovation.

Key Takeaways

California’s finalized CCPA risk assessment and AI regulations represent a major leap forward in how U.S. law treats
automated decision-making and high-risk data uses. They:

  • Define ADMT and focus obligations on “significant decisions” that materially affect people’s lives
  • Require structured privacy risk assessments for high-risk processing
  • Mandate cybersecurity audits for certain businesses handling consumer data at scale
  • Expand consumer rights to transparency, access, and (in some cases) opt-out of AI-driven decisions
  • Push organizations toward more mature, documented AI and privacy governance

For businesses, the message is clear: AI governance is no longer optional, and privacy risk assessments are no longer
a “nice to have.” The sooner you map your AI, organize your documentation, and align your teams, the less painful
the transition will be, and the more ready you'll be for whatever AI rules come next, in California and beyond.