In the last thousand days, AI has moved from a novelty you “chat with” to an engine that can quietly redesign entire business models.
Inference costs have fallen dramatically, from hundreds of dollars per million tokens in early frontier models to under a dollar per million tokens for many modern, optimized models. Major tech companies are announcing sweeping workforce changes tied directly or indirectly to AI, and we’re seeing the first real wave of AI agents that don’t just answer questions, but actually do the work.
If you’re a business leader, risk manager, or consultant, this isn’t a theoretical trend anymore. It’s a live-fire exercise.
The Transformation
Most of us first met modern AI through tools like ChatGPT. We opened an app, typed questions, and got surprisingly good answers. It felt like a supercharged search engine with better manners.
Fast-forward to about a year ago, and the industry started building agents—AI systems that don’t just respond, but take actions. In recent months, those agents began to quietly run workflows: reading documents, writing drafts, testing code, summarizing meetings, building apps, often end-to-end.
The shift is subtle but massive. We’ve gone from “Tell me more about this,” to “Go do this for me and report back.”
Just this fall, Replit released an AI agent that can take a set of instructions and then work for hours: writing code, testing it, iterating, and delivering a working application. In effect, it mimics what a human software engineer does across an entire mini-project.
While there are time and resource limits, those are compute supply constraints that are being relieved over time, not capability limits, and coding is just the first high-profile example.
Technology Shifts Hit a Different Sector
With AI, the early shockwaves are hitting knowledge workers. Entry-level coding roles are already under pressure. AI is rapidly learning to handle repetitive tasks in accounting, tax, marketing, customer service, and even consulting.
At Cential Consulting, we’ve already used AI to:
- Analyze complex documents in minutes instead of hours.
- Manage risk processes in a fraction of the time, using a GPT trained on our preferred style.
- Automate various GRC-related tasks.
And the economics are significant. Humans speak roughly 20,000 “tokens” per day, on average. At current flagship model pricing, you could have an AI “talk” the equivalent of a human’s full daily output for around a dollar per day or less, depending on the model and workload.
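The back-of-envelope math above can be sketched in a few lines. The per-token price below is illustrative only; actual rates vary by model, provider, and input/output mix, so swap in your own numbers:

```python
# Back-of-envelope cost of an AI "speaking" a human's full daily output.
# Assumptions (illustrative, not quoted prices): a human produces roughly
# 20,000 tokens per day; a flagship model charges about $15 per million
# output tokens. Adjust both to your provider's actual rates.

TOKENS_PER_DAY = 20_000
PRICE_PER_MILLION_TOKENS = 15.0  # USD, assumed flagship-tier rate

daily_cost = TOKENS_PER_DAY / 1_000_000 * PRICE_PER_MILLION_TOKENS
print(f"Daily cost:  ${daily_cost:.2f}")        # $0.30
print(f"Annual cost: ${daily_cost * 365:.2f}")  # $109.50
```

Even at several times the assumed rate, the daily figure stays in the "around a dollar or less" range, which is the point: cognitive output priced this way changes the build-vs-staff calculus.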
The immediate takeaway for executives is obvious: “If we can do more with the same, or more with less, why wouldn’t we?”
Big tech companies are already acting on this logic with large-scale workforce changes and AI-driven efficiency programs.
Short-Term Gains Pose Long-Term Questions
Cutting hundreds of thousands of roles improves quarterly numbers. But it raises deeper questions:
- What happens when the employees you lay off are also your customers?
- How does consumer demand shift when a significant portion of the population is displaced or reskilled?
- How does our economic model evolve when cognitive work can be bought at scale for pennies on the dollar?
We’re optimistic that AI will amplify many employees’ capabilities instead of fully replacing them, though not without reshaping how that work gets done. We don’t have complete answers yet. But this tension between short-term efficiency and long-term societal impact is precisely where risk management shines.
What This Means for Risk & Compliance Professionals
As risk professionals, we can look at AI through two critical lenses.
1. The Business Strategy Lens
Your organization will increasingly use AI to meet strategic objectives if it wants to stay competitive.
Your responsibilities here:
- Move from denial to acceptance. AI is already being used in the first line of business. If risk/compliance sits in perpetual “no” mode, you don’t stop AI, you just lose influence.
- Understand the strategy. Talk to executives. How does AI fit into growth, cost optimization, innovation, and customer experience? Where are they betting big?
- Align risk discussions with value creation. Instead of framing AI solely as a top risk, embed it into every relevant strategic objective by considering, “How could AI enable this objective? What risks come with that? How do we manage them without killing the upside?”
Over time, AI will likely disappear as a standalone risk and become baked into existing processes, just as internet usage, cloud computing, and social media did.
2. The Functional Lens
The second lens is more personal: How will you use AI to transform your own function?
There are already concrete examples:
- Risk writing: Using a GPT fine-tuned on your organization’s risk language to draft risks that are directly usable saves hours per project.
- Document-heavy analysis: AI can ingest policies, contracts, tax documents, and reports, then surface patterns, exceptions, and key risks in minutes.
- Near real-time risk assessment: Instead of periodic, spreadsheet-driven assessments, AI can continuously monitor inputs, summarize changes, and surface emerging issues.
If you have a repeatable task, workflow, or template-driven process, there is almost always a way to:
- Automate part of it.
- Standardize quality.
- Dramatically reduce time spent.
At this point, the constraint isn’t the tech, but the imagination and willingness of the people using it.
Where Do Risk Leaders Go From Here?
If you’re a decision-maker or risk leader, your next steps don’t need to be complicated. Focus on three priorities:
1. Bring AI into the strategic conversation as a lever. Talk with executives about where AI can accelerate objectives, reduce costs, or expand capabilities. Make AI part of strategy discussions, not a separate compliance topic.
2. Shift your risk framing. Stop treating AI as a standalone top risk. Instead, embed AI-related considerations (data use, bias, operational reliance) directly into existing risk categories. That’s where it will naturally live going forward.
3. Modernize your own function. Look for repeatable tasks where AI can save time or improve quality: policy analysis, risk write-ups, issue summaries, and control testing. Experiment, refine, and scale what works. Governance still matters, but it should enable responsible use rather than stall it.
These steps move risk teams from gatekeeping to guiding the organization forward.
The Real Risk Is Standing Still
AI adoption is accelerating, whether risk leaders participate or not. Choosing to “wait and see” means falling behind both your business and your peers. The organizations that win will be those whose risk teams help them use AI responsibly to create value, drive efficiency, and reimagine workflows.
You don’t need to have everything figured out today. But you do need to start moving. The real risk now isn’t adopting AI too early, but adopting it too late.
Curious how AI fits into your risk management strategy? Schedule a consultation with our risk advisory team today.