Tuesday, April 7, 2026

AI is moving fast. CFOs have a narrow window to shape its...

At Fortune's Modern CFO dinner, a Stanford AI researcher talks about harnessing AI value.

Source: Fortune

What’s Happening


AI is moving fast, but many companies still have not decided who should own the job of turning that momentum into measurable business value. At Fortune’s Modern CFO dinner in San Francisco last Thursday, sponsored by Deloitte and ServiceNow, Melissa Valentine, a senior fellow at the Stanford Institute for Human-Centered AI, delivered a clear message to CFOs: they have a narrowing window to take command of AI value creation.

Valentine pointed to a recent Harvard Business Review article from the Return on AI Institute, citing survey findings that underscore this opening.

The Details

Only 2% of the C-suite executives surveyed said CFOs were charged with capturing value from AI. Yet when CFOs were responsible, 76% reported generating substantial value, well ahead of other functions.

Laks Srinivasan, coauthor of that report, told Fortune that finance chiefs are uniquely positioned to define, evaluate, fund, and measure AI initiatives, then apply that framework across the company. Valentine, a tenured associate professor of management science and engineering at Stanford’s School of Engineering, told the room of finance chiefs that CFOs have a strategic opening to lead on AI if they are willing to quantify the value and be accountable for it.

Why This Matters

She argued that generative AI is moving out of its experimental phase and into something CFOs know well: systematic measurement. Two years ago, she said, rigorous accountability would have been premature.


The Bottom Line

On the question of guardrails, Valentine pointed to a recent incident in which Anthropic inadvertently exposed internal source code for its Claude coding tool, offering a rare public glimpse into how frontier AI labs protect their models. She called attention to the concept of “harness engineering,” the infrastructure surrounding models to make them usable and safe, including secondary AI systems designed to monitor primary ones.
