There’s a question I’ve been mulling for a while now, and I think it’s time to write it down: when is it okay to use generative AI in a given business context, and when does it cross a line? I’ll focus on two specific areas I know well — board work and strategic analysis — but I think the principles generalize.
Let me start with what I think is the easy part. Using AI to draft a meeting agenda? Fine. Using it to generate a board deck? Also fine, though you’ll probably switch to manual edits after the first or second draft. Using it to produce a document summary? Fine [1]. These are tasks where AI is essentially doing the grunt work of organizing information you already possess, and where the human judgment — yours — is the thing that actually matters.
Using AI to produce final documents? That’s dicier today — ask anyone in legal — but I think there’s a simple rule that applies to all of these examples.
That rule? Use AI to do whatever you want, but you own the output. Not the AI. You. If it’s wrong, that’s on you. If it misses something important, that’s on you. The moment you present something to a meeting, a customer, or a board, you are vouching for it. Saying “well, AI generated that part” is not a defense. It’s an abdication of duty.
The “Ad Hominem” Problem
Here’s something that bothers me about the discourse around AI-generated content: the moment people hear, or even suspect [2], that AI wrote something, they immediately dismiss it — not because of anything wrong with the content, but because of how it was produced. That’s a logical fallacy. Specifically, it’s a variant of ad hominem: attacking the source rather than the argument [3].
I frequently need to remind people of this. Judge what was said, not who — or what — said it [4] [5]. If the analysis is sound, the framing is useful, and the questions raised are the right ones, then the mechanism of production is largely beside the point. The quality of the thinking is what matters and what should be challenged.
That said — and this is important — the inverse is also true. Producing AI-generated content and presenting it as your own thinking is not okay. The problem isn’t that AI helped. The problem is the pretense that you did the thinking when you didn’t. Ownership means you’ve read it, challenged it, corrected it where it was wrong, and can defend it. If you can’t do that, you haven’t done your job.
AI as Calculator
I’ve always thought the right analogy for AI is the calculator. A wildly more powerful calculator, obviously, but a calculator nonetheless.
When calculators became ubiquitous, people lamented the loss of slide rule proficiency. And yes, something was lost. But the point of mathematics was never arithmetic. It was reasoning. If the calculator handled the arithmetic error-free, you could spend more time on the part that actually matters. The same logic applies here: there’s a lot more to argument and strategy than copywriting or slide formatting. If AI can handle the scaffolding, you should be able to spend more time on the substance.
The complication — and it’s a real one — is that AI can start to approximate thinking in ways a calculator never could. A calculator doesn’t write your memo. It doesn’t suggest your strategy. It doesn’t synthesize twenty pages of board material into five crisp questions. AI does all of that. And that creates a temptation toward laziness that calculators simply didn’t. The laziness is the problem, not the temptation toward it.
There’s also research starting to emerge suggesting that relying too heavily on AI can actually impair your own reasoning. You offload the synthesis, and you stop synthesizing. You offload the framing, and you stop framing. The cognitive muscle atrophies.
I was not surprised when I read reports that people with long streaks on Duolingo couldn’t speak well in practice. In my view, as a half-decent French speaker: if it doesn’t feel like work, you’re probably not learning [6]. Corollary: if it doesn’t feel like work, you’re definitely not working.
How I Use AI in Board Work
Here’s what I often do with AI today. I sit on several boards, and I’ll sometimes load a board deck into a generative AI tool before the meeting. I ask for a summary. Then I’ll ask how it thinks the company is doing. I’ll then ask for the top five questions to ask in the meeting. Then, I’ll go read the deck with an eye toward what’s been extracted [7].
And then I’ll go back and challenge the AI. I think issue three is more important than issue one. I think it missed issue seven totally. I think issue two isn’t an issue; the company’s fixed it already. Often, I’ll bring competition into the picture because (in my humble opinion) most boards don’t spend enough time thinking about competition [8].
And here’s the question I’ve been wrestling with: should I be transparent about using AI to help generate those issues (or questions) when I bring it to a board meeting?
My instinct is yes. If I want to send the CEO a list of top five issues facing the company before the meeting, I have two choices:
- I can pretend I wrote it myself, unassisted. Complete with typos and hyphens instead of em-dashes.
- I can say, “here’s what I generated with Claude after iterating on your board deck” and copy/paste the final transcript.
Now, I know what management might think: “Well, we could have asked Claude, too” [9]. And I’m okay with that. My response would be: “Well, then, why didn’t you?” I just want the best topics list.
To me, the question isn’t where the list came from. The question is whether it’s the right list. That’s the only question that matters. Boards have very limited time together. We should think hard and use all available tools to ensure that we’re spending that time on the right issues.
The point isn’t the slides or the questions list or the agenda or the summary. The point is the conversation. To maximize value, we need to be having the right conversation. Not talking about things that are easy to talk about. Not going through the motions. Not death marches through templates, much as I love both templates and death marches.
This takes me back to calculators. I can check the math on your board slides using pencil and paper. Or I can use a calculator. Or I can upload your table and ask Claude to check the math. We can take a test with our calculators secretly on our laps or with them in plain sight on our desks.
I vote for the second option. Use all available tools. Don’t use them clandestinely. Use them out in the open. But don’t abdicate to them. Own the output. This isn’t Claude’s list of our top five challenges. It’s my list, built using Claude [10]. Better yet, it’s my list, period. (But I’m not going to hide that I used Claude to help build it, much as I wouldn’t hide that I used a computer and a keyboard.)
That’s where I am. I’m curious where you are. Is there a line you’ve drawn in your own work? Do you think transparency about AI assistance is a norm we should be enforcing, or are we creating a two-tiered standard we’d never apply to other tools? Let me know in the comments.
# # #
Notes
[1] Just as long as you also read it or are prepared to say, “I didn’t read it, I only read a summary.” FWIW, I find it useful to generate a summary, read it, and then read the document. And sometimes, then go back to the summary. The summary ends up serving as a reading guide.
[2] Thanks to tells like the dreaded em-dash.
[3] I had to air quote ad hominem because — thanks to my high school Latin teacher, Mr. Maddaloni — I know that ad hominem means literally “toward the man.” There is thus not only gendered language (heck, it was nearly 3,000 years ago) but considerable irony in speaking of ad hominem attacks on a machine. Ad machina, anyone?
[4] By the way, this is the exact opposite of most social media behavior.
[5] This isn’t just good intellectual hygiene — it’s a reliable way to reduce or eliminate bias. When you evaluate an argument on its merits rather than its source, you sidestep a whole class of distortions: the tendency to over-credit ideas from high-status people, to dismiss ideas from unexpected sources, or to reject a perfectly sound analysis because you don’t like the messenger. It’s a discipline worth practicing whether the source is a junior analyst, a competitor, or a language model. The argument either holds up or it doesn’t. That’s the only test that matters.
[6] This is a critique of gamification, but it’s also highly related to the topic of customer value metrics, about which I’ve written with my Balderton EIR colleague Dan Teodosiu.
[7] By the way, the wordier the board deck, the more this process helps.
[8] This itself could be a long discussion, but remember three things: my first job in marketing was as a competitive analyst; I believe strategy is either “the plan to win” (Burgelman) or “the way to overcome our biggest challenge” (Rumelt); and ergo it cannot be done without looking at the market. North Stars are great, but they don’t tell you about the army you’re going to face when you hit latitude 55 degrees 45 minutes.
[9] And this is probably the kindest thought. Others might include:
- Perfect — now the monkeys have flamethrowers
- Fantastic — it’s like giving toddlers espresso and a whiteboard
- Great — now the VCs can skip even faster to the wrong conclusion
- Terrific — now it’s gut feel with citations
- Right — so now we’re pattern matching with turbo-autocomplete
(And those are manual em-dashes.)
[10] I assume that we are not all going to have the same AI conversation or all use the same tools. The way I push Claude is going to be different from the way another person does.