AI is a Chainsaw. Are You Using It Without a Guard?

Written by David Russell · 5 minute read

Over the last 24 hours, I've run into three completely different situations that all point to the same, unnerving conclusion: our collective trust in AI is dangerously high, and our processes are terrifyingly immature. We're handing people a powerful tool, but we're forgetting the safety manual and, most importantly, the guard.

AI - and Large Language Models (LLMs) in particular - is like a chainsaw. In the hands of a skilled operator who respects the tool, it can clear a forest. In the hands of a novice, it can take off a leg. Right now, a lot of businesses are gleefully revving up the engine with no guard in sight.


Exhibit A: The “Hyperpersonalized” Fail

It started this morning with a cold email.

David, I want to introduce you to Newstool: a single email each morning with everything your competitors are doing...
Can I send a sample comparing Cortado Group against Menzies Aviation and Alliance Ground International?

I had to laugh. The AI-powered tool this salesperson used to find our “top competitors” pulled two names so wildly incorrect that it's clear the tool has no idea what our company does. Their attempt at “hyperpersonalization” immediately exposed their process as a hollow gimmick.

This is a classic case of Garbage In, Garbage Out.
The AI didn’t fail - the data it was fed failed. The system gobbled up bad data and confidently spat out a personalized error.

I replied immediately, asking how they identified those companies.
Thirteen hours later? Silence. The window for a meaningful conversation slammed shut long ago.

Research published in Harvard Business Review has shown that firms that try to contact potential customers within an hour of receiving a query are nearly seven times as likely to have a meaningful conversation as those that wait even an hour longer.

After 13 hours, my interest is gone. Their over-reliance on a flawed automated process didn’t just fail to start a conversation - it actively prevented one.


Exhibit B: The Internal “Black Box” Dilemma

The second instance came from an internal discussion with our innovation team. We were debating the merits of different data enrichment tools, and a key theme emerged: a deep-seated distrust of “black box” AI.

One of my team members, Mitch, pointed out that a tool we were considering was just “too black box.” He recounted a near-miss with a major client, where the AI’s opaque reasoning almost caused a significant problem. We simply couldn’t explain how the AI arrived at its conclusions.

This hits on two critical principles of data quality:

  • AI Explainability: The AI “shoves all the stuff in a box and then comes up with correlations that you might not understand.”
    If you can’t understand the logic, you can’t trust the output. You’re just blindly following a machine’s recommendation.
  • Data Provenance: Where did this information come from?
    If you can’t trace data back to its source, you can’t verify its accuracy or reliability. It’s the data equivalent of “some guy told me.”

Without transparency and traceability, you’re not using a tool - you’re gambling.
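
To make the provenance point concrete, here is a minimal sketch in Python of the kind of check that turns “some guy told me” into a hard rejection. The EnrichedFact type and accept() helper are hypothetical names for illustration, not any particular vendor’s API:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class EnrichedFact:
    """One enrichment claim that carries its own provenance."""
    claim: str                           # e.g. "Top competitor: Acme Corp"
    source_url: str | None = None        # where the claim was found
    retrieved_at: datetime | None = None # when it was collected

def accept(fact: EnrichedFact) -> bool:
    """Reject any claim that can't be traced back to a source."""
    # The data equivalent of "some guy told me" gets discarded here.
    return bool(fact.source_url and fact.retrieved_at)

# Usage: an unsourced claim never reaches the rest of the pipeline.
sourced = EnrichedFact("Competitor: Example Co",
                       "https://example.com/about",
                       datetime.now(timezone.utc))
unsourced = EnrichedFact("Competitor: Menzies Aviation")
assert accept(sourced) and not accept(unsourced)
```

An enrichment tool that can’t populate source_url for its own claims is exactly the black box Mitch was warning about.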


Exhibit C: The Hallucination Nightmare

The final, and most alarming, piece of the puzzle came from two separate conversations.

First, one of our sharpest consultants, Jessica, expressed her mounting frustration:

I'm still dealing with ChatGPT making things up, and it's extremely frustrating.

This isn’t a minor bug; it’s a fundamental flaw we call hallucination.
The AI isn’t just wrong - it’s confidently and creatively wrong.

Her frustration was put into terrifying context when our RevOps practice lead, Robert, brought a recent news story to my attention.

A team at Deloitte had used an LLM to help synthesize research for a report for the Australian government. The AI did what it sometimes does: it hallucinated - fabricating completely false information, including citations to non-existent academic research.

The result was a public scandal that became a “huge black mark” on the use of LLMs in consulting.
The tool that was meant to create efficiency instead created a crisis of credibility.


It’s Time to Build the Guardrails

These three stories - a failed sales pitch, an internal debate on opaque tools, and a real-world hallucination disaster - are symptoms of the same disease.

We’re treating AI like a magic wand instead of the powerful, dangerous, and often flawed tool it is.

Saying “everyone should just use ChatGPT” is like saying “everyone should just use a chainsaw.”
It’s reckless.

The solution isn’t to abandon the tool - it’s to build the guardrails and implement rigorous processes.
This is what true AI adoption looks like. It’s not about access; it’s about architecture.


Here’s what that looks like in practice:

  • Automated Verification
    Stop relying on humans to manually fact-check everything.
    Build programmatic validation layers into your process.
    For our interview analysis tool, we’re implementing evidence thresholds (a sketch of this validation layer follows this list):

    If the evidence for this conclusion is greater than 70%, give me a ‘yes.’

  • Source Grounding
    Require the AI to cite its sources for any factual claim.
    If it can’t show you where it got the information, the information is suspect by default.

  • Structured Prompts
    Move away from simple chat interfaces for critical tasks.
    Use structured schemas that force the model to separate facts from assumptions.

  • A “Refuse When Unsure” Rule
    The most important guardrail of all.
    Program your systems to prefer an explicit “I don’t have enough information” over a confident but fabricated guess.
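
Pulling these four guardrails together: below is a minimal sketch of what such a validation layer could look like. Everything in it is an assumption for the sake of illustration - the response schema, the 0.70 cutoff, and the validate() helper - not our production tooling.

```python
import json

# Assumed scale: evidence scores in [0, 1]; "greater than 70%" from the
# Automated Verification bullet above becomes a 0.70 cutoff.
EVIDENCE_THRESHOLD = 0.70

def validate(raw_response: str) -> str:
    """Programmatic validation layer run on every model response.

    Expects the model to answer in a structured schema that separates
    facts (each with a source and an evidence score) from assumptions.
    """
    try:
        reply = json.loads(raw_response)
    except json.JSONDecodeError:
        return "REJECTED: response did not follow the schema"

    for fact in reply.get("facts", []):
        # Source grounding: an uncited claim is suspect by default.
        if not fact.get("source"):
            return "REJECTED: unsourced claim"
        # Automated verification: apply the evidence threshold.
        if fact.get("evidence", 0.0) <= EVIDENCE_THRESHOLD:
            # "Refuse when unsure": an explicit refusal beats a
            # confident but fabricated guess.
            return "I don't have enough information"

    return reply.get("conclusion", "REJECTED: no conclusion given")

# Usage: a grounded answer passes; a shaky one is refused, not guessed.
grounded = json.dumps({
    "conclusion": "yes",
    "facts": [{"claim": "X", "source": "interview-42", "evidence": 0.9}],
    "assumptions": [],
})
shaky = json.dumps({
    "conclusion": "yes",
    "facts": [{"claim": "Y", "source": "interview-7", "evidence": 0.4}],
    "assumptions": [],
})
print(validate(grounded))  # -> yes
print(validate(shaky))     # -> I don't have enough information
```

The point isn’t this exact code; it’s that the refusal path is programmatic, built into the pipeline rather than left as a human afterthought.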

From “Prompt-and-Pray” to Proof and Process

We need to shift our mindset from “prompt-and-pray” to one of systematic, automated validation.

The real work of AI innovation isn’t just in using the tool - it’s in building the safety systems that make it reliable, trustworthy, and ready for the enterprise.

Otherwise, we’re just waiting for the accident to happen.
