You’re Using AI Wrong
And You’re Going to Regret It
I want to share a method I’m developing in public. I am still early in this work. Even this Substack is an early peek at ideas still being refined. I don’t want to pretend I’ve arrived at a perfect “universal law” just because I can feel the principle emerging.
People are using AI wrong, and I believe they are going to regret it. Not because AI is bad, and not because I am anti‑efficiency. I use AI daily. I love leverage. But there is one specific misuse that is destructive to rainmakers. It will show up as fewer customers, weaker positioning, poorer performing ads, lower conversion, wasted spend, and less income. Most people will not realise the cost until it’s too late.
This crystallised for me in a conversation with a mentee.
The treadmill moment
He is a bright kid, early in his career. Proactive. Methodical. His record‑keeping is impeccable. He takes notes properly and keeps them, which already puts him ahead of most people. So I asked him a question I have asked many times: how do you study material when you want to learn something important?
He told me he gets AI to summarise the content he wants to learn. Then he converts the summary to audio. Then he listens to the audio while he is on the treadmill. That is his primary method of assimilating what he considers important.
I was gobsmacked.
Not because summaries are evil, but because this is the kind of behaviour that slowly trains a person to accept second‑hand understanding as if it were mastery. Like watching someone outsource the very work that builds competence.
The interpretation layer nobody asked for
If I consider information important, I want the source. I want the author’s intent in the author’s own words. I want the original context, the caveats, the framing, and the nuance that makes the idea usable. I do not want an interpretation step between me and the material that will shape my judgement.
That is why original marketing texts line my bookshelf.
When I wanted to understand the 4 Ps, I wanted to understand what E. Jerome McCarthy was actually doing when he first formalised the framework in Basic Marketing (1960). When I wanted to understand the Unique Selling Proposition, I did not want a modern paraphrase. I wanted Rosser Reeves as he wrote Reality in Advertising (1961), in the environment that forced that concept into existence. I went out of my way to source these texts in their original form.
This is not nostalgia. This is not intellectual posturing. This is about protecting the integrity of my understanding when the stakes are real.
Because when you let AI “read” for you, you are not consuming the source. You are consuming an interpretation of the source. And then you start building your decisions on that interpretation.
Who is to say how AI is going to interpret the material? Who is to say what bias is introduced? So much lives in nuance. AI can sound confident while missing the point, flattening the argument, or shifting emphasis in subtle ways that change what the author meant.
If that happens while you are studying a serious method, you are not just learning slower. You are learning wrong.
Fleeting information vs foundational information
Here is the distinction that entrepreneurship forced me to take seriously: not all information deserves the same treatment.
Some information is fleeting. News, commentary, opinion, trends, hot takes. That content is not intended to change your operating system. You can consume it lightly because it is not something you should be building your judgement on.
Other information is foundational. Principles. Methods. Frameworks. The kind of ideas that solve problems repeatedly. The kind that stay useful long after the headline fades. This is the information rainmakers build offers, positioning, campaigns, and businesses on. If you misunderstand this category, you do not just “sound a bit off.” You lose money.
This is where many people get trapped. In school, and even in corporate life, it can be enough to “know stuff.” You can survive on being informed. You can sound smart at social gatherings. You can accumulate perspectives.
Entrepreneurship does not reward that. Entrepreneurship demands methods that work. When your income depends on your decisions, you stop collecting trivia and start hunting for usable principles. You stop trying to sound smart and start trying to be effective.
That shift changed how I study.
When I study something important, I study actively. I am constantly asking one question: how do I put this into use? I apply the idea to real situations, past and present. I draw pictures. I test the concept against reality. I refuse to gloss over critical details. I understand every square inch.
That is what builds judgement. And judgement is what makes a rainmaker dangerous, in the best sense of the word.
The deadly mistake
Here is the line I am drawing, and I am drawing it hard.
Using AI to design your outputs is safe. Use it to help you write emails, draft posts, refine a deck, generate creative options, tighten copy, create a video, edit a script, or organise your thinking into publishable form. This is a very good use of AI.
Using AI to design your inputs is deadly. Using it to decide what you should understand, what you should notice, what you should believe, what you should take away from an important document, is how rainmakers become spectators in their own careers.
If you let AI intercept your learning, your market understanding, or your interpretation of customer reality, you are letting a machine shape your lens. Your lens shapes your decisions. Your decisions shape your pipeline, your positioning, your ads, your revenue, and your life.
Here is a simple analogy. Imagine installing something into your vision that intercepts how you see the world, shapes it, smooths it, “summarises” it for convenience, and then presents it back to you as reality. Would you want that? Would you trust AI to tell you what is happening around you over your own eyes?
Yet that is what people are doing inadvertently.
How this kills campaigns in the real world
We learned this the hard way at Blacfox.
We trusted AI to provide “market feedback.” We used it as a shortcut. We thought we were accelerating learning and tightening our message.
In hindsight, we were not getting feedback. We were getting a convincing remix. It was remixing blog content furnished by the software vendors. It sounded coherent. It sounded market-aware. It sounded like truth. It was not anchored in actual buyers.
It really is not rocket science. We ought to have collected market feedback from the mouths of potential buyers. After all, this counts as usable important information. Instead, we accepted synthetic “feedback” and paid a heavy price for this choice.
The result was campaign death.
I also see some clients do this. They will use AI to “review” work we do, as if AI knows their tastes and preferences better than they do. Worse still, in the name of saving time, they passively observe how AI reviews something instead of reading, assimilating, and understanding it for themselves. They are left none the wiser. They become innocent spectators in their own lives.
And rainmakers cannot afford that.
If you are responsible for revenue, you cannot outsource your understanding of your market. You cannot outsource the formation of your judgement. You cannot outsource reality.
A principle I am still testing
I am still cautious about prematurely declaring a universal law here. But a general principle is emerging, and I am testing it aggressively:
Never use AI to interpret the data that only you can interpret.
If the information is foundational, go to source. See for yourself what the author is trying to say. If market feedback is what you’re after, go to the market. If the truth is in customer conversations, go to the conversations. Do not let a machine generate your understanding of the thing you are betting your income on.
Use AI after you have done the human work. Use it to draft, refine, and publish once you know what is true. Use it to speed up the output side of the work, not to replace the input side.
As we head into 2026, output is becoming cheap. Prompts will get better. Models will get smarter. Content will be infinite. The rainmakers who win will not be the ones who outsource their thinking the most efficiently. They will be the ones who protect their input and earn judgement.
In an AI era where output is cheap, the scarce advantage will be human judgement, and judgement is built, slowly, by how you ingest reality.
Glad you are here with me. Here’s to a killer 2026!
Make it rAIn, KG