
AI Reflects the Quality of Our Thinking

Technology can accelerate execution. Judgment, however, still belongs to us.

Over the past year, I have watched organizations invest significant time and money in AI training. Teams are learning prompts, experimenting with tools, building workflows, and comparing platforms. All of that is useful. It signals curiosity and momentum.


At the same time, I keep wondering whether we are skipping an essential step.

AI does not struggle because someone typed the wrong command. It struggles when the underlying business question has not been clearly defined. When the framing is loose, the output may look polished, but it rarely moves anything meaningful forward.


AI amplifies what it is given. When the thinking beneath a project is sharp, structured, and anchored in real objectives, the output can be powerful. When the thinking is vague or reactive, the technology simply produces faster versions of that same ambiguity.


My concern is not that teams will fail to learn the tools. My concern is that they will scale work that was never fully clarified in the first place.


Before I turn to AI for help with a problem, I start somewhere else.


First, I define the decision. Every initiative is meant to influence something. It might be revenue growth, reputation, relationship depth, or internal alignment. A proposal, a campaign, or a deck is not the strategy. It is an expression of the strategy. If I cannot articulate the decision that the work is meant to influence, then I am not ready to generate anything.


Next, I make the constraints explicit. Budget, timing, politics, brand equity, risk tolerance, and internal capacity all shape the outcome. Leaders often assume these are understood, but they are rarely written down. AI can reason within constraints effectively when they are clearly stated. When they are not, it fills in the gaps based on generic assumptions.


I also surface the assumptions that are guiding the work. What are we presuming about the buyer? About the market? About what differentiates us? One of the most valuable ways I use AI is to challenge my own reasoning. I ask it to identify weaknesses in the argument, to point out risks I may be overlooking, and to offer alternative framings that I may not have considered. That is not content generation. It is structured thinking.


Another discipline that matters is separating exploration from production. Many teams move directly to drafting. I prefer to use AI first to explore the landscape of the issue. What angles have we not considered? What unintended consequences might follow? Where could a stakeholder push back? Once the reasoning is stronger, then production becomes more straightforward and more strategic.


Finally, I define success before anything is written. What outcome would signal that this worked? What behavior should change as a result? What conversation should shift? Without that clarity, evaluation defaults to surface-level measures rather than meaningful impact.


The opportunity with AI is not simply operational efficiency. It is the chance to raise the standard of how we think about problems. When leaders use AI as a structured thinking partner rather than a rapid drafting assistant, it exposes the strength or weakness of the framing itself.


The organizations that benefit most will not necessarily be the ones with the most advanced workflows. They will be the ones whose leaders have developed the discipline to define questions clearly, articulate constraints honestly, and rigorously examine their own assumptions.


Technology can accelerate execution. Judgment, however, still belongs to us.
