AI AI AI.
That’s all we hear about these days.
Well, let me tell you, the hype is worth it.
Why? AIs are exemplary reasoners, thinkers, researchers, and processors of information. While their outputs are not always perfect truths… humans don’t produce perfect truths either. But… with Chain of Thought Reasoning, the reasoning pathway of an AI becomes quite clear. We can start to see how AI arrives at conclusions.
“I think generally people’s thinking process is too bound by convention or analogy to prior experiences... They’ll say, ‘We’ll do that because it’s always been done that way,’ or assume if nobody’s done it, it must not be good. But that’s just a ridiculous way to think. You have to build up the reasoning from the ground up — from the first principles… look at the fundamentals and construct your reasoning from that.” — Elon Musk
Today, we’re going to sharpen our critical thinking skills, learn to make better decisions, and transform the quality of our work. This will happen via an exploration of how humans reason and how AIs reason.
Table of Contents
Mathematicians and Programmers Reasoning
Chain of Thought
Chain of Thought by the World’s Greatest Executives
AI’s Ability to Verify Its Thinking
The DeepSeek R1 Reasoning Blueprint
Let’s explore the world’s best reasoners.
Mathematicians are powerful reasoners. Why? They can deeply structure their thinking. They consider the properties of numbers and use them to draw the conclusions and insights that fuel physics and engineering. They collect facts and use the rules of logic to create theories. These theories often lead to powerful technological breakthroughs.
Programmers are forced to be great reasoners. They design systems whose components fit together like building blocks. If the building blocks don’t fit together, software breaks. Often, the pieces don’t fit because a programmer does not have the correct conceptual understanding of how all their building blocks connect. And wow, we programmers work with a lot of building blocks: variables, components from libraries we did not write ourselves, classes, functions, objects further down the stack, GPUs, disk space, RAM… and so much more.
It all depends on how proficiently we can connect complex building blocks together, and on how well we can reason about what we are building. In the end, coding allows us to store and analyze data to make better decisions about how we craft medicines, design cities, heal people’s minds, and operate businesses. So, the more powerful our reasoning, the more powerful our designs.
Humans are intuitive, and they have an understanding of truth that AIs still lack. AIs build probabilistic models of the world; they understand how the world ‘probably works’. By being trained on vast chains of language, they can understand and model relationships between objects and fields. The best model creators have figured out how to do this extremely effectively, to the point where models have become extraordinary creative partners that produce very close approximations of the truth, and can effectively think to solve problems, use apps, and do research.
Chain of Thought (CoT)
Initially, LLMs produced great answers, but they weren’t extraordinary. In 2022, researchers at Google discovered something magical. By simply adding the cue “Let’s think step by step” to prompts, the model started printing its thinking before actually producing answers.
By thinking one step at a time, these AI models produced significantly better answers. This way of operating is called Chain of Thought reasoning.
Instead of directly asking an LLM for a final answer to a complex question, Chain of Thought Reasoning (CoT) encourages the model to “think out loud” and show its work. This forces the model to generate the intermediate reasoning steps it uses to build its arguments. By understanding how a model arrives at an answer, we can see the steps (and flaws) in its thinking.
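Here is roughly what that looks like in practice. This is a minimal sketch in Python; the call_llm() helper is hypothetical, so swap in whichever model client you actually use.

# A minimal sketch of Chain of Thought prompting.
# `call_llm` is a hypothetical placeholder for a real model call.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire this up to your model of choice.")

question = "A train leaves at 3:40 pm and the ride takes 95 minutes. When does it arrive?"

# Direct prompt: asks for the answer straight away.
direct_prompt = f"{question}\nAnswer with just the arrival time."

# Chain of Thought prompt: the extra cue nudges the model to show its steps first.
cot_prompt = f"{question}\nLet's think step by step, then state the final arrival time."

# The CoT version tends to produce intermediate reasoning ("95 minutes is 1 hour
# and 35 minutes; 3:40 pm plus 1 hour is 4:40 pm; plus 35 minutes is 5:15 pm")
# before the final answer, so you can read and check each step.
# answer = call_llm(cot_prompt)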
Models are often better at reasoning than people. Often, we humans are not forced to show our reasoning. This leads to flaws in our thinking because we take logical leaps. We don’t ‘show our work’ for each object we reason about and produce.
Models have infinitely more energy, and in many cases, access to far deeper repositories of knowledge. Models follow instructions and don’t deviate like we do. By that, I mean that they don’t get distracted. They don’t go off-piste like our minds do in vast landscapes of emotions and dynamic situations. **For the most part**. If you give them a task, they will complete it.
If you skip ahead to the DeepSeek R1 Reasoning Blueprint below, you will see a great example of this.
Personal Chain of Thought: Post Writing
Here’s a chain of thought in the style of how a model would do it. An example of my own reasoning:
To get someone to read my post, I need to draw them in.
To draw someone in, I need a great hook.
Once they’re in, I need the post to be worth their time.
To make something worth someone’s time, I need the post to be as useful as possible.
For a post to be useful, I need to make a post that will help them achieve their goals.
My audience can use AI to achieve their goals.
By understanding AI better, they will be able to better accomplish their goals.
So, I will write a post to help them better understand AI and their own minds, so that they can better achieve their goals.
AI has the capacity to interact intelligently with the world. It has the ability to break down problems into fundamental components and solve them one at a time until it arrives at a reasonable answer.
Why?
So, why is this useful? The greatest thinkers use logic to create the best strategies. Larry Ellison. Elon Musk. Arvind Krishna. Steve Jobs. Sam Altman. The ability to predict the movements of an industry and build your skills around them can lead you to extraordinary pathways. The best way to do this is by tracking your own chains of thought.
“I think the biggest competitive advantage in business — either for a company or for an individual’s career — is long-term thinking with a broad view of how different systems in the world are going to come together… In a world where almost no one takes a truly long-term view, the market richly rewards those who do.” - Sam Altman
There is so much power in being able to deploy capital at massive scales.
The best thinkers on the planet are able to discipline themselves and think one step at a time. To reason and take extraordinary actions.
AI models don’t need the discipline. They just do it. They can string facts together in chains of thought as we described above. They can think one step at a time and hold multiple objects of the universe in mind.
Therefore, we should treat AIs as wonderful logical partners. However, we have the responsibility of being the arbiters of truth, the ultimate decision makers.
An Example of Steve Jobs’ Chain of Thought
To build great technology, I need to create a life-changing experience.
To create a life-changing experience, I must design the simplest, most beautiful path.
To protect this simplicity, I need to ruthlessly cut features and distractions.
To make the product unforgettable, I need to craft a great story.
To perfect this story, I need to hone the end-to-end experience.
To stay ahead, I must anticipate the user’s needs before they know them.
To make it matter, I need to design for emotion—joy, wonder, power.
To protect the vision, I must say ‘no’ a thousand times.
To make it timeless, I must connect it to a deeper human need.
To make it effortless, I must make the complex feel simple.
An Example of Jensen Huang’s Chain of Thought:
To build great technology, I must ride an inevitable technological wave.
To ride the wave, I must accelerate it and make it accessible to the world.
To make it unstoppable, I must build platforms, not just products.
To dominate the platforms, I must master the hardest, most valuable layer.
To build long-term advantage, I must create virtuous cycles that self-improve.
To achieve massive impact, I must design for parallelism and scalability from day one.
To compound power, I must build integrated stacks where each layer reinforces the next.
To win, I must commit to relentless iteration and the willingness to suffer for the solution.
To expand my moat, I must turn competitors into customers by building gravitational platforms.
To make this truly matter, I must design technology that moves humanity forward.
Use these Chains of Thought as examples to structure your own thinking. Build extraordinary projects. Go BIG.
AI Can Verify Its Thinking: Sushi Example
Remember, AI doesn’t just use its own knowledge of the world to reason. It can also verify its thinking with code. It can, for example, create a Python file and calculate the answer to a problem with real arithmetic rather than estimation. If you ask it to write a program, it can also write the program, test the code, and only give outputs once the code works. This is key.
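Here is a rough sketch of that loop, just to make the idea concrete. The generate_code() function is hypothetical (it would ask your model for a script); the point is that the generated program is actually executed and only trusted once it runs cleanly.

# Sketch of a "write code, run it, only keep it if it works" loop.
import subprocess
import tempfile

def generate_code(task: str, previous_error: str | None = None) -> str:
    # Hypothetical: ask the model for a Python script that solves `task`,
    # including the error from the previous failed attempt, if any.
    raise NotImplementedError

def solve_with_verification(task: str, max_attempts: int = 3) -> str:
    error = None
    for _ in range(max_attempts):
        source = generate_code(task, previous_error=error)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(source)
            path = f.name
        result = subprocess.run(["python", path], capture_output=True, text=True, timeout=30)
        if result.returncode == 0:
            return result.stdout   # the program ran cleanly; trust its output
        error = result.stderr      # feed the failure back into the next attempt
    raise RuntimeError("No working program after several attempts.")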
Here’s an example of how I used AI to order sushi this week. This is the code it put together:
# Let's estimate the protein and calorie content based on standard serving sizes for sushi.
# Typical nutritional values:
# Albacore (per piece): ~5g protein, ~30 calories (hand roll may have some rice)
# Akami (lean tuna, per piece): ~7g protein, ~40 calories
# Nigiri rice (per piece): ~10g rice = ~35 calories
# Albacore hand roll: Typically includes ~1oz fish (~7g protein), ~15g rice (~55 calories), seaweed (~5 calories)
# Albacore sashimi: Typically ~1oz per piece (~7g protein), minimal calories from garnishes
# User's meal breakdown
albacore_hand_rolls = 3
albacore_sashimi_pieces = 3
akami_nigiri_pieces = 2
# Protein and calorie estimates
protein_per_hand_roll = 7 # grams
calories_per_hand_roll = 7 * 4 + 55 + 5 # protein + rice + seaweed
protein_per_sashimi = 7 # grams
calories_per_sashimi = 7 * 4 # protein only
protein_per_akami = 7 # grams
calories_per_akami = 7 * 4 + 35 # protein + rice
# Totals
total_protein = (
    albacore_hand_rolls * protein_per_hand_roll +
    albacore_sashimi_pieces * protein_per_sashimi +
    akami_nigiri_pieces * protein_per_akami
)
total_calories = (
    albacore_hand_rolls * calories_per_hand_roll +
    albacore_sashimi_pieces * calories_per_sashimi +
    akami_nigiri_pieces * calories_per_akami
)
# Total rice
rice_from_hand_rolls = albacore_hand_rolls * 15 # grams
rice_from_nigiri = akami_nigiri_pieces * 10 # grams
total_rice = rice_from_hand_rolls + rice_from_nigiri # grams
total_rice_calories = total_rice * 4 # calories from rice
(total_protein, total_calories, total_rice, total_rice_calories)
It used its internal reasoning to create this program and calculate everything mathematically. Of course, the calorie estimates will not be perfectly accurate, but it can certainly do a wonderful job.
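For what it’s worth, if you run the snippet with its own assumptions, the final tuple comes out to 56 g of protein, 474 calories, 65 g of rice, and 260 calories attributed to rice. The per-piece estimates are rough, but the arithmetic built on top of them is exact.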
That’s the thing. When we think, we are holding certain abstract concepts in our mind. Often, the best philosophers and logicians can surgically extract truths from large bodies of text and data using Predicate Logic and Systems Thinking.
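To make that idea concrete, here is a toy sketch of the same move in Python: state your facts, state your rules, and let simple forward chaining derive whatever follows. It illustrates the principle, not how any model works internally, and the facts and rules are made up for the example.

# A toy forward-chaining engine: facts plus if-then rules, applied until
# nothing new can be derived.
facts = {"wrote_tests", "tests_pass", "code_reviewed"}

# Each rule: if every premise is a known fact, the conclusion becomes a fact.
rules = [
    ({"wrote_tests", "tests_pass"}, "code_verified"),
    ({"code_verified", "code_reviewed"}, "safe_to_deploy"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# ['code_reviewed', 'code_verified', 'safe_to_deploy', 'tests_pass', 'wrote_tests']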
While an AI doesn’t do exactly this (to my knowledge)… it can get very close using Chain of Thought reasoning strategies. Here’s an example of DeepSeek R1 reasoning on a problem I asked it. This is EXEMPLARY Chain of Thought reasoning:
The DeepSeek R1 Reasoning Blueprint
Question I asked
How can I build a business around creating extraordinary applications with new AI capabilities and creating wonderful content? Think deeply about how to do this based on the fundamental facts of the world.
Reasoning Chain of Thought
Okay, the user is asking how to build a business around AI applications and content creation, but they specifically want it grounded in "fundamental facts of the world."
That's a refreshingly deep approach—they're not looking for hype or fluff.
Looking back at the history, the user previously asked about computer architecture's value for programmers. That tells me they appreciate foundational knowledge and systemic thinking.
Probably a technical or entrepreneurial background. Now they're scaling that mindset to business strategy. The phrase "extraordinary applications" and "wonderful content" suggests they care about both utility and artistry. Not just chasing profits—they want meaningful impact.
The word "fundamental" is key here. They're implicitly rejecting superficial AI trends and seeking durable principles. Hmm... to address this properly, I need to anchor every suggestion in first principles: economics (scarcity, value exchange), human psychology (attention, desire), and technology (compute costs, data gravity).
No buzzwords allowed. First, clarify what "fundamental facts" mean in this context:
- All businesses survive by creating more value than they capture (economic surplus)
- AI changes the cost structure of creation but not human needs (Maslow still applies)
- Attention is the ultimate scarce resource (content saturation)
- True differentiation requires systemic advantages (moats)
The user might be underestimating two things:
1) How hard distribution is (building an audience is harder than building tech)
2) The commoditization risk of pure AI wrappers
I should emphasize defensibility—things like proprietary data loops or community ecosystems. Also warn against over-relying on APIs; they're cost leaks and single points of failure.
For content specifically, "wonderful" is subjective. They'll need to define their niche's emotional signature—is it wonder? Trust? Delight? The business model flows from that.
Biggest gap in their question: who are they serving? Without a specific audience, even extraordinary tech floats in vacuum. I'll push them to name early adopters. The response should feel like an architect's blueprint—not just features but load-bearing structures.
I'll organize around scarcity creation since that's the core of all sustainable business. ...time to reframe AI not as the product but as the enabler for human-centric value. The real product is always the change you create for people.
Answer
1. Start with Irreducible Truths (The Foundation)
Scarcity Drives Value:
AI can generate infinite "good enough" content/apps. Your edge? Create uniquely scarce value:
- Deep personalization (e.g., AI tutors adapting to neurodiversity),
- Proprietary data (e.g., fine-tuned on niche datasets like rare medical images),
- Human-AI symbiosis (e.g., AI drafts, humans add emotional depth).
Marginal Costs Rule:
AI reduces creation costs → Scale where humans can’t. Focus on high-margin, high-labor-cost domains:
- Custom video production (reduce $10k shoots to $100 AI workflows),
- Legal/contract analysis (cut 80% of lawyer review time).
Attention is Finite:
Flooded markets favor extreme curation or addictive utility.
→ Build: Hyper-personalized content engines (e.g., "Spotify Discover Weekly for indie films").
2. Defensibility: Survive the Coming AI Commoditization
Data Moats:
Own closed-loop data systems (e.g., user feedback → model improvement → better UX → more users).
Example: Runway ML’s video tools improve via artist-generated content.
Vertical Specialization:
Generic AI (ChatGPT) wins breadth; you win depth.
→ Target: "AI for precision fermentation biotech R&D" not "AI for business."
Own the Workflow:
Embed into critical processes (e.g., Notion AI for workflows, not just text generation).
3. Content-Centric Truths
Distribution > Creation:
Building an audience is harder than building an AI content tool.
→ Strategy: Use AI to fuel owned channels (e.g., "The Browser Company’s AI-curated newsletter").
Emotion Beats Information:
AI floods the world with facts; win with storytelling alchemy:
- Use AI for ideation/hyper-personalization,
- Humans craft narrative soul (e.g., AI-generated characters with human-written arcs).
Trust is the Ultimate Currency:
Label AI use transparently. Example: "This poem was co-created with AI" → builds authenticity.
4. Execution: Leveraging Core AI Realities
Compute is Your Capex:
Optimize models for inference efficiency (e.g., quantized 7B models > massive APIs).
Avoid the "Wrapper" Trap:
If your USP is a thin UI over GPT-4, you’re fungible. Add:
- Unique data fine-tuning,
- Domain-specific constraints (e.g., "screenplay structure enforcer"),
- Own IP (e.g., copyrighted character universes).
Regulatory Arbitrage:
Build for EU’s AI Act compliance early (e.g., explainability tools for healthcare).
5. Launch Strategy: First-Principles GTM
Find Painful Workflows:
Target industries where labor costs dominate (e.g., architecture, animation).
Offer "10x" Time Savings:
Use AI to cut steps (e.g., Luma AI’s 3D scans replace photogrammetry labs).
Pilot with Design Partners:
Co-create with users → turns them into evangelists.
Scale via Network Effects:
Example: Generate.style (AI fashion) lets users sell designs → ecosystem growth.
6. Key Pitfalls (Based on Reality)
Tech Dependency Risk:
Relying solely on OpenAI? Fine-tune open-source alternatives (Llama/Mistral) as backup.
Ethical Debt:
AI plagiarism lawsuits are rising → license training data or use synthetic/owned data.
Human Bottlenecks:
AI generates 100 drafts; humans curate 1. Staff accordingly.
Where to Start Today
Pick a Micro-Niche:
e.g., "AI-generated custom children’s books with kid’s drawings as input."Build a Proprietary Data Flywheel:
Users submit drawings → AI creates stories → New drawings improve model.Monetize via Emotion:
Parents pay $49/book for personalized magic → virality via social sharing.
"The best AI businesses don’t replace humans—they expand the space of what creators can do. Focus on enabling new forms of expression, not just optimizing old ones."
This approach transforms AI from a novelty into a scalable engine for human creativity, grounded in economic and technological realities. Test ruthlessly, own your data, and always charge for scarcity.
Pretty great by DeepSeek, right? Impressive. I think it’s super cool that they share how their AI reasons. It is wonderful to see this because it is like gaining a lens into the human mind.
Your next step: sharpen your reasoning skills
The point here is simple. When an AI has the space to reason and sequentially build one statement on top of the next, it gives us better results. The same is true for real-life programming and business building. When we can hold the truths about the objects in front of us in our minds, we develop better judgement, cognition, and power over our thoughts and results.
So, in order to get better, stronger, and more accurate… to hit your targets with ease… sharpen your ability to reason. The best way to do this is to state all the truths that you know, understand the pathways before you, and use logic to arrive at the best possible solution.
“Simple can be harder than complex; you have to work hard to get your thinking clean to make it simple.” - Steve Jobs
See you later
Here’s how I like to think about building systems:
What are the requirements of the project?
What are the fundamental components of knowledge I need to build this up?
Where are the gaps in my knowledge?
Ok, now that I have a formal understanding of how these structures operate, structurally and functionally, let me think about different design pathways I can take.
I then go on to build some of these design pathways, using the knowledge I have about how these systems work.
I help companies build out powerful new AI applications. In my engagements, I force you to think about how your systems currently operate. I force you to redesign the architecture of your business model with these new technologies. If you want to work with me, subscribe and send me a message. I’m happy to help you reimagine your AI strategy, both personally and professionally.
Let’s keep diving deeper on AI, logic, and new paradigms of thought.