Identify The Connections Inside A Deep Research Agent: Systems Architecture Part 3/4
See Your Code as a Living System, Not Just Functions and Files
Systems Architecture can radically transform the way you view the world. The components and functions of every system are connected in some way. It is our job to find out how. Form = the components of a system and their layout. Function = what the system does.
Today, I will demonstrate the valuable concept of Connection in Systems Architecture by leveraging a Deep Research Agent as an example. The deep research agent is in the works and will be delivered to you next week… so we will stay in a conceptual space for now.
The Valuable Output
Through the lens of Systems Architecture, each system produces a Valuable Output. We are going to analyze this Deep Research system through this lens. That is, we are going to see how the Deep Research agent starts with a prompt, and uses its components and functions to create an insightful response. The Valuable Output is the final response.
The System Boundary
For every system we can conceive of, there is something called the system boundary. The system boundary helps us separate what happens inside the system versus what happens in the external world. The important thing to note here is that we will be focusing on the connections between functions and components that happen inside our system.
Connections
Inside each system, however, there are connections. Let’s divide these into two categories: Formal Connections, which link the parts or components of the system, and Functional Connections, which link the functions those parts perform.
Formal Connections
Each system has an overall form. A car has many subcomponents that make it up. A street delivery robot is composed of wheels, machine learning algorithms, navigation destinations, and delivery baskets. Planets are composed of rocks, gases, metals… and many more subcomponents.
Deep Research Agents are formally made up of components of code. We will now review a general set of different types of formal connections:
Physical Connections
Geometric/Spatial Organization
Logical/Data Path Connections
Compositional Connections
The challenge with code is that, unlike in physical systems, its forms exist in multiple dimensions simultaneously and are not necessarily visible.
Physical Connections in Code
While code doesn't physically "touch," it creates computational adjacency through:
Import statements - These are the most literal connections, creating hard dependencies between modules
Function calls - The moment one piece of code invokes another, you've created a physical bridge in memory
Shared state - When multiple components reference the same data structure, they become physically bound through memory pointers
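To make these three connections concrete, here is a minimal sketch; every name in it (search, summarize, shared_memory) is illustrative, not taken from the agent's actual codebase:

```python
# A minimal sketch of the three "physical" connections above.
from collections import deque  # import statement: a hard dependency on this module

shared_memory = deque()  # shared state: both functions below point at this object

def search(query: str) -> str:
    result = f"results for {query}"
    shared_memory.append(result)  # bound to summarize() through shared memory
    return result

def summarize() -> str:
    # summarize() never calls search(), yet the two are physically coupled
    # because they reference the same data structure in memory.
    return " | ".join(shared_memory)

search("AI agents")  # function call: a bridge in memory at invocation time
print(summarize())   # → results for AI agents
```

Even in a toy like this, removing the shared deque would silently sever the connection between the two functions, which is exactly why shared state is a formal bond, not just a convenience.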
Geometric/Spatial Organization
Code's spatial dimension emerges through:
Directory structure - Your /agents, /tools, and /prompts folders create spatial neighborhoods
Layered architecture - Logic Controllers → External Services → Data Access creates horizontal spatial relationships
General proximity - Related objects, functions, and variables clustered in the same file share spatial & relational locality
/deep-research-agent
  /frontend (user-facing space)
  /backend
    /agents (reasoning space)
    /tools (capability space)
    /memory (storage space)
UI - Finally, we have our GUI. Our Graphical User Interface defines how the user flows through the experience we design. This part is key.
Logical/Data Path Connections
This is where code's form becomes most apparent:
Dependency injection - Components explicitly declare what they need to function
Event streams - Data flows create rivers through your system
API contracts - Define the shape of connections between services
Main app - Defines how everything connects to deliver the user experience
Consider this data path in our agent:
User Query → Preprocessor → Agent Loop → Tool Execution → Memory Storage → Response Synthesis
Each arrow represents a formal connection between each function that must be explicitly coded. The Preprocessor doesn’t just ‘know’ about the agent loop — you must explicitly wire the two in the flow of your application.
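That wiring can be sketched with hypothetical stage functions; every name below is an illustrative stand-in for the real stages, not the agent's actual code:

```python
memory: list[str] = []  # stand-in for the Memory Storage stage

def preprocess(query: str) -> str:
    # Stand-in for the Preprocessor: normalize the raw user query
    return query.strip().lower()

def agent_loop(query: str) -> list[str]:
    # Stand-in for the Agent Loop: produce a plan of tool calls
    return [f"search: {query}"]

def execute_tools(plan: list[str]) -> list[str]:
    # Stand-in for Tool Execution
    return [f"result of {step}" for step in plan]

def store_memory(results: list[str]) -> list[str]:
    memory.extend(results)
    return results

def synthesize(results: list[str]) -> str:
    # Stand-in for Response Synthesis
    return "; ".join(results)

def run_pipeline(user_query: str) -> str:
    # Each nested call below is one arrow in the data path, wired explicitly.
    return synthesize(store_memory(execute_tools(agent_loop(preprocess(user_query)))))

print(run_pipeline("  What Are AI Agents?  "))  # → result of search: what are ai agents?
```

Delete any one call from `run_pipeline` and the chain breaks: the connection only exists because you wrote it.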
Compositional Connections
Here's where it gets interesting. In code, composition creates form through:
Inheritance hierarchies - Base classes define skeletal form that subclasses flesh out
Middleware stacks - Each layer wraps the next, creating an onion-like form
Plugin architectures - Core + extensions create a hub-and-spoke form where the agent is the core with components that are easy to swap.
Our Deep Research Agent's compositional form:
class DeepResearchAgent:
    def __init__(self):
        self.llm = LLM()                  # Brain component
        self.memory = Memory()            # Storage component
        self.tools = ToolKit()            # Capability component
        self.reasoner = ReActFramework()  # Logic component
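The hub-and-spoke idea behind plugin architectures can be sketched like this; it is an assumed minimal design (AgentCore, register, run are all hypothetical names), not the agent's actual implementation. The core accepts any component that satisfies a small interface, so spokes are easy to swap:

```python
from typing import Callable, Dict

class AgentCore:
    """Hub: holds swappable tool plugins keyed by name."""
    def __init__(self):
        self.tools: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, tool: Callable[[str], str]) -> None:
        self.tools[name] = tool  # attach a spoke to the hub

    def run(self, name: str, query: str) -> str:
        return self.tools[name](query)

agent = AgentCore()
agent.register("web", lambda q: f"web results for {q}")
agent.register("academic", lambda q: f"papers about {q}")
print(agent.run("web", "AI agents"))  # → web results for AI agents

# Swapping a plugin is one line: re-register under the same name.
agent.register("web", lambda q: f"cached results for {q}")
```

The design choice here is that the hub depends only on the callable interface, never on any specific tool, which is what makes the spokes interchangeable.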
The Critical Insight
What’s interesting about formal connections in code is that dependencies are the form. When you add a new dependency or module to your code, whether it be your own or a library like numpy, you essentially inject new elements of form into the architecture of your system.
When you look at your code's import statements, do you see mere utilities, or do you see the skeletal structure of your system? Every require(), every import, every dependency injection is you as the architect deciding what organs your system will have and how they'll be connected.
While formal connections define the layout of our system (how components connect), functional connections define its physiology (how behaviors combine through that layout to create intelligence).
Functional Connections
Each system has an emergent (main) function that it delivers. The respiratory system delivers breathing. Traffic lights deliver roadway safety. Cars deliver transportation.
However, there are sub-functions within each of these functions. The respiratory system draws oxygen into the lungs, distributes that oxygen throughout the blood, and expels CO2 through the mouth. Each of these functions is connected in a way that makes the emergent function possible: breathing.
In a Deep Research Agent, the emergent function is enhanced knowledge creation. But this emerges from interconnected sub-functions that must work in harmony. Let's map the types of functional connections:
Data Flow
Control Flow
Resource Flow
Feedback Loops
Transformation
Data Flow: The Bloodstream of Information
Data flow is how information moves and transforms through your system.
The System Prompt → Context Cascade:
def initialize_agent(user_query):
    # Data flows from system prompt → working memory
    system_context = """You are a deep research agent..."""

    # This context flows into and shapes EVERY subsequent function
    working_context = system_context + user_query

    # Context cascades through the entire system
    search_context = transform_for_search(working_context)
    evaluation_context = transform_for_judgment(working_context)
    synthesis_context = transform_for_output(working_context)
The system prompt doesn't just "activate" knowledge—it creates a gravitational field that bends all subsequent data flows toward the research objective.
The Judge Function → Decision Rivers:
# This judgment creates a fork in the data river
def judge_quality(response, threshold=0.7):
    # The LLM evaluates the response and assigns a score
    quality_score = llm.evaluate(response)

    if quality_score < threshold:
        # Data flows back upstream for enrichment if below threshold
        return {"action": "research_more", "gaps": []}
    else:
        # Data flows downstream to synthesis if above threshold
        return {"action": "synthesize", "confidence": quality_score}
The judge function is a control flow switch that can reroute the entire execution path.
Control Flow: The Nervous System of Execution
Control flow determines when, why, and how functions fire. It is the Deep Research Agent’s decision-making system.
import asyncio

async def research_pipeline(query):
    # SEQUENTIAL: Initial processing must complete first
    processed_query = await preprocess(query)
    search_queries = await decompose_query(processed_query)

    # PARALLEL: Multiple searches fire simultaneously
    search_tasks = [
        task
        for q in search_queries
        for task in (search_academic(q), search_web(q), search_databases(q))
    ]
    results = await asyncio.gather(*search_tasks)

    # CONVERGENT: All paths merge for judgment
    quality_check = await judge_comprehensive(results)

    # CONDITIONAL: Control flow branches based on judgment
    if quality_check.needs_more:
        # NEEDS WORK: we didn't pass the quality check, so we rerun
        # the process with a refined query. Recursive!
        return await research_pipeline(quality_check.refined_query)
    else:
        # DONE: we passed the quality check! Return results!
        return await synthesize(results)
Here the agent makes a decision to re-run its research based on whether or not its results were judged above a specific threshold. The cycle of our system is defined by the different decisions that are made as the functions combine to deliver the golden light at the end of the tunnel: the emergent function.
Resource Flow: The Metabolism of Computation
Every function has a limited set of resources. This includes time, space (memory), and money. For example, you have a limited context window and a limit to how much you want to spend every time you call an AI model.
class ResourceManager:
    def __init__(self, max_tokens=100000, max_cost=10.00):
        self.token_budget = max_tokens
        self.cost_budget = max_cost
        self.consumption_log = []

    def allocate_resources(self, research_depth):
        # Shallow research: 20% of resources
        # Deep research: 80% of resources
        if research_depth == "shallow":
            # If our research is classified as 'shallow', we allocate less
            return {
                "search_iterations": 2,
                "tokens_per_search": 1000,
                "parallel_searches": 3
            }
        else:
            # If our research is classified as 'deep', we allocate more
            return {
                "search_iterations": 10,
                "tokens_per_search": 5000,
                "parallel_searches": 10
            }
Our resource manager decides how many tokens to spend based on how the user’s query is classified. This is a simple example of behind-the-scenes routing.
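How the query gets classified in the first place is up to you. Here is a deliberately simple, hypothetical heuristic (classify_depth is an assumed name, not from the agent's code) whose label would then be passed to allocate_resources:

```python
def classify_depth(query: str) -> str:
    """Hypothetical router: pick a research depth from the query itself."""
    # Illustrative heuristic: long, multi-part questions get the deep budget.
    return "deep" if len(query.split()) > 8 else "shallow"

print(classify_depth("What is RAG?"))  # → shallow
print(classify_depth(
    "Compare the long-term economic effects of three different AI governance policies"
))  # → deep
```

In practice you might replace this word-count heuristic with a cheap LLM call that labels the query, but the flow of the resource connection is the same: classification upstream shapes budgets downstream.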
Feedback Loops: The Learning Circuits
Feedback loops allow the system to self-correct and improve. They help the system understand whether or not it is heading in the right direction.
Quality Feedback Loop:
class QualityFeedbackLoop:
    def __init__(self):
        self.quality_history = []
        self.threshold = 0.7

    def evaluate_and_adjust(self, response):
        quality = self.judge(response)
        self.quality_history.append(quality)

        if quality < self.threshold:
            # Negative feedback: quality too low,
            # so the system adjusts multiple parameters
            self.increase_search_depth()
            self.add_verification_step()
            self.expand_source_diversity()
            # Re-run with adjusted parameters
            return self.research_with_new_params()
        else:
            # Positive feedback: reinforce what worked
            self.remember_successful_pattern()
            return response
We can design ways that the system can self-correct and guide itself.
Transformation: The Energetic Conversion System
Every function transforms data from one state to another.
class TransformationPipeline:
    def transform_query_to_searches(self, user_query):
        # Transform: Natural language → Structured searches
        # "How do AI agents work?" →
        # ["AI agent architecture", "LLM reasoning", "tool use in AI"]
        return self.decompose(user_query)

    def transform_searches_to_knowledge(self, raw_results):
        # Transform: Raw text → Structured knowledge
        # "The page says..." → {claim: "...", evidence: "...", confidence: 0.8}
        return self.extract_claims(raw_results)

    def transform_knowledge_to_synthesis(self, knowledge_graph):
        # Transform: Knowledge graph → Narrative
        # {nodes: [...], edges: [...]} → "Research shows that..."
        return self.generate_narrative(knowledge_graph)

    def transform_synthesis_to_insight(self, synthesis):
        # Transform: Information → Understanding
        # "Studies indicate X" → "This means you should consider Y"
        return self.extract_implications(synthesis)
The deep research agent starts with a query, turns that query into a set of parallel searches, converts the search results into structured knowledge, synthesizes that knowledge into a narrative, and finally distills the narrative into the insight that makes the most sense for the user.
Without each step, there is no final output. There is no value. Each step makes the next step better. Remember, LLMs perform better when they are given time to think.
Conclusion
These formal and functional connections interweave to transform a simple user query into the Valuable Output: enhanced knowledge that didn't exist before. Every import statement, every data flow, every feedback loop contributes to the emergent intelligence of your system. The next time you build an AI agent, remember: you're not just writing code. You're architecting connections that create intelligence and deliver an outcome.
Systems Architecture can transform the way you view the world. Suddenly, every outcome in your life has a set of components and a combination of functions that led to it. Every sale has a trackable set of actions that you took to get to it. Every delicious meal has a set of ingredients and recipes that you combined to create it. Every friendship is composed of a set of experiences and functional relationships that make it a gift that keeps on giving.
Systems Architecture can transform your programs and your life. For more insights and the full code next week, consider subscribing.