So, it was Sunday night, and after putting our toddler to sleep, I was finally able to sit down and experiment with Claude Code to build an Agentic Platform. I’ve been a long-time lurker on Hacker News and had heard great things about Claude Code, so I just had to try it. As a big Claude user, I signed up for Claude Code in the Developer Console, set up my credit card with $20 in credits, and was ready to go.

I was intrigued by the feasibility of building an agentic platform—a basic Chat UI where you can select an agent and start chatting. You can create your own agent by passing a prompt and a list of skills.

With a bit of help from Claude Chat, I was able to design a simple architecture for the platform.

```mermaid
graph TD
    ClientApp[Client App]
    APILayer[Simple API Layer]
    AgentLifecycle[Agent Lifecycle Service]
    AgentService[Agent Service]
    SkillService[Skill Service]
    Redis[Redis Storage]

    ClientApp --> APILayer
    APILayer --> AgentLifecycle
    APILayer --> AgentService
    APILayer --> SkillService
    AgentLifecycle --> AgentService
    AgentService --> SkillService
    SkillService --> Redis
```

After setting up the Claude Code CLI on my local machine, I prompted it to scaffold the platform, referencing the CLAUDE.md file in the repository. To my surprise—and at the cost of an additional $10 worth of prompting—it generated a working API-based application that triggered the skills.

For web search and LLM functionality, I did some cross-referencing and ultimately chose SerpAPI for web search and Groq for the LLM and related skills; both have generous free tiers.
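
To give a sense of what those two skills boil down to, here's a rough sketch of the calls involved. The function names, environment variables, and model name are my own placeholders, not what Claude Code actually generated:

```python
import os

from groq import Groq
from serpapi import GoogleSearch


def web_search(query: str) -> list:
    """Return SerpAPI's organic results for a query."""
    search = GoogleSearch({"q": query, "api_key": os.environ["SERPAPI_API_KEY"]})
    return search.get_dict().get("organic_results", [])


def summarize(text: str) -> str:
    """Summarize text with a Groq-hosted model (model name is a placeholder)."""
    client = Groq(api_key=os.environ["GROQ_API_KEY"])
    response = client.chat.completions.create(
        model="llama-3.1-8b-instant",
        messages=[{"role": "user", "content": f"Summarize this:\n\n{text}"}],
    )
    return response.choices[0].message.content
```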

During prompting, I made sure to specify the use of LangGraph for the Agent Service. Furthermore, for operational simplicity, I asked it to Dockerize the entire service in one repository. This will come in handy later.
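
For illustration, a single-repo Compose file along these lines covers the services plus Redis. The service names, module path, and ports here are my guesses at a reasonable layout, not the exact file Claude Code produced:

```yaml
services:
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

  api:
    build: .
    command: uvicorn services.api.main:app --host 0.0.0.0 --port 8000
    environment:
      - REDIS_URL=redis://redis:6379/0
      - SERPAPI_API_KEY=${SERPAPI_API_KEY}
      - GROQ_API_KEY=${GROQ_API_KEY}
    ports:
      - "8000:8000"
    depends_on:
      - redis
```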

💡 Tips for working with Claude Code

  1. Familiarize yourself with https://www.anthropic.com/engineering/claude-code-best-practices
  2. Use the `--verbose` flag to get more detail about what each prompt is doing
  3. To reduce token usage, don’t ask it to generate tests, comments, or verbose documentation. This saves some credits, but at the cost of having to figure out the code yourself
  4. Develop the system iteratively, prompt by prompt, e.g.:
    1. Initialize Your Project Structure
    2. Implement Shared Components
    3. Setup Redis Integration
    4. Implement the Skill Service
    5. Implement the Agent Lifecycle Service
    6. Implement the Agent Service
    7. Implement the API Service
    8. Create Main Application Entry Point
    9. Add a README
    10. Test the application

Here’s the CLAUDE.md file I used to generate the scaffolding.

# CLAUDE.md - Agentic Platform MVP Development Guide

## Project Overview

This project is an MVP for an agentic platform with the following core components:
1. Agent Lifecycle Service
2. Agent Service
3. Skill Service
4. Redis for experiential memory
5. Simple API layer for user interactions

## Development Standards

### Technology Stack
- Python 3.11+
- LangGraph for agent workflow orchestration
- FastAPI for API endpoints
- Redis for memory storage and state management
- LangChain for some skill integrations

### Coding Standards
- Use type hints throughout the codebase
- Create modular components
- Follow PEP 8 conventions
- Use async/await pattern where appropriate for API endpoints

### Project Structure
```
agentic-platform/
├── services/
│   ├── agent_lifecycle/
│   ├── agent_service/
│   ├── skill_service/
│   └── api/
├── shared/
│   ├── models/
│   ├── utils/
│   └── config/
├── docker-compose.yml
├── requirements.txt
└── README.md
```

## Implementation Details

### Core Service Models

#### Agent Model
```python
class Agent:
    agent_id: str  # Unique identifier
    name: str  # Human-readable name
    description: str  # Purpose description
    status: str  # "active", "inactive", etc.
    skills: List[str]  # Skill IDs the agent can use
    config: Dict[str, Any]  # Configuration parameters
```

#### Skill Model
```python
class Skill:
    skill_id: str  # Unique identifier
    name: str  # Human-readable name
    description: str  # What the skill does
    parameters: List[Dict]  # Required and optional parameters
    response_format: Dict  # Expected response structure
```

#### Conversation Model
```python
class Message:
    id: str
    role: str  # "user" or "agent"
    content: str
    timestamp: datetime
    metadata: Dict[str, Any]  # Optional metadata

class Conversation:
    id: str
    agent_id: str
    user_id: str
    messages: List[Message]
    status: str  # "active", "completed", etc.
    created_at: datetime
    updated_at: datetime
    metadata: Dict[str, Any]  # Optional metadata
```

### LangGraph Implementation

Use LangGraph to implement the agent workflows. LangGraph creates structured, stateful workflows, which makes it particularly suitable for this MVP. Specifically:

1. Define agent state and transitions
2. Create nodes for different agent operations (reasoning, skill execution, etc.)
3. Define the graph structure connecting these nodes
4. Handle conversation context and memory integration

## Service Implementation Guidelines

### 1. Agent Lifecycle Service

Create a service that manages agent registration and status. Implement:

- Agent creation/registration
- Status management (activate/deactivate)
- Configuration storage in Redis
- Simple validation for agent configuration

Use FastAPI for the REST endpoints and Redis for storage. Keep interfaces minimal but sufficient to demonstrate core functionality.
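
For example, the registration endpoint can be as small as the following sketch (route path, key names, and defaults are suggestions only):

```python
import json
import uuid

import redis.asyncio as redis
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
store = redis.Redis(host="localhost", port=6379, decode_responses=True)

class AgentCreate(BaseModel):
    name: str
    description: str
    skills: list[str] = []
    config: dict = {}

@app.post("/agents")
async def create_agent(payload: AgentCreate) -> dict:
    """Register a new agent and persist its configuration in Redis."""
    agent_id = str(uuid.uuid4())
    agent = {"agent_id": agent_id, "status": "active", **payload.model_dump()}
    await store.set(f"agent:{agent_id}", json.dumps(agent))
    return agent
```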

### 2. Agent Service

Implement the core agent runtime with LangGraph. Create:

- A reasoning node that determines actions based on user input
- A skill execution node that calls the Skill Service
- A response formulation node that creates user-facing messages
- State management that tracks conversation context

Focus on a simple but effective reasoning approach for the MVP; don't overengineer the cognitive architecture.

### 3. Skill Service

Create a simple skill registry and execution service:

- Implement basic skill registration
- Create skill validation logic
- Develop execution framework
- Implement three core skills:
  - web-search: A simple wrapper around the SerpAPI search API, using the private API key MY_API_KEY
  - summarize-text: Use Claude
  - ask-follow-up: Generate follow-up questions based on context
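
One possible shape for the registry and execution framework (names and signatures are suggestions only):

```python
from typing import Any, Callable, Dict

# In-memory registry of callables; skill metadata itself lives in Redis
SKILLS: Dict[str, Callable[..., Any]] = {}

def register_skill(skill_id: str) -> Callable:
    """Decorator that registers a callable under a skill_id."""
    def decorator(func: Callable[..., Any]) -> Callable[..., Any]:
        SKILLS[skill_id] = func
        return func
    return decorator

@register_skill("web-search")
def web_search(query: str) -> list:
    ...  # wrap SerpAPI here

def execute_skill(skill_id: str, **params: Any) -> Any:
    """Look up and run a registered skill with the given parameters."""
    if skill_id not in SKILLS:
        raise ValueError(f"Unknown skill: {skill_id}")
    return SKILLS[skill_id](**params)
```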

### 4. Redis Integration

Use Redis for all stateful data:

- Agent configurations
- Active conversations
- Working memory for agents
- Skill execution results

Define clear Redis key structures and data formats. Use Redis data types appropriately (Hashes, Lists, Sets, etc.).
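
For example, one possible key layout (names are only suggestions):

```python
# Suggested Redis key layout (illustrative; adjust as needed)
AGENT_KEY = "agent:{agent_id}"                               # Hash: agent configuration
CONVERSATION_KEY = "conversation:{conversation_id}"          # Hash: conversation metadata
MESSAGES_KEY = "conversation:{conversation_id}:messages"     # List: serialized Message objects
MEMORY_KEY = "agent:{agent_id}:memory"                       # Hash: agent working memory
SKILL_RESULT_KEY = "skill:{skill_id}:result:{execution_id}"  # String: JSON-encoded result
```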

### 5. API Layer

Create a simple but complete API layer with FastAPI:

- User authentication (simplified for MVP)
- Conversation management
- Message sending/receiving
- Agent status queries
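
A minimal route outline for these endpoints (paths and signatures are suggestions only):

```python
from fastapi import APIRouter

router = APIRouter()

@router.post("/conversations")
async def start_conversation(agent_id: str, user_id: str) -> dict:
    """Create a conversation between a user and an agent."""
    ...

@router.post("/conversations/{conversation_id}/messages")
async def send_message(conversation_id: str, content: str) -> dict:
    """Append a user message and return the agent's reply."""
    ...

@router.get("/agents/{agent_id}/status")
async def agent_status(agent_id: str) -> dict:
    """Return the agent's current status."""
    ...
```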

## Implementation Sequence

1. Set up the project structure and shared components
2. Implement Redis integration and basic data models
3. Create the Skill Service with 2-3 example skills
4. Develop the Agent Service with LangGraph workflows
5. Implement the Agent Lifecycle service
6. Create the API layer
7. Build a simple frontend (optional for MVP)

## Development Tips

1. Use a local Redis instance for development
2. Create small, focused services that communicate via well-defined APIs
3. Use environment variables for configuration
4. Log agent actions and skill executions for debugging
5. Implement proper error handling from the beginning

## LangGraph-Specific Guidance

When implementing the agent workflow with LangGraph:

1. Use the `StateGraph` class to create the agent's state machine
2. Define clear states like "receiving_input", "reasoning", "executing_skill", "formulating_response"
3. Create typed state classes to maintain type safety
4. Implement conditional edges for dynamic agent behavior
5. Use the async API for better performance
6. Leverage LangGraph's memory interfaces for maintaining context

Example LangGraph structure:
```python
from typing import Any, Dict, List, Optional

from langgraph.graph import END, StateGraph
from pydantic import BaseModel, Field

# Define state (Message is the conversation model defined above)
class AgentState(BaseModel):
    messages: List[Message] = Field(default_factory=list)
    context: Dict[str, Any] = Field(default_factory=dict)
    current_skill: Optional[str] = None
    skill_results: List[Dict] = Field(default_factory=list)

# Create nodes
def reasoning(state: AgentState) -> AgentState:
    # Determine next actions, e.g. pick a skill and set state.current_skill
    # ...
    return state

def execute_skill(state: AgentState) -> AgentState:
    # Call the Skill Service and append the result to state.skill_results
    # ...
    return state

def formulate_response(state: AgentState) -> AgentState:
    # Create the user-facing response from context and skill results
    # ...
    return state

# Build graph
graph = StateGraph(AgentState)
graph.add_node("reasoning", reasoning)
graph.add_node("execute_skill", execute_skill)
graph.add_node("formulate_response", formulate_response)

# Entry point and fixed edges
graph.set_entry_point("reasoning")
graph.add_edge("execute_skill", "formulate_response")
graph.add_edge("formulate_response", END)

# Conditional branching: only run a skill when the reasoning node selected one
def should_execute_skill(state: AgentState) -> str:
    if state.current_skill:
        return "execute_skill"
    return "formulate_response"

graph.add_conditional_edges("reasoning", should_execute_skill,
                            {"execute_skill": "execute_skill",
                             "formulate_response": "formulate_response"})

# Compile the graph
agent_executor = graph.compile()
```

## Testing

For the MVP, no unit tests or other tests are required; this keeps token consumption down.


## Deployment

For the MVP, a simple local deployment is sufficient. 

## Documentation

For the MVP, there is no need for detailed documentation; keep it simple to reduce token consumption.