Hello World: Intro to LangGraph
Welcome to the very first edition of The Agentic Brief, your guide to designing, deploying, and scaling AI agents in the real world.
If you’re here, you’ve probably noticed the wave of AI products and tools being built on top of powerful language models. But while the hype is real, one question remains: how do you actually go from model → agent → usable app?
That’s what this newsletter is about. Each post will break down how to build practical agentic systems step by step. To kick things off, let’s start with the classic “Hello World” of AI agents: setting up a chatbot using LangGraph (with Gemini) and Streamlit.
By the end of this post, you’ll have a working chatbot running locally, powered by Google’s Gemini model, orchestrated by LangGraph, and wrapped in a simple Streamlit UI.
Why LangGraph?
Most LLM demos stop at “prompt in → response out.” That’s not enough for agents in the real world. Agents need structure: the ability to branch, retry, loop, and maintain state.
LangGraph gives us that structure. It lets us define agents as graphs of nodes (steps) and edges (flows).
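To make that concrete, here’s a minimal sketch of a conditional edge that can loop a node back on itself, the kind of retry flow a plain prompt-in, response-out script can’t express. The node name and the routing check are illustrative only and are not part of the app we build below:

# sketch.py (illustrative only, not part of this post's app)
from langchain_core.messages import AIMessage
from langgraph.graph import StateGraph, MessagesState, END

def generate(state: MessagesState):
    # Stand-in for a real LLM call.
    return {"messages": [AIMessage(content="draft answer")]}

def route(state: MessagesState):
    # Loop back if the last reply came back empty; otherwise finish.
    return "generate" if not state["messages"][-1].content else END

builder = StateGraph(MessagesState)
builder.add_node("generate", generate)
builder.set_entry_point("generate")
builder.add_conditional_edges("generate", route, {"generate": "generate", END: END})
sketch = builder.compile()

Don’t worry if the API doesn’t fully parse yet; the rest of this post builds the simplest possible version of such a graph, one node at a time.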
Step 1. Project Setup with uv
We’ll use uv, a fast Python package/dependency manager.
Create a new project and add dependencies:
uv init hello-langgraph --python=3.13
cd hello-langgraph
uv add langgraph google-generativeai langchain-google-genai streamlit
You’ll also need an API key for Google’s Gemini. Grab one from Google AI Studio and set it in your environment:
export GOOGLE_API_KEY="your_api_key_here"
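(On Windows PowerShell, the equivalent is $env:GOOGLE_API_KEY = "your_api_key_here".)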
Step 2. Create the Chatbot with LangGraph
Here’s a minimal graph that defines a chatbot agent:
# chatbot.py
import os

import google.generativeai as genai
from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.graph import StateGraph, MessagesState

# Configure Gemini
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")

# Define the graph
graph = StateGraph(MessagesState)

def chat_node(state: MessagesState):
    # Send the full history to Gemini; MessagesState's built-in reducer
    # appends the returned message to the history for us.
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

graph.add_node("chatbot", chat_node)
graph.set_entry_point("chatbot")
graph.set_finish_point("chatbot")

chatbot = graph.compile()
Here’s what’s happening:
We use MessagesState, whose built-in reducer appends each new message to the chat history for us.
A single node (chatbot) sends the conversation so far to Gemini and returns the model’s reply, which the reducer adds to the history.
The graph is compiled into a runnable chatbot function.
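Before wiring up a UI, you can smoke-test the compiled graph from a short script. A minimal sketch (the file name is mine, and Gemini’s exact wording will vary):

# smoke_test.py
from chatbot import chatbot

result = chatbot.invoke({"messages": [{"role": "user", "content": "Say hello!"}]})
print(result["messages"][-1].content)  # prints Gemini's reply

Run it with uv run python smoke_test.py; if a greeting comes back, the graph and your API key are wired up correctly.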
Step 3. Add a Streamlit Frontend
Now let’s make it interactive with Streamlit.
# app.py
import streamlit as st

from chatbot import chatbot

st.set_page_config(page_title="Agentic Brief Chatbot")
st.title("🤖 Hello World: LangGraph + Gemini")
st.write("Your first AI agent with LangGraph!")

# Streamlit reruns this script on every interaction,
# so the chat history lives in session state.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far.
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

if prompt := st.chat_input("Ask me anything..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    # Run the graph on the full history; the last message is Gemini's reply.
    response = chatbot.invoke({"messages": st.session_state.messages})
    ai_msg = response["messages"][-1]

    st.session_state.messages.append({"role": "assistant", "content": ai_msg.content})
    with st.chat_message("assistant"):
        st.markdown(ai_msg.content)
Run the app:
uv run streamlit run app.py
And you’ll see a chatbot running locally!
The complete code is available here.
What’s Next?
This may look simple, but you’ve just built your first AI agent pipeline:
Gemini handles reasoning and responses.
LangGraph structures the conversation flow.
Streamlit gives you a shareable frontend.
In upcoming posts, we’ll move from chatbots to multi-agent systems, tool-using agents, and eventually production-ready workflows for real-world use cases.
Stay tuned for the next issue, where we’ll dive into adding memory and tools to your agent.
If you enjoyed this, hit subscribe and share The Agentic Brief with someone curious about building agents.