Stop Prompting. Start Building: Your First AI App!
Welcome back to HosseiNotes! 🚀
Quick heads-up before we dive in 👀
This issue is very beginner-friendly.
If you’re already shipping AI apps in production, juggling APIs, monitoring token spend, and arguing about latency vs quality…
👉 you can safely skim (or skip) this one 😅
But don’t go far: advanced builds and topics are coming next. Provider abstractions, scaling patterns, SLMs, and real product architecture are right around the corner.
In our last issue, we took our little Trip Advisor and gave it a reality check!
Instead of vibes-only recommendations, we fed it a massive, carefully crafted prompt that forces it to:
🌦️ Check real weather
✈️ Look up actual flight & hotel prices
💸 Stay ruthlessly honest about budgets
No more “Sure, you can visit Paris for €200” fairy tales. 🧚‍♂️❌
That was fun.
But now… the real pivot begins 👀
From Playground Prompts → Real AI Applications 🏗️
We’re officially leaving the playground.
Prompts are amazing for experimenting and discovering what models can do.
But the second you want to ship something reliable, something that lives inside:
a product
a website
a mobile app
a Slack bot
a dashboard
…you need to connect to the LLM through code, using official provider APIs.
This is where the magic becomes production-grade ✨
The New Paradigm: LLM APIs Unlock Entire Product Categories 🔓
With just a few lines of Python (or JS… or whatever you like), you can now build things that were straight-up impossible before 2023–2024:
✍️ Personalized content engines
Blog posts, emails, product descriptions, social media, all in your brand voice

🤖 Intelligent customer support bots
Handling 80% of common questions without human hand-off (support teams rejoice 🙏)

🧑‍💻 Code assistants for dev teams
Reviewing PRs, suggesting fixes, generating boilerplate

📊 Data analysis copilots
Ask questions in plain English → get SQL, charts, or business insights

🎙️ Voice-enabled assistants
Inside apps, devices, or dashboards

🏠 AIoT (AI + Internet of Things) madness
“Make the living room cozy for movie night 🍿” → lights dim, temperature adjusts, blinds close
Or factories where sensor data gets translated into plain-English explanations
These aren’t “just chatbots.”
They’re new product categories that create real value:
⏱️ massive time savings
😍 better UX
📈 higher retention
💰 lower support costs
⚡ faster iteration
And the best part?
You don’t train or host massive models yourself.
You just:
call an API
pay per use (usually per token)
focus on the product magic ✨
Welcome to the API Economy for Intelligence 💡
Every major LLM provider gives you:
🔑 an API key
📊 a dashboard to track usage, costs, and limits
As a business-aware engineer, you need to think about this like any other infrastructure:
cost per token
rate limits
latency
reliability
how usage scales with growth
Because yes…
A viral feature can turn a €50/month hobby into a €5k/month surprise bill 😬
Token accounting matters. A lot.
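To make that concrete, here’s a minimal back-of-the-envelope cost sketch. The per-token prices below are illustrative placeholders, not real provider rates, so always check your provider’s pricing page:

```python
# Rough monthly cost estimator for an LLM-backed feature.
# NOTE: the prices below are made-up placeholders for illustration,
# NOT real provider rates. Check your provider's pricing page.

PRICE_PER_1M_INPUT = 2.50    # € per 1M input tokens (placeholder)
PRICE_PER_1M_OUTPUT = 10.00  # € per 1M output tokens (placeholder)

def monthly_cost(requests_per_day: int,
                 input_tokens: int,
                 output_tokens: int) -> float:
    """Estimate monthly spend for a feature with a fixed request shape."""
    daily = (
        requests_per_day * input_tokens / 1_000_000 * PRICE_PER_1M_INPUT
        + requests_per_day * output_tokens / 1_000_000 * PRICE_PER_1M_OUTPUT
    )
    return round(daily * 30, 2)

# A hobby feature: 100 requests/day, ~1500 input + ~800 output tokens each
print(monthly_cost(100, 1500, 800))      # → 35.25

# The same feature after going viral: 10 000 requests/day
print(monthly_cost(10_000, 1500, 800))   # → 3525.0
```

Same feature, 100× the traffic, 100× the bill. That’s the viral-surprise math in five lines.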
How It Actually Looks in Code 🧑‍💻
In our private repo (HosseinCodes), I’ve wired the full working version to OpenAI’s API.
Here’s the core pattern (simplified):
from openai import OpenAI
import os

# Read the key from an environment variable instead of hardcoding it
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a ruthless budget travel advisor... [the full mega-prompt we built last time]"},
        {"role": "user", "content": "Weekend trip from Madrid, max €300, romantic vibe"},
    ],
    temperature=0.7,
    max_tokens=1200,
)

final_answer = response.choices[0].message.content
print(final_answer)

That’s it.
➡️ One API call
➡️ A fully reasoned trip plan
➡️ Real-world constraints baked into the logic
And if you’re new to this, here’s why that single call is so powerful 👇
🧠 system message
This is where you define the role and behavior of the model.
In our case: “You are a ruthless budget travel advisor…”
Think of it as setting the personality, rules, and boundaries once, before the conversation even starts.

💬 user message
This is the actual request coming from your user or app UI.
Example: “Weekend trip from Madrid, max €300, romantic vibe”

🎭 Roles (system, user, assistant)
These roles help the model understand who is speaking and why.
They’re critical for keeping responses consistent and avoiding chaos in longer interactions.

🎚️ temperature
Controls creativity vs predictability.
Lower = safer, more deterministic answers.
Higher = more creative (and sometimes more chaotic 😅).
For some product features, you’ll want this somewhere in the middle, but for strict, deterministic, and repeatable behavior (like classifications, validations, or structured outputs), setting it to 0 is often the right choice.

📏 max_tokens
A hard cap on how long the response can be.
This protects you from runaway outputs and runaway costs 💸.
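One lightweight pattern for these knobs: centralize them as named presets so each feature picks a mode instead of hand-tuning numbers all over your codebase. This is a sketch, not code from the repo, and the preset names and values are my own illustrative assumptions:

```python
# Illustrative request presets: one place to tune temperature / max_tokens
# per use case. Mode names and values are assumptions, not repo code.

PRESETS = {
    # Strict, repeatable behavior: classification, validation, structured output
    "deterministic": {"temperature": 0.0, "max_tokens": 300},
    # Balanced product features, like our trip advisor
    "balanced": {"temperature": 0.7, "max_tokens": 1200},
    # Brainstorming or marketing copy, where variety is a feature
    "creative": {"temperature": 1.0, "max_tokens": 1200},
}

def request_params(mode: str, model: str = "gpt-4o") -> dict:
    """Build the keyword arguments for a chat completion call."""
    if mode not in PRESETS:
        raise ValueError(f"unknown mode: {mode}")
    return {"model": model, **PRESETS[mode]}

print(request_params("deterministic"))
# → {'model': 'gpt-4o', 'temperature': 0.0, 'max_tokens': 300}
```

Now a classification feature and a creative feature can share one call site and still behave completely differently.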
Put differently:
This isn’t “just sending text to a chatbot.”
You’re programming behavior, controlling risk, cost, and output quality, all from code.
That’s the moment when prompts stop being toys… and start being infrastructure.
It’s Not Just OpenAI 🌍
Another popular option right now (late 2025) is Google’s Gemini API.
Equivalent example:
from google import genai
import os

# Same habit as before: read the key from the environment
client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="You are a ruthless budget travel advisor... [paste the full prompt here]\n\nUser: Weekend trip from Madrid, max €300, romantic vibe"
)

print(response.text)

Other providers (Claude, Grok, and more) also offer their own SDKs.
The process is almost always the same:
get an API key
call a model
send messages
receive structured output
Only the syntax changes slightly.
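You can see that shared shape if you squint: every provider boils down to “messages in, text out.” Here’s a tiny sketch of that idea, with fake backends standing in for real SDK calls (purely illustrative, and not the abstraction we’ll build properly later):

```python
# The common shape across providers: messages in → text out.
# Fake backends stand in for real SDK calls (illustration only).

def fake_openai(messages: list) -> str:
    return "openai: " + messages[-1]["content"]

def fake_gemini(messages: list) -> str:
    return "gemini: " + messages[-1]["content"]

BACKENDS = {"openai": fake_openai, "gemini": fake_gemini}

def ask_llm(provider: str, messages: list) -> str:
    """Dispatch the same messages to whichever backend you name."""
    return BACKENDS[provider](messages)

msgs = [{"role": "user", "content": "Weekend trip from Madrid"}]
print(ask_llm("openai", msgs))  # → openai: Weekend trip from Madrid
print(ask_llm("gemini", msgs))  # → gemini: Weekend trip from Madrid
```

Same input, different provider, one line of difference. Hold that thought.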
(And yes, we’ll talk later about more robust approaches like LangChain or LangGraph to manage this cleanly at scale 😉)
Why This Step Changes Everything 🚀
Going from playground prompts → API calls unlocks:
🔁 Full programmability
Loops, retries, caching, multi-step reasoning

🔌 Integration power
Databases, frontends, notifications, payments

📊 Scalability & monitoring
Track usage, costs, performance

💼 Product thinking
You’re no longer demoing, you’re building features users pay for
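As a taste of that programmability, here’s a minimal retry-with-exponential-backoff wrapper. It’s a generic sketch (not from the repo), and note that real SDKs often ship their own retry options too:

```python
import time

# Minimal retry-with-exponential-backoff sketch for any flaky call,
# e.g. an LLM API hitting a rate limit. Generic illustration, not repo code.

def with_retries(fn, attempts: int = 4, base_delay: float = 0.01):
    """Call fn(); on failure, wait base_delay * 2**i and try again."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** i))

# Demo with a fake "API" that fails twice, then succeeds
calls = {"n": 0}

def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "trip plan ready"

print(with_retries(flaky_call))  # → trip plan ready
```

In the playground, a failed request means you hit “retry” yourself. In code, your product just shrugs it off.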
This is the bridge from
👉 “cool experiment”
to
👉 “viable business feature”
And yeah… right now the repo is locked to one provider.
It works great, until prices change, a better model appears, or you need a cheaper/faster fallback 😏
But don’t stress.
Next week, I’ll show you the simple solution that makes switching providers feel effortless:
one string change
zero rewrite pain
For now:
clone the repo (if you don’t have access yet, just send your GitHub username to notes@hossein.ai)
drop in your API key
run it
watch it plan a real trip
You’ve just graduated from prompt wrangler → AI application builder 🎓🎉
Huge moment. Be proud of it.
See you next time for the “never worry about vendor lock-in again” upgrade.
Stay curious (and watch those token costs 😎)