LLMs sound like humans, so we often end up instructing them as if they experience the world like us.
But there's a subtle difference, especially when they're used as Agents.
Humans experience a continuous stream of input and reasoning.
We build tiny hypotheses along the way:
"Let me hover over this button to see what its tooltip says."
It's a continuous loop: sense → reason → act.
Agents, on the other hand, live in snapshots:
See screen → Decide → Act → See new screen.

They're like a human who:
- Looks at the screen
- Writes a letter to a controller to perform an action
- Closes their eyes while it's happening (VERY IMPORTANT)
- Opens their eyes to a new scene, with no memory of the past
The only continuity?
A notepad on the table: a few scribbled notes before they "blacked out."
So we asked ourselves:
"If this were me, how would I use that notepad?"
We'd been giving agents summaries of prior steps, but something was still missing.
So we made a small tweak to the prompt:
"Write a note to your future self"
Result: the agent now jots down whatever it wants its future self to know, such as:
- What hypothesis itβs testing
- Why it chose this action
- What to look for in the new state
So when it wakes up in the next iteration, it can answer the question: "What was I thinking?"
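The loop above can be sketched in a few lines. Everything here is illustrative, not our actual code: fake_llm stands in for a real model call, and observe/act stand in for real screen capture and UI control.

```python
# A minimal sketch of the snapshot loop with a "note to your future self".

def observe() -> str:
    # Stand-in for taking a screenshot / reading the UI state.
    return "login page"

ACTIONS = []

def act(action: str) -> None:
    # Stand-in for clicking, typing, etc. The agent's "eyes are closed" here.
    ACTIONS.append(action)

def fake_llm(prompt: str) -> dict:
    # Stand-in for a real LLM call that returns structured output.
    return {
        "action": "click 'Sign in'",
        "note": "Testing whether 'Sign in' opens the auth form; "
                "check the new screen for a username field.",
        "done": True,
    }

def run_agent(goal: str, max_steps: int = 5) -> str:
    note = ""  # the notepad: the only continuity between snapshots
    for _ in range(max_steps):
        screen = observe()                # see screen
        prompt = (
            f"Goal: {goal}\n"
            f"Current screen: {screen}\n"
            f"Note from your past self: {note or '(none)'}\n"
            "Pick the next action, then write a note to your future self: "
            "what hypothesis you're testing, why you chose this action, "
            "and what to look for in the new state."
        )
        result = fake_llm(prompt)         # decide
        act(result["action"])             # act (eyes closed)
        note = result["note"]             # scribbled before blacking out
        if result.get("done"):
            break
    return note

last_note = run_agent("log into the dashboard")
```

The key design choice is that the note is written by the model itself, in its own words, and fed back verbatim on the next snapshot, rather than a summary we compute for it.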
That single line, "Write a note to your future self,"
gave our agent a memory-like thread.
A small change. A big leap in clarity and navigation.
#AI #Agents #LLM #StartUp #BuildInPublic #AgenticAI

