Building Agents? Watch Memento

LLMs sound like humans – so we often end up instructing them as if they experience the world like us.

But there’s a subtle difference – especially when used as Agents.

πŸ‘€ Humans experience a continuous stream of input and reasoning.

We build tiny hypotheses along the way:

β€œLet me hover over the tooltip to see what this button is for.”

It’s a loop of sense β†’ reason β†’ act, in continuity.

🧠 Agents, on the other hand, live in snapshots:

See screen β†’ Decide β†’ Act β†’ See new screen.

They’re like a human who:

  • Looks at the screen
  • Writes a letter to a controller to perform an action
  • Closes their eyes while it’s happening ← VERY IMPORTANT
  • Opens their eyes to a new scene – with no memory of the past

The only continuity? πŸ“

A notepad on the table – a few scribbled notes before they "blacked out."

So we asked ourselves:

β€œIf this were me, how would I use that notepad?”

We’d been giving agents summaries of prior steps – but something was still missing.

So we made a small tweak to the prompt:

πŸ‘‰ β€œWrite a note to your future self”

Result: the agent now jots down whatever it wants its future self to know, such as:

  • What hypothesis it’s testing
  • Why it chose this action
  • What to look for in the new state

So when it wakes up in the next iteration, it can answer: "What was I thinking?"
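The loop is easy to sketch. Here's a minimal Python version under stated assumptions: `call_llm` is a hypothetical stand-in for your model call (stubbed here so the sketch runs), and `observe`/`act` wrap whatever environment your agent drives. The note is the only state carried between snapshots.

```python
def call_llm(prompt: str) -> dict:
    # Stub: a real implementation would query an LLM and parse its reply.
    return {
        "action": "click_submit",
        "note": "Testing whether submit opens a confirmation dialog; "
                "check the new screen for a success banner.",
    }

def run_agent(observe, act, max_steps=3):
    note = "(first step - no prior note)"
    for _ in range(max_steps):
        screen = observe()
        prompt = (
            f"Screen:\n{screen}\n\n"
            f"Note from your past self:\n{note}\n\n"
            "Choose the next action, then write a note to your future self: "
            "what hypothesis you are testing, why you chose this action, "
            "and what to look for in the new state."
        )
        reply = call_llm(prompt)
        act(reply["action"])
        note = reply["note"]  # the only continuity between snapshots
    return note
```

The key design choice: the note isn't a log written *about* the agent – it's written *by* the agent, *for* the agent, which is why it naturally captures hypotheses rather than just actions.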

That single line β€” β€œWrite a note to your future self” β€”

gave our agent a memory-like thread.

A small change. A big leap in clarity and navigation. πŸš€

#AI #Agents #LLM #StartUp #BuildInPublic #AgenticAI
