
The Model That Remembers My Dad
Why the future of AI isn’t just smart. It’s personal.
My dad has Parkinson’s. His memory is starting to slip — and with it, his independence.
He also has stage 4 bone cancer.
For months, I’ve been thinking about how to preserve a part of him — not as a static recording, but as something interactive. Familiar. Present. I have an idea for a project called mem.ry — a small language model that could live locally on his device. One that helps him remember details, walk through daily steps, or simply carry on a familiar conversation.
Maybe one day he shares access with me. Maybe it lets me hear him. Or just maybe it lets him share his perspective with me long after he’s gone.
It’s not a product… yet.
But this is where my mind goes when I think about potential uses of AI at the edge.
Not just smarter tools — more human ones.
What Is an SLM?
A Small Language Model (SLM) is exactly what it sounds like:
A compact AI model — typically a few hundred million to a few billion parameters, versus the hundreds of billions in GPT-style giants — that can run locally on edge devices like phones, tablets, laptops, and wearables.
Think: fast, private, fine-tunable, and specialized.
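To make that concrete, here’s a rough sketch of what running locally can look like in practice. It assumes the Hugging Face transformers library and uses a small open-weights model (Qwen/Qwen2.5-0.5B-Instruct, roughly half a billion parameters) purely as a stand-in; any similarly sized model would do.

```python
# A minimal sketch, assuming the Hugging Face transformers library is installed.
# "Qwen/Qwen2.5-0.5B-Instruct" is only a stand-in for any small open-weights model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # roughly half a billion parameters
)

prompt = "In two sentences, remind me how to back up the photos on my phone."
output = generator(prompt, max_new_tokens=120, do_sample=False)
print(output[0]["generated_text"])
```

After the first download, the weights sit on your disk and every prompt runs on your own hardware.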
In creative work, SLMs open up a whole new class of possibilities:
- A songwriter with a lyric assistant trained on their personal style
- A designer with a model that understands their visual taste
- A field journalist with an offline summarization engine in their pocket
- A game developer with a character dialogue engine that runs on-device
And for those building tools?
SLMs are the difference between being a user and being a platform.
Why Small Beats Big (Sometimes)
Large models are great at generalization. They know a little bit about everything.
But creative professionals don’t want generic. They want voice. Taste. Rhythm. Identity.
And that’s where SLMs shine.
You don’t need a bajillion dials crammed into a model’s brain to write like yourself. You need something smaller, sharper — and yours.
You don’t need a massive server-sized model to:
- Echo your phrasing across 200 emails
- Generate character dialogue that matches your story universe
- Summarize your meeting notes in your own tone
- Draft lyrics that mirror your songwriting cadence
What you need is something smaller, faster, and closer to you.
And maybe most importantly?
You need something you control.
Creators Don’t Just Need Tools — They Need Models
In a previous article, I wrote:
“When prompts become the pipeline, your creative fingerprint becomes infrastructure.”
I wasn’t speaking metaphorically.
As AI systems continue to compress workflows, the real value for creatives will come from encoding their own decisions — not just asking for new ones.
Imagine owning a model that knows:
- How you resolve harmony
- The visual symmetry you gravitate toward
- The pacing you like in dialogue
- The sentence structures you favor in storytelling
That’s not a tool. That’s a collaborator.
And the good news? You don’t need a supercomputer to build it.
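Here’s roughly what building it can look like, as a sketch rather than a recipe: take a small base model, hand it a plain-text file of your own writing, and train a LoRA adapter with the Hugging Face peft library. The base model and my_writing.txt below are placeholders; the shape of the job is the point, and it fits on a single consumer GPU (or a patient laptop).

```python
# A rough sketch, assuming the transformers, datasets, and peft libraries.
# The base model and "my_writing.txt" are placeholders for your own choices.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "Qwen/Qwen2.5-0.5B"
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains a few million adapter weights instead of touching the full model.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Your own words are the training data: one plain-text file of your writing.
data = load_dataset("text", data_files="my_writing.txt")["train"]
data = data.filter(lambda row: row["text"].strip())
data = data.map(lambda row: tokenizer(row["text"], truncation=True, max_length=512),
                remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="my-voice-adapter",
                           num_train_epochs=3,
                           per_device_train_batch_size=2,
                           learning_rate=2e-4,
                           report_to="none"),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# The result is a small adapter (a few megabytes), not a new giant model.
model.save_pretrained("my-voice-adapter")
```

The adapter loads on top of the unchanged base model at inference time, so you can swap it, version it, or delete it whenever you like.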
The Edge Advantage
Running models on-device isn’t just about speed. It’s about:
- Privacy: Your data doesn’t need to leave your hands.
- Portability: Your model travels with you.
- Personalization: Fine-tune it to your quirks and taste.
- Resilience: No Wi-Fi? No problem.
In a world of roaming artists, remote teams, and creative independence?
SLMs at the edge are freedom tech.
Tools like mem.ry wouldn’t work if they had to ping the cloud every 5 seconds. This kind of intimacy demands local intelligence.
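For a sense of what that local intelligence might look like in code, here’s a minimal sketch: a quantized model file on disk, a short chat loop, and no network call anywhere. It assumes the llama-cpp-python package and a GGUF model you’ve already downloaded; the file path and the system prompt are placeholders.

```python
# A minimal sketch, assuming llama-cpp-python and a small GGUF model already on disk.
# The model path and system prompt are placeholders; nothing here touches the network.
from llama_cpp import Llama

llm = Llama(model_path="models/small-model-q4.gguf", n_ctx=2048, verbose=False)

history = [{"role": "system",
            "content": "You are a gentle, familiar assistant who helps with daily routines."}]

while True:
    user = input("you: ")
    if not user:
        break
    history.append({"role": "user", "content": user})
    reply = llm.create_chat_completion(messages=history, max_tokens=200)
    text = reply["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": text})
    print("model:", text)
```

The conversation lives in a Python list in memory, on the device, and nowhere else.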
Final Thought
Not everyone needs to train a model. But more of us will want to.
Because the moment you stop asking what a tool can do for you — and start shaping how it thinks — everything changes.
And that’s the shift we’re heading toward.
Small models. Local minds. Personal pipelines.
The future of creativity might not come from the cloud. It might come from your own device — and your own voice.
Want a follow-up on how to build your first SLM, or examples of the creative prompts I’ve used with mine? Curious about mem.ry? Let me know in the comments.