Kelvin Kaifung Chan

I'm co-founder at IRL, building tools that connect us with intelligence — whether it's artificial or organic.

Some things I'm currently working on:

MONOid
Plan and review the week in one space across your domains, synced from the tools you already use.

WyrdOS
A calm, opinionated operating system for teams who want clarity before velocity: ethos, principles, goals, habits, and projects in one legible stack.

Pilfer
Treats spend as a design decision: capture intent before you buy, and keep scouting, assets, and runway in context instead of losing them to spreadsheets.

Proximity
A spatial research surface that turns Are.na blocks into multiplayer grids for public thinking.

Questions

Some questions I find interesting at the moment, aside from those related directly to IRL and what we're building.

How does a self-taught approach shape technical decisions and engineering culture?

Without formal training, engineers often develop judgment through building, learning to prioritise, adapt, and ask better questions. This can lead to faster iteration and stronger product intuition, but it may also introduce risks: unclear abstractions, inconsistent standards, and cultural gaps around documentation, review, and long-term maintenance. What values emerge when intuition, not curriculum, guides the work?

How can agents work alongside teams in a way that maximises impact instead of creating more work?

Agents need to sit next to human teams and amplify what people already do, not add coordination debt, rework, or endless clean-up. For that to happen, systems have to be cross-contextual: able to carry meaning across tools, roles, and histories so automation augments judgment instead of fighting opaque hand-offs. What would it take for agent-assisted work to feel like leverage, not overhead?

What role does ambiguity play in technical work?

Engineering often pushes toward clarity, precision, and closure. But much of product development, especially early on, requires holding multiple possibilities open. Ambiguity can be productive, even necessary. Where is it helpful, and where does it become a liability?

Are the mindsets required for finding product–market fit and scaling fundamentally opposed?

Product–market fit demands exploration: speed, openness, and a willingness to work with ambiguity. Scaling requires optimisation: structure, discipline, and a bias toward repeatability. These aren't just different stages; they often rely on different ways of thinking. What does it mean to bring these logics together? Are compromises inevitable, or can both mindsets coexist without one diluting the other?