#48 - Scaffold, Not Outsource Thinking

Building a Pattern Language for Product Problem-Solving

Hey y'all 👋 how's life? I hope you're doing great!

Let's talk about something we can't ignore anymore: Large Language Models (LLMs). Whether you love them, fear them, or are just curious - it's undeniable they're reshaping how we solve product problems. In this blog post, I want to share how I've integrated LLMs into my product problem-solving workflow and why it's been transformative for me. Hopefully, it'll inspire you to try something similar in your own work.

Some caveats before we begin:

  • ❌ This article is not about how to prompt LLMs so that they generate better outputs. Prompt engineering can get you pretty far, though, so I recommend reading more about it elsewhere.
  • ❌ This article is not about introducing new AI tools or concepts (MCP, Agentic AI, or what have you). Those are good to know, but they'll probably be outdated soon.
  • ✅ This article is about how I'm working with LLMs to solve product problems better. To leverage LLMs to help scaffold thinking instead of outsourcing it.

But first, let me tell you a story about a particular problem I had to deal with, and how that inspired me to think about pattern language and problem solving. If you just want to get straight to the meat of the process, I suggest skipping to the "A Problem-Solving Process with LLM and Pattern Language" heading.

Inspiration: from confusion to clarity

Code Reuse

At Holistics, our product is built around an as-code philosophy. When users build dashboards, what they're really doing under the hood is writing structured code. That code gives us benefits like version control, programmatic reuse, and composability. For users, we offer a dual-mode interface: edit in UI if you prefer clicks, or switch to code if you want full control.

Dual mode when building dashboards in Holistics

One capability this enables is code-based reuse: a dashboard block can be defined by referencing another block or calling a function that outputs its configuration. Similarly, a block’s position can be determined dynamically via code. These mechanisms unlock flexible reuse, but they also come with risk: if a non-technical user accidentally modifies a reused block through the UI, it could break the logic or silently override what the code intends.

To protect the integrity of these programmatic definitions, we make such blocks read-only in the UI. They show indicators like “Reused Block” or “Position Reused Block,” and we include links to docs explaining why editing is disabled.

"Reused Block" UI Indicator

Reusable Component Library

All of that was working well. Fast forward a bit: I was wrapping up a feature we called the Reusable Component Library - a way for data teams to publish commonly used blocks (like charts or filters) into a browsable library.

These components are parameterized and editable via the UI, making them ideal for casual dashboard builders who want to build reports quickly using standardized charts their teammates have already created. They're different from Holistics' default charts in that they’re built by our customers' own data teams and tailored to their specific needs. The goal of the feature was to help data teams enable their organizations to move faster and avoid reinventing the same charts from scratch.

That’s when we hit a naming wall.

When a user adds a block from the library into their dashboard, is it now a Reused Block? Technically, yes - it’s reusing code under the hood. But Reused Block already has a specific meaning in our product: it refers to blocks defined via code that are rendered read-only in the UI. Maybe we could just call it a Library Block! But then, what exactly is the difference between a Reused Block and a Library Block? The terms sound similar, but they point to very different behaviors, and now we had to figure out how to explain that clearly.

How a clear analysis didn't solve the real problem

I thought I had a clean path forward. If the problem was confusion between these two concepts, maybe we just needed to clarify them. I started mapping out their differences: who they were built for, how discoverable they were, what kind of reuse intent they carried, how easy they were to customize. Admittedly, I thought it was a rather elegant analysis.

It looked like this. I wanted to see all the ways that these blocks are different from one another.

| Dimension | Library Blocks | Reused Blocks | Trade-offs |
|---|---|---|---|
| Primary audience | Non-technical or casual dashboard builders | Technical dashboard builders comfortable with code | Library: suited for non-technical or casual dashboard builders. Code-reuse: requires comfort with code |
| Intent of reuse | Explicit, global reuse intent | Local, convenience-based reuse without global intent | Library: clear reuse intent (meant to be standardized and reused across dashboards). Code-reuse: faster, low-effort duplication, but unclear intent (may be reused within only one dashboard, or ad hoc) |
| Discoverability | Highly discoverable via a dedicated library interface | Minimal or none; reuse happens locally in code | Library: easy browsing of standardized blocks. Code-reuse: low discoverability, harder to manage at scale |
| Ease of customization | Editable via an intuitive UI (updating parameters) | Requires code editing (modifying function parameters, or extending) | Library: intuitive UI customization. Code-reuse: flexibility restricted by technical constraints |
| Scalability & maintainability | High, through standardized components | Lower; scattered reuse patterns are harder to maintain | Library: easier maintenance at scale. Code-reuse: quick short-term wins, costly long-term maintenance |

So I brought it into a review meeting. A few minutes into presenting, our Chief Architect leaned back and said something that stopped me cold:

“These dimensions are not wrong, but they’re irrelevant.”

I didn’t have a great response - mostly because I didn’t understand why. I had thought the problem was about properly defining the two types of blocks. But as the conversation unfolded, something shifted.

The "Aha moment": From concepts to affordances

Instead of talking about concept definitions or reuse intent, we started listing things users might try to do with it. Can they move the block? Resize it? Change its parameters? Customize it through the UI?

Suddenly, the fog of confusion lifted. These weren’t abstract distinctions anymore. They were tangible capabilities. A block that can’t be dragged behaves differently. A block that accepts parameters offers affordances a static block doesn’t.

That’s when it clicked: the user doesn’t care how a block is defined. They care what they can do with it. That simple shift - focusing on affordances - led to a much simpler solution. We didn’t need new terminology or conceptual categories. We just needed clear indicators: one to show a block is fixed in place, one to show it’s configurable, one to show it came from the library.

| Affordance | Reused Position Block | Code-Reused Block | Library Block |
|---|---|---|---|
| Movable or resizable | No | Yes | Yes |
| Parameterizable | Irrelevant | Yes | Yes |
| Discoverable via library | Irrelevant | No | Yes |

Once we were clear on what each type of block affords users to do, the solution was just three different UI indicators for the three types of blocks. No extra concept definitions needed. I don't have access to Holistics anymore, so unfortunately I can't show you what the indicators look like, but I definitely walked away from the meeting feeling it was a good solution.
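To make the affordance framing concrete, here's a minimal sketch of the idea in Python. Everything here is my own illustration - the class names, affordance fields, and indicator labels are hypothetical, not Holistics' actual implementation. The point is that indicators derive purely from what a block lets users do, never from how it's defined:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BlockAffordances:
    """What a dashboard block lets the user do (fields are illustrative)."""
    movable: bool          # can it be dragged or resized?
    parameterizable: bool  # can its params be edited via the UI?
    in_library: bool       # is it discoverable via the component library?

def ui_indicators(a: BlockAffordances) -> list[str]:
    """Derive indicator labels from affordances alone."""
    indicators = []
    if not a.movable:
        indicators.append("Fixed position")
    if a.parameterizable:
        indicators.append("Configurable")
    if a.in_library:
        indicators.append("From library")
    return indicators

# The three block types from the table above:
position_reused = BlockAffordances(movable=False, parameterizable=False, in_library=False)
code_reused     = BlockAffordances(movable=True,  parameterizable=True,  in_library=False)
library_block   = BlockAffordances(movable=True,  parameterizable=True,  in_library=True)
```

Notice there's no "block kind" branching anywhere - the user-facing behavior falls out of the affordances table directly, which is exactly why no new terminology was needed.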

But I also felt the need for introspection. Why didn’t I think of that? It wasn’t a novel insight. In fact, I’d encountered and even applied similar reframings before. But in that moment, I couldn’t pattern-match the situation to the approach that would’ve unlocked it.

This frustration wasn’t new to me. It reminded me of how often I've struggled to recall and reuse insights precisely when they're needed most. I realized that, rather than relying on grand frameworks that are hard to recall mid-crisis, maybe I needed smaller, modular ways to capture insights - something simpler and more actionable. This line of thought naturally led me back to Christopher Alexander’s idea of pattern languages, which I'd been intrigued by but hadn't yet systematically applied.

A Problem-Solving Process with LLM and Pattern Language

Why pattern language matters in product management

So, I did what I often do when I need to process something: I had a conversation with ChatGPT. I poured in the entire story - my problem context, the mental model I tried to use, the elegant but ultimately unhelpful framework I brought to the table, the unexpected turn during the meeting, and the simple UI solution we landed on. It was mostly therapeutic. When I’m unsettled, talking to ChatGPT feels surprisingly calming. You can ramble. It listens.

At the end of the conversation, I did something on a whim: I asked it to synthesize the experience into a set of problem-solving patterns.

Why? Partly because I’ve always had a lingering fascination with Christopher Alexander’s pattern language - not just as a design framework, but as a way of thinking. When you spend enough time thinking about a concept, you start seeking connections. I think that’s what happened. And as I waited for ChatGPT’s response, it dawned on me - there was something interesting here.

Alexander’s original idea of a pattern language was deeply rooted in architecture: small, recurring patterns that solve design problems in particular contexts. The software world took that and gave us design patterns like Singleton and Factory (if you're not a dev, please ignore) - tools for thinking in modular, reusable ways. But in product management, most of what we reach for are large, self-contained frameworks - Double Diamond, JTBD, Lean Startup, etc.

And here’s the thing: those frameworks are valuable, but they’re too grand when you’re deep in a messy, tactical, real-world problem. They don’t flex easily. If you're skilled, you can deconstruct them into smaller, more modular mental moves - but we rarely talk about those moves on their own terms. They’re scattered, often hidden, and tied up in a narrative that’s too big to conjure up mid-crisis.

Take this for example: I was once working on a feature that had stalled because the team couldn’t agree whether to explore multiple divergent ideas or just iterate quickly on a chosen one. One PM was arguing for “moving fast and testing” in true Lean Startup fashion. Another wanted to go “wide then narrow” per the Double Diamond. We were stuck - until someone casually asked, “What would be the riskiest assumption if we committed to this version?”

That single move - pulling out the “riskiest assumption” idea from Lean Startup - suddenly gave us clarity. But then we followed it with another micro-move: framing that assumption as a Job Story. That came straight from JTBD. By defining what users were trying to accomplish in a particular format that's independent from their current solutions, we realized the version we were debating didn’t even speak to the real job. We threw it out and started over.

You may wonder how we pulled that move out. I don't have a good answer other than instinct, or expertise. But my point is that we usually think in terms of JTBD and Lean Startup, rather than in terms of Job Stories and Riskiest Assumptions. These smaller abstractions and mental models are more useful and practical. You can only think about high-level abstractions such as strategy or vision once you've mastered the lower-level ones. Otherwise, you risk saying vague things that are only loosely connected to reality.

We usually only talk about frameworks in the abstract. What I found missing was a pattern language for problem-solving in product work. I'm talking about a loosely connected garden of recognizable moves - each shaped by real-world product experience, each usable on its own, each combinable when needed. You don’t need a prescribed order. Just a sense of what fits, given where you are.

If you're tracing my thoughts, a reasonable objection might be: “But problem-solving in product is too context-specific. There’s no universal pattern. If we abstract too much, we'll probably end up with frameworks and methodologies again.” That’s true - and also beside the point I want to make about patterns. Alexander never intended for pattern languages to be universal.

“Barns in any given Swiss valley are similar, as are all alpine barns... Each is a little different because of where it is located and each farmer’s particular needs... Therefore, farmers do not copy particular barns. Alexander says that each farmer is copying a set of patterns which have evolved to solve the Swiss-valley-barn problem.”
(Patterns of Software, p. 48)
A Swiss alpine barn

In Patterns of Software, Richard Gabriel makes this point explicit: patterns are only effective within specific contexts, and they evolve to solve the problems of particular communities - just like the barns in a Swiss valley. They’re not abstracted systems you apply wholesale. They're situated moves that work because they’ve been shaped by shared needs. That’s why the goal isn’t to create a universal playbook for product problem-solving - it’s to grow a pattern language grounded in the specific situations I encounter.

Using LLM to build and reuse patterns

But as appealing as the concept of a personal pattern language sounded, there was still one problem: practical extraction. How do I reliably and quickly pull meaningful patterns out of my experiences? Writing reflections helped, but it wasn't systematic enough. I needed something more structured, something interactive that could push my thinking further. That's when I thought of using LLMs - not just as solution generators, but as collaborators that could help me identify and articulate these patterns clearly.

So I began to experiment.

  1. First, I would do the work. Sit with a thorny problem. Write down my early framings, constraints, failed approaches, and aha moments - in Obsidian, mostly stream-of-consciousness.
  2. Then I’d bring all of that into a ChatGPT conversation. Not to ask for a solution, but to reflect on the thinking process: What lens helped me break through? What framing misled me? What could be extracted, reused, named?
  3. I’d ask ChatGPT to synthesize the conversation into a draft pattern or two: a short write-up, contextualized, actionable, and named.
  4. I’d create a new GPT Project workspace and upload these patterns as documents, so ChatGPT could refer to them in future problem-solving conversations.
  5. And as new problems emerged, I’d repeat the process - starting again with my messy notes, but now guided by a growing internal and external language of patterns I could reach for. I'd ask ChatGPT to identify relevant patterns for this situation and apply them to uncover new perspectives and framings.
An overview of my current problem-solving process with LLM and Pattern language

It worked wonders. I'm not easily impressed, but I was pleasantly surprised by how well this process has been working so far. The patterns became scaffolding - not constraints. They helped me see new perspectives, and explain my thinking more clearly to others. And over time, I started to trust not just the outcomes, but the process itself.

What’s a Pattern, Really?

The term pattern here doesn’t mean a trend or a template. It comes from the work of architect Christopher Alexander, who coined the idea of a pattern language to describe design solutions that work reliably in recurring contexts. A pattern isn’t just a best practice - it captures a problem you keep running into, the forces that make it tricky, and a solution that balances those forces. Each pattern is a compact nugget of design wisdom, grounded in observation and experience.

Here’s an everyday example:
Pattern: Light on Two Sides of Every Room

  • Context: You’re designing a room for human use.
  • Problem: How do you make the space feel naturally pleasant?
  • Forces: A single window creates harsh shadows and uneven light.
  • Solution: Put windows on two walls. Light from multiple angles softens the space and lifts the mood.
  • Consequences: Rooms feel calmer, more usable, and emotionally inviting.

What’s beautiful about patterns is that they’re context-aware and actionable. They don’t dictate form. They guide judgment. When applied in the right situation, a good pattern saves you from fumbling through trial-and-error. It names what’s happening, and shows a direction that’s been tested before. That’s what I’m trying to do here: name and reuse patterns I’ve found while solving product problems with LLMs.

What does my pattern-based problem-solving setup with an LLM look like?

A pattern in the pattern language

For example, from the story above, I extracted this pattern that has helped me many times since then.

At the start, every pattern sets the stage with Context. This is situational scaffolding. Patterns don’t exist in a vacuum. The context names the moment: a design conversation has gone vague, a team is tempted to scale prematurely, or a solution feels satisfying but isn’t grounded. It’s the first test of relevance. If the context resonates, the rest of the pattern is worth reading.

Then comes the Problem and Forces. This is where the tension lives. The problem captures the core challenge - the decision-point, the trap, the ambiguity. The forces are what make it non-trivial. They lay bare the competing instincts, social pressures, technical trade-offs, or cognitive distortions at play. The pattern becomes relatable not just because of what it solves, but because it names what’s pulling you off course.

Finally, the Solution and Consequences show the move and what it unlocks. But this isn’t a checklist or prescription. It’s usually a posture, a lens, or a principle that clarifies the path forward, often with a concrete example that grounds it. The best patterns also reveal the tradeoffs: what it gains, what it risks, and how it might land differently in practice. Together, these parts form a compact, situational logic - giving you not only a tool, but a sense of when and how to wield it.
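If it helps to see that anatomy as a data shape, here's a rough sketch in Python. The fields mirror the sections described above; the schema itself is just my own convention, not a formal standard, and I've filled it with the Light on Two Sides example from earlier:

```python
from dataclasses import dataclass

@dataclass
class Pattern:
    """One entry in a personal pattern language (a sketch, not a formal schema)."""
    name: str
    context: str         # the situation in which this pattern is relevant
    problem: str         # the core tension or decision point
    forces: list[str]    # what makes the problem non-trivial
    solution: str        # the move, posture, or lens that clarifies the path
    consequences: str    # trade-offs and what the move unlocks

light_on_two_sides = Pattern(
    name="Light on Two Sides of Every Room",
    context="You're designing a room for human use.",
    problem="How do you make the space feel naturally pleasant?",
    forces=["A single window creates harsh shadows and uneven light."],
    solution="Put windows on two walls so light arrives from multiple angles.",
    consequences="Rooms feel calmer, more usable, and emotionally inviting.",
)
```

In practice my patterns live as plain Markdown notes rather than code, but keeping the same named sections in every note is what lets an LLM pattern-match the Context field against a new problem.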

The pattern language

I stored my patterns in a folder called pattern language in Obsidian. As mentioned, these patterns are extracted from actual problems, situations and struggles that I personally encounter when solving product problems. They're not from publicly available frameworks or anything like that.

The ChatGPT Project

In ChatGPT, I have a Project with these patterns attached as context. ChatGPT limits each project to 20 uploaded files, so you can use Claude as an alternative (or any of your favorite LLM apps that support the Project concept).
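One small workaround for the file limit - a sketch of my own, not part of any official workflow - is to bundle many small pattern notes into a single Markdown file before uploading. The folder layout here is hypothetical; adjust it to your own vault:

```python
from pathlib import Path

def bundle_patterns(pattern_dir: str, out_file: str) -> int:
    """Concatenate individual pattern notes into one upload-friendly file.

    Merges every .md note in `pattern_dir` (e.g. an Obsidian folder) into a
    single document, using each filename as a section heading. Returns the
    number of notes bundled.
    """
    notes = sorted(Path(pattern_dir).glob("*.md"))
    sections = [f"# {note.stem}\n\n{note.read_text(encoding='utf-8')}" for note in notes]
    Path(out_file).write_text("\n\n---\n\n".join(sections), encoding="utf-8")
    return len(notes)
```

With this, even a few dozen patterns collapse into one or two files, comfortably under a per-project upload cap.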

Instructions for ChatGPT

I also put the text below as an instruction in my ChatGPT Project. These instructions are crucial if you want to avoid telling ChatGPT to pay attention to the same thing over and over again.

- Help me solve problems using the pattern language.
- Since we're dealing with pattern languages, wording and vocabulary matter. Please help me keep an eye on this.
- My goal is to solve problems, and also to gradually build up a pattern language for problem-solving grounded in my actual problem-solving experience, situated mostly within a Product Management and Startups context. Therefore, please pay attention not only to solving the problem at hand, but also to how this experience can be turned into a pattern that can be integrated into our pattern language.
- Pay attention to salient features of experiences, but try to avoid making things up or stretching too much.
- Be direct, helpful and try to stay at the right abstraction level.
- Don't try to overgeneralize or apply patterns that are irrelevant to the context at hand.
- At some point during any conversation, I'll probably ask you to turn our conversation into a pattern; always pay attention to how it would fit into our existing pattern language. If there are potential duplicates, or opportunities to group or organize patterns together, explicitly let me know.
- Explicitly state relevant patterns when helping me solve a problem.

How LLMs approach problem-solving using the pattern language

There was this time I needed to refine some UI copy in the app, and my teammates had shared some mixed feedback. So I turned to ChatGPT to think it through. In the past, it might’ve just thrown out a bunch of suggestions - some decent, some not - but without much grounding. But because I already had a pattern language to work with, the conversation went differently.

Instead of guessing, ChatGPT pulled out relevant patterns, explained why they applied, and used them to reason through the issue. That gave me a much clearer lens to look at the problem. I could pause, think about how I’d apply those patterns myself, and even come up with my own version. Then I’d compare what I thought with ChatGPT’s take - sometimes combining the two, sometimes spotting a better idea entirely.

In another case, I was setting up a demo in Holistics to showcase a new feature we were launching (watch here if you're interested). I fed ChatGPT a bunch of context - my current thinking, the broader goals, the constraints - and we brainstormed together. The conversation really helped me nail down the technical setup of the demo. It clarified what to show, how to structure the flow, and what details to highlight.

The narrative part, though, that still took more back-and-forth. I had to keep working with ChatGPT to shape a story that made sense and landed well. But overall, the process made it easier to move from a rough idea to something tangible and shareable.

How LLM extracted a new pattern

After successfully determining the technical elements of the demo video using existing patterns, I had to work with ChatGPT a bit more to get the narrative right. Once I arrived at a satisfying narrative structure, I asked ChatGPT to synthesize the pattern that had emerged from the conversation. I haven't had a chance to apply this pattern again - I haven't had to record a new demo since - but I can think of several situations where it might be useful.

As I began to rely more on LLMs to identify patterns, I realized that underlying my workflow were implicit choices and trade-offs about how I wanted to think, collaborate, and learn. It was about shaping a reliable approach to product thinking rather than just efficiency. To make my approach clearer and easier to refine, I want to explicitly define these guiding principles.

Three principles guiding my LLM problem-solving process

On design principles

Whether you agree with the Agile Manifesto or not, I think its principles are clear and easy to understand. Each principle chooses one thing over another, and then explains why. I also think that represents how good designers approach a problem: by specifying the constraints they will impose on their work.

Another example: at Holistics, there's a principle called "Simple things should be simple, complex things should be possible," which manifests in how we shape our features. You should be able to generate simple reports (a bar chart) quickly, but the product should also allow you to get more complex analysis done as needed (a cohort retention chart, though I'm sure my ex-colleagues would point out that "It's not that complex!").

You need to specify the trade-offs that you're willing to make. Reflecting on these experiences, I noticed some recurring trade-offs in how I approached product problems and worked with LLMs.

Principle 1: Cognitive Scaffolding over Cognitive Outsourcing

When I use an LLM, I’m not looking for a ready-made solution. What I’m really trying to do is think better.

That might sound obvious, but it’s surprisingly easy to fall into lazy thinking when you’re chatting with a model. The interface is friendly. You type a vague question, it gives you a long, polished response. But if you take that answer at face value - without doing the thinking yourself - it often crumbles under pressure. And as a PM, you can’t afford to pitch ideas you don’t fully understand or believe in (I mean, you can, but you'd be in a world of hurt).

That’s why I treat LLMs as scaffolding, not shortcuts. I bring my thoughts, constraints, and confusions into the conversation. I let the model push on them, organize them, reframe them - but I stay in the driver’s seat.

Pattern language plays a big role here. Instead of asking, “What should I do?”, I ask, “What patterns might apply here?” That small shift keeps the LLM from jumping straight to solutions. Each pattern opens with a Context section, which the LLM can use to pattern-match against the current problem. It slows the process down in a good way - nudging both of us (me and the model) to think more carefully. Sometimes the patterns it suggests are spot on. Sometimes I find better ones myself. Either way, I end up with ideas I actually understand and can defend.

Principle 2: Feedback Loops over One-Time Application

A lot of people treat LLMs like vending machines: you ask a question, get an answer, move on. That's how I worked for a long time. But nowadays, I treat each interaction as part of a larger feedback loop. Every time I solve a problem, I try to pause and ask: What was the move that worked here? What shift helped me see the problem differently?

When I can name it, I turn it into a pattern. That pattern goes into my growing library - something I can reach for again later. And when I do reuse a pattern in a new situation, I often see it more clearly, or tweak it slightly, or combine it with another one. That, in turn, refines the pattern even more. You have to be judicious about extracting patterns; otherwise you'd end up with a bunch of things you'd never use again. But therein lies the beauty, too: because you have to consciously choose which patterns enter your language, you can actively participate in the process. It's almost like gardening.

The result? Over time, I’m not just solving individual problems - I’m building a reusable thinking system. Each problem makes the next one easier to approach. And that loop just keeps getting stronger.

Principle 3: Personal Patterns over Universal Patterns

I love frameworks like JTBD and the Double Diamond. They’re helpful. But the truth is, the most useful ideas I reach for didn’t come from books or courses. They came from moments of frustration, late-night thinking, confusing meetings, scrappy demos - the messy stuff of real product work.

That’s why the pattern language I’m building isn’t meant to be universal. It’s not a generic toolkit. It’s tuned to my problems: naming UI components, crafting demo narratives, writing copy that actually makes sense. It reflects my particular contexts, constraints, and instincts.

And that’s the point. A Growth PM at a FAANG company is going to face a very different set of challenges than I do. Their pattern language will evolve differently - and it should. The patterns I’m collecting have become something of a personal edge. They help me move faster, think more clearly, and explain my decisions with confidence. Not because they’re objectively correct, but because they’re grounded in what I’ve actually lived through.

I’m not trying to create a framework for everyone. I’m building a language for myself. And honestly, that’s what makes it work.

Conclusion

So that’s the process I’ve been using - and it’s been surprisingly helpful. Working with LLMs this way isn’t about getting instant answers. It’s about thinking more clearly, spotting patterns in how we solve problems, and turning those patterns into something we can reuse. Over time, it’s helped me build a personal toolkit that actually reflects the kind of product challenges I face every day.

If any of this resonates, I’d encourage you to try it out. Next time you run into a tricky problem at work, don’t just reach for a framework. Try writing down what’s really going on. Chat with an LLM - not to solve it for you, but to help you think it through. Then ask yourself: Was there a move in there I could name? Something worth reusing?

Start small. One pattern at a time. And if you do give it a shot, I’d love to hear how it goes.