#44 - How I become more thoughtful with AI
Reflections from building and rethinking an Obsidian plugin aimed at helping me think better.

Introduction
A few weeks ago, I proposed a solution to a small but tricky product problem at Holistics. It was a proposal I had reasoned through, refined, and presented with confidence. But it was rejected by our chief architect (not in bad spirit). We eventually landed on a much better and more elegant solution - one that made sense in retrospect.
Yet the experience stayed with me. It wasn’t about being wrong, because that's just part of the learning process. It was about realizing, after the fact, that I had been operating under a set of assumptions I hadn’t fully examined. That kinda bothered me. It also sparked a question that I’ve since been chasing through writing, coding, and experimenting: how can I become more aware of how I think while solving complex problems?
This essay is my exploration of that question. Spoiler: I haven't arrived at THE answer yet, but I did get to more promising places in the end.
This is a walk through the messy middle: trying things that almost work, discovering what doesn’t, and slowly forming a better picture of what kind of tool and mindset might actually help.
Pattern Language as Context for LLM
After the rejection, I did what I often do when I feel the need to unpack a moment: write. Not polished writing. Just raw, stream-of-consciousness notes trying to make sense of what happened: What was the actual problem I was trying to solve? What led me to believe my initial solution made sense? Why did the better idea work?
This kind of reflective writing has long been part of how I work. But this time, I wanted to do more than reflect. I wanted to extract something reusable from that moment of learning - some way of ensuring that next time, I’d see the problem differently, sooner.
That desire led me to pattern language, an idea I first encountered through the work of architect Christopher Alexander (I also wrote a recent blog post in Vietnamese about this topic). Alexander believed that recurring problems in design could be addressed through reusable, named solutions - what he called patterns. These are not frameworks or best practices. They are ways of solving problems that emerge from lived experience - grounded in particular contexts, but at a level of abstraction high enough to apply in many situations.
In my own work, I started thinking: can I build a personal pattern language for solving product problems? Each time I realized something important through reflection, I would try to turn that into a pattern. For example:
- When a solution feels compelling but untested, and the team is rushing toward answers, I remind myself to Ask the Open Question. Instead of proposing a fix, I ask a broad “How might we…” that keeps the conversation open. This is a useful pattern for identifying and addressing hidden assumptions.
- When a technically elegant design feels satisfying but heavy, I apply Prioritize Experience Over Elegance. I ask what version of the experience we can validate without over-investing in clean architecture. As product builders, we sometimes treat the problem as one of achieving technical elegance, which leads to over-engineering.
- When the team is debating whether to scale a system, I recall Build Value Before You Scale It. The metaphor is simple: zero times a hundred is still zero. (Do Things That Don't Scale). We're often influenced by products that have been scaled to thousands or millions of users, while forgetting that their initial MVPs might have been very focused on delivering concrete value to a small target audience.
Ask The Open Question, Prioritize Experience Over Elegance, and Build Value Before You Scale It are patterns in my pattern language. Each pattern consists of a description of its context, problem, forces, solution, and consequences. I feed my stream-of-consciousness writing into an LLM and ask it to extract these patterns from experience. So far I have managed to create about ten different patterns across problems such as making product decisions, building a narrative script for a demo video, building an MVP, and more.
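To make the structure concrete, here is roughly how one of the patterns above can be represented, along with a small helper for turning patterns into prompt context. This is a simplified sketch rather than my exact notes; the field names follow Alexander's template, and the patternsAsContext helper is illustrative.

```typescript
// Each pattern follows Alexander's template: context, problem, forces,
// solution, and consequences.
interface Pattern {
  name: string;
  context: string;
  problem: string;
  forces: string[];
  solution: string;
  consequences: string;
}

// One of the patterns above, roughly as it might live in my vault.
const buildValueBeforeYouScaleIt: Pattern = {
  name: "Build Value Before You Scale It",
  context: "The team is debating whether to scale a system.",
  problem: "Scaling gets prioritized before the product has delivered concrete value.",
  forces: [
    "We are influenced by products that already serve thousands or millions of users.",
    "Those products' initial MVPs focused on delivering value to a small audience.",
  ],
  solution:
    "Validate concrete value for a small target audience first; zero times a hundred is still zero.",
  consequences:
    "Less premature engineering; scaling decisions are grounded in demonstrated value.",
};

// Formatting patterns like this lets them be pasted into a prompt as context.
function patternsAsContext(patterns: Pattern[]): string {
  return patterns
    .map(
      (p) =>
        `## ${p.name}\nContext: ${p.context}\nProblem: ${p.problem}\n` +
        `Forces: ${p.forces.join("; ")}\nSolution: ${p.solution}\n` +
        `Consequences: ${p.consequences}`
    )
    .join("\n\n");
}

console.log(patternsAsContext([buildValueBeforeYouScaleIt]));
```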
When used as context, these patterns helped the LLM give me much better answers. Instead of generic responses, the model began to speak in the language of my own thoughts, helping me modify and extend my thinking with patterns extracted from experience. This has been a huge boost in productivity.
It feels like I've discovered a new way of working: start by writing about the problem at hand in an unfiltered, stream-of-consciousness manner; ask the LLM for relevant patterns that could be applied; go back and forth to refine the answer or take it in a different direction; implement the solution; document the result; observe how it works out; and finally ask the LLM to extract new patterns.
This was exciting. The initial success of applying patterns prompted me to write more, and these writings became inputs for the LLM to extract more patterns in return. It's a positive feedback loop.
AI-Assisted Concept Extraction
As I kept writing, I noticed something else. My biggest mistakes often didn't come from poor execution - the patterns were already helping me figure out ways to think about solutions.
They came from hidden assumptions - worldviews I didn’t know I was operating under. In the rejected proposal, for example, I had assumed that the problem was conceptual in nature. I didn’t even realize I was assuming it. Only after we explored alternative solutions did I understand that this assumption had shaped my entire framing of the problem. It was unnecessary and constrained the solution space significantly.
This realization led me to another idea:
If an LLM could help me identify certain components of my thinking as reflected in my stream-of-consciousness writing, perhaps that would help me become more aware of how I think.
What are these components, exactly? I brainstormed with ChatGPT and came up with a simple conceptual model to help structure how I approach any given problem.

I'm sure there are better conceptual models out there. But these are good enough starting points.
At this point, I decided to build a plugin for Obsidian, my daily thinking and note-taking environment. It is where I write most often, and I didn't want a separate tool that forces me to switch context away from where I usually do my work, so an Obsidian plugin fits the bill. I used Windsurf and Claude Sonnet to assist with the development work. To my surprise, I got it to work within a few hours. The plugin could extract components from my writing using the OpenAI API and label them.
The first version of the plugin was straightforward. I could highlight a piece of text, send it to an AI model via an Obsidian plugin command, and get back a response that tagged different segments as actors, worldviews, facts, or outcomes. These segments were clearly highlighted in the editor. It felt pretty great. This thing I imagined was now real, and it almost worked on the first try.
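Under the hood, that first version boiled down to a single structured prompt. The sketch below shows the general shape of the call, assuming the official openai SDK; the model name, prompt wording, and the labelSegments helper are illustrative rather than the plugin's actual code.

```typescript
import OpenAI from "openai";

// The four component types from the conceptual model.
type ComponentType = "actor" | "worldview" | "fact" | "outcome";

interface LabeledSegment {
  text: string;        // a substring of the selection
  type: ComponentType; // the label the model assigned
}

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Send the highlighted text to the model and get labeled segments back.
async function labelSegments(selection: string): Promise<LabeledSegment[]> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // illustrative; any chat model works
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content:
          "Split the user's text into segments and label each one as an " +
          "actor, worldview, fact, or outcome. Reply as JSON: " +
          '{"segments": [{"text": "...", "type": "..."}]}',
      },
      { role: "user", content: selection },
    ],
  });
  const parsed = JSON.parse(response.choices[0].message.content ?? "{}");
  return (parsed.segments ?? []) as LabeledSegment[];
}
```

The plugin then walks the returned segments and applies a highlight style per type in the editor.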

But as I used it more, the initial excitement slowly faded away. The plugin had two major limitations:
- First, I didn’t know what to do with the extracted components. A segment labeled as “fact” didn’t help me move forward unless I could understand its significance. The label alone had no leverage.
- Second, and more importantly, the AI missed things. Important lines of text were skipped. Subtle shifts in tone or implication went unnoticed. In one case, the model labeled a plain description of how Holistics works as a worldview.
Applying Patterns to Isolated Concepts
I tried to solve the first problem (not knowing what to do with the extracted components) first. Since I already had a pattern language, I thought: what if I could apply those patterns directly to the extracted components?
For example, if the plugin highlighted a worldview like “We need to scale this now,” maybe I could apply Build Value Before You Scale It and generate a new insight that reframes the underlying assumption that things need to be scalable.
Patterns exist as notes in my Obsidian vault, so this would require me to build a retrieval-augmented system: retrieve a pattern from the knowledge base and apply it to an extracted component. The response would show up in a side panel.
Technically, it looks something like this (a rough code sketch follows below):
- First, we convert patterns (notes in Obsidian) into something that can be retrieved later. This involves two steps:
- Send pattern notes to an OpenAI embedding model to get vector embeddings. Vector embeddings are essentially a way of representing text as points in a high-dimensional space; texts that are similar to one another end up with coordinates that are closer together. OpenAI already offers an API for creating embeddings from text, so this part is straightforward.
- Store these embeddings in our database. I chose Supabase as my backend, which provides a hosted Postgres instance (with the pgvector extension for similarity search). We store the vector embeddings along with other metadata in Supabase so that we can retrieve them later.
- After patterns are stored as embeddings in the database, we can proceed to use the plugin as follows:
- Select a piece of text.
- Initiate a plugin command to extract out conceptual components.
- Click on a highlighted segment to see all patterns.
- Click on a pattern to apply it to the highlighted segment (conceptual component).
- The result appears on a separate panel.
A quick glance at what applying patterns to extracted components looks like.
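A minimal sketch of that pipeline, assuming the official openai and @supabase/supabase-js clients, might look like the following. The patterns table and the match_patterns similarity-search function are hypothetical names standing in for whatever schema you set up in Supabase, not the plugin's actual code.

```typescript
import OpenAI from "openai";
import { createClient } from "@supabase/supabase-js";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

// Embed a pattern note and store it so it can be retrieved later.
async function indexPattern(name: string, body: string) {
  const { data } = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: body,
  });
  // A "patterns" table with a pgvector "embedding" column is assumed here.
  await supabase.from("patterns").insert({
    name,
    body,
    embedding: data[0].embedding,
  });
}

// Given an extracted component, find the most similar patterns.
// "match_patterns" is a hypothetical Postgres function doing a pgvector
// similarity search, the kind of RPC the Supabase docs suggest creating.
async function findRelevantPatterns(componentText: string, limit = 3) {
  const { data } = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: componentText,
  });
  const { data: patterns } = await supabase.rpc("match_patterns", {
    query_embedding: data[0].embedding,
    match_count: limit,
  });
  return patterns ?? [];
}

// Apply a chosen pattern to the component via a chat prompt.
async function applyPattern(patternBody: string, componentText: string) {
  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content: "Apply the following pattern to the user's text:\n" + patternBody,
      },
      { role: "user", content: componentText },
    ],
  });
  return response.choices[0].message.content;
}
```

In the flow above, indexPattern corresponds to the first half (embed and store), while findRelevantPatterns and applyPattern back the "click a segment, pick a pattern" steps.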
Technically, it worked. The pieces connected. But once again, it wasn’t useful.
The problem was not with the patterns themselves, because I have been using them for a while, and they have been immensely useful. The problem was fragmentation. Patterns applied to isolated fragments don’t generate insight.
In general, that’s not how thinking works. A worldview is only meaningful in a particular context. You need to know who holds it, what facts support or contradict it, and what outcomes it’s tied to. If you remove AI from the picture, it should become obvious: when someone gives you a random fact or spouts out a worldview, you'd naturally ask for more context in order to make sense of that information.
Even when the AI got it “right,” the result felt shallow. It didn’t provoke new thinking. It didn’t help me reframe the problem. It just generated a shallow application of the selected pattern that didn't really get at the heart of the situation. Getting it to produce something useful required me to supply more context on top of what I'd already written. I could spend hours trying to explain how my product works to the LLM, but that wouldn't really help me make progress on complex problem-solving. If I had to over-explain everything to get the AI to do a decent job, wouldn't that get in the way of thinking better?
Asking the LLM to Generate a Conceptual Graph
I changed my approach a bit: if the LLM cannot make sense of isolated conceptual components, maybe I should first ask it to work out how those components are related - generating some sort of conceptual graph. If the graph is good enough, the LLM can then apply patterns on top of the graph itself rather than on individual components, which should produce better results.
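Concretely, this meant asking the model to return nodes and edges in one shot. The sketch below shows what such a request could look like; the ConceptGraph shape and the prompt wording are my own illustration, not the plugin's exact code.

```typescript
import OpenAI from "openai";

interface ConceptNode {
  id: string;
  type: "actor" | "worldview" | "fact" | "outcome";
  text: string;
}

interface ConceptEdge {
  from: string;  // source node id
  to: string;    // target node id
  label: string; // e.g. "supports", "contradicts", "leads to"
}

interface ConceptGraph {
  nodes: ConceptNode[];
  edges: ConceptEdge[];
}

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Ask the model to extract components AND the relationships between them.
async function extractGraph(text: string): Promise<ConceptGraph> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content:
          "Extract a conceptual graph from the user's text. Return JSON of the form " +
          '{"nodes": [{"id", "type", "text"}], "edges": [{"from", "to", "label"}]}. ' +
          "Node types: actor, worldview, fact, outcome.",
      },
      { role: "user", content: text },
    ],
  });
  return JSON.parse(
    response.choices[0].message.content ?? '{"nodes":[],"edges":[]}'
  );
}
```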

The problem with this idea was that the LLM didn't generate good conceptual graphs. It suffered from the same problem I described earlier:
Second, and more importantly, the AI missed things. Important lines of text were skipped. Subtle shifts in tone or implication went unnoticed. In one case, the model labeled a plain description of how Holistics works as a worldview.
Not only did the LLM miss certain nuances in the text, it also missed important relationships between concepts. At that point, I came to the realization that it's probably not a good idea to rely on an LLM for conceptual extraction. Taking a step back and examining what I'd done so far, I think there are a couple of principles worth discussing.
Tools Should Scaffold Cognition, Not Outsource It
I want to pause here for a moment, because two concepts from learning science lie at the heart of what I'm doing: cognitive scaffolding and cognitive outsourcing.
- Scaffolding means building supports that help you think more deeply. Tools in this category invite you to do the work, but help you hold complexity in view.
- Outsourcing is when a tool does the thinking for you. A calculator. Google Maps. Useful, yes - but often at the cost of weakened long-term skill.
For example, consider the use of a calculator, which is a form of cognitive outsourcing. The calculator does the calculation, while you just input numbers. This frees up mental energy for other things. I often use Google Maps to navigate, which is also a form of cognitive outsourcing, because I don't actually have to think about which way to go.
I'm not bashing cognitive outsourcing; it's useful. After all, I use Google Maps all the time. But I don't know the streets of Saigon nearly as well as my parents, who never had a digital map to rely on. Cognitive outsourcing can lead to skill atrophy over time.
Now consider my knowledge base in Obsidian. That's a form of cognitive scaffolding. Neither Obsidian nor the knowledge base itself does the thinking for me. Whenever I want to reason about something, I reach into Obsidian and look for related notes, so that I can leverage embedded knowledge and contextualize the problem at hand. The process is cognitively demanding, but as a result, my ability to parse new knowledge and to recall what I already know keeps improving.
Or think about how my course business, Breaking into Product Management, works: an intensive 1.5-month program in which students not only learn fundamental knowledge about Product Management but also put it into practice in a final project. This structure serves as cognitive scaffolding, because mentors don't think for the students; we're there to help them apply their own minds. Students do assignments, homework, and final projects, and we provide feedback and intervention whenever necessary. As a result, many of our students describe the experience as highly educational. Cognitive scaffolding can deepen one's skills over time.

Back to our plugin: when I let the AI analyze my thinking and extract conceptual components, I was outsourcing. The tool processed my words but left me uninvolved in the reasoning. Over time, this would not sharpen my awareness. It would dull it. That violated my original goal: to become more aware of how I think while solving complex problems.
Taste and Judgment in the Age of AI
There’s another principle I’ve been thinking about lately - one that’s becoming more important as AI makes it easier and faster to build software.
When anyone can generate working code, spin up a prototype in a few minutes, or build an AI-assisted app over the weekend, the bottleneck shifts. The hard part is no longer building something. The hard part is building something that’s actually useful - something that holds up under serious context of use, that people come back to, that sharpens rather than dulls how they think.
That’s where judgment comes in. You can build a prototype in under an hour, but then you have to judge whether it is useful. Judgment, or taste, isn’t just aesthetics. It’s about recognizing when something feels right. When the interaction flows naturally. When a concept clicks into place. It’s the ability to make good calls about what matters, what’s worth surfacing, and what should stay out of the way. It comes from thousands of hours working on your craft, paying attention to a plethora of details and how they fit together. If you don't hone your taste, it will atrophy.
In this plugin project, I kept returning to the same question: does this interaction respect my judgment, or override it? That’s why I grew uncomfortable with the idea of the AI doing the initial conceptual extraction. Even if I could edit the result afterward, the fact that it started from a place I didn’t fully control made it harder to trust. It treated thinking as an afterthought instead of as the primary focus.
If my tool is meant to help me think, then it needs to treat my judgment, not the model’s output, as the starting point. That’s not just a design preference. It’s a reflection of my belief that craftsmanship matters, especially in the age of AI.
Judgment can be honed. But only if you give it room to operate. Only if your tools make space for it, rather than handing everything over to the LLM.
Back to User-Centered Design in the Age of LLMs
That shift in mindset - toward preserving judgment and structuring thinking - was what led me to rebuild the plugin once again, with a different design philosophy: instead of having the AI do the extraction, I do it myself.
After writing, I re-read my own text and manually identify actors, worldviews, facts, and outcomes. These selections become nodes in a conceptual graph. I can switch views, organize them visually, and draw edges between them to represent relationships. I can modify the edges and also change a concept from one type to another (e.g. from actor to worldview).
This approach does something subtle but powerful: it forces me to decide what matters. It invites me to examine my own assumptions. It gives me just enough structure to scaffold my thinking, without taking the work away from me.
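The underlying data model for this is deliberately small. Here is roughly its shape, along with the handful of operations the plugin needs; the names are illustrative, not the actual implementation.

```typescript
// The four component types I label by hand while re-reading my writing.
type ComponentType = "actor" | "worldview" | "fact" | "outcome";

interface ConceptNode {
  id: string;
  type: ComponentType;
  text: string;                         // the highlighted segment, verbatim
  position?: { x: number; y: number };  // where I placed it in the graph view
}

interface ConceptEdge {
  from: string;   // source node id
  to: string;     // target node id
  label?: string; // optional relationship, e.g. "supports" or "contradicts"
}

interface ConceptGraph {
  nodes: ConceptNode[];
  edges: ConceptEdge[];
}

// Turn a manually selected segment into a node; I decide the type, not the model.
function addNode(graph: ConceptGraph, text: string, type: ComponentType): ConceptNode {
  const node: ConceptNode = { id: crypto.randomUUID(), type, text };
  graph.nodes.push(node);
  return node;
}

// Re-labeling a concept (e.g. from worldview to outcome) is a one-field change.
function changeType(graph: ConceptGraph, id: string, type: ComponentType) {
  const node = graph.nodes.find((n) => n.id === id);
  if (node) node.type = type;
}

// Drawing an edge in the graph view boils down to recording a pair of ids.
function connect(graph: ConceptGraph, from: string, to: string, label?: string) {
  graph.edges.push({ from, to, label });
}
```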
Sure, I could just read the text and mentally evaluate each segment to see whether it is a fact or a worldview, without building any of this. But this design provides me with some useful affordances:
- When I highlight a text segment, a floating menu appears that lets me turn that segment into a conceptual component easily (a rough sketch of this hook follows below). This aligns with how I deal with text all the time: I re-read it and use my cursor to select certain segments that capture my attention. There's a high chance they capture my attention for good reasons. Without this affordance, I might forget to think about them and end up missing important details.
- When I have a set of conceptual components, I can connect them together in the graph view. The graph view adds another perspective on top of the highlighted segments: how they connect to one another. I get a clear visual indication whenever concepts are disconnected, which nudges me towards connecting them and, in the process, clarifies my own thinking even further. How does this connect to that? I marked this as a worldview, but perhaps it's really a desired outcome? These kinds of questions come naturally as you work with the graph view alongside the text editor.
What the current iteration of my tool for thought looks like: I can select a segment in my writing and turn it into a node on a graph, then switch to the graph view, add connections between nodes, and re-organize them spatially.
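For the technically curious, hooking into the editor is the simplest part. The sketch below uses Obsidian's standard editor-menu (right-click) event rather than the floating menu my plugin actually renders, and addNodeToGraph is a hypothetical stand-in for the real persistence logic.

```typescript
import { Editor, Menu, Plugin } from "obsidian";

type ComponentType = "actor" | "worldview" | "fact" | "outcome";

export default class ConceptGraphPlugin extends Plugin {
  async onload() {
    // Add "Mark as ..." entries to the editor's context menu whenever
    // some text is selected, so a segment can be turned into a typed node.
    this.registerEvent(
      this.app.workspace.on("editor-menu", (menu: Menu, editor: Editor) => {
        const selection = editor.getSelection();
        if (!selection) return;

        const types: ComponentType[] = ["actor", "worldview", "fact", "outcome"];
        for (const type of types) {
          menu.addItem((item) =>
            item
              .setTitle(`Mark as ${type}`)
              .onClick(() => this.addNodeToGraph(selection, type))
          );
        }
      })
    );
  }

  // Hypothetical helper: stores the segment as a node in the conceptual graph
  // (persisted however the plugin keeps its graph, e.g. a JSON file in the vault).
  addNodeToGraph(text: string, type: ComponentType) {
    console.log(`New ${type} node:`, text);
  }
}
```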
You may wonder whether this increases friction for me. It does - but it's the good kind of friction. This might sound counter-intuitive to some product folks, because much of the discussion around building products is about eliminating friction. But friction can be good when it represents tensions between ideas or concepts you haven't really worked out yet. Its presence tells you that you don't fully understand something. A frictionless learning experience isn't a learning experience, because learning requires friction. Attempting to resolve these frictions helps internalize knowledge, improves recall, and is ultimately the key to thinking better. It's also an important consideration when designing a tool for thought: you don't want to replace thinking, but to augment it and enable new kinds of thought.
Closing Thoughts: the Messy Middle
Will this new system stand the test of time? I don’t know yet. It'll take sustained use in serious contexts to find out. But it does feel like progress - I started with something vague and gradually iterated towards the thing I was looking for all along. Not quite there yet, but it is progress.
Getting from vagueness to clarity is still pretty much the hardest part of building a product, even in the age of LLMs - or rather, especially in the age of LLMs. Most writing about building products either comes after a product gains traction or speculates before an idea gets implemented. But most of us live in the space between: the foggy, frustrating middle where things sort of work, but not quite. This is what that middle feels like to me.
As Product Managers, we must adapt to the changing world. Now that LLMs help us build things faster, we can and should spend more time in this messy middle, figuring out how our thoughts and conceptions map to reality. I think you'd find that you're wrong much more often than you realize, and that clarity of thought is truly a rare feat.