LLMs as Context Synthesizers: Why Direct Instructions Don't Work

When I prompt an LLM, I do not simply say, "Do this." Direct instructions work only for basic, well-understood, low-value tasks. No, to get LLMs to work, I must massage the context, putting the pieces of various ideas together. My prompts act as information flows, while the machine acts as a "context synthesizer." This means I keep a close eye on the outputs. When changes are rejected, that "context," that "way of thinking," lingers on in the conversation the model keeps synthesizing from.

But, when I frequently interrupt the synthesis and guide it in a more productive direction, I shape the context. Like water through sand, I form the path I want the context to follow.
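To make that concrete, here is a minimal sketch of the difference in Python. The complete() function is a hypothetical placeholder, not any particular vendor's API; the point is the shape of the two prompts.

```python
def complete(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM client you actually use."""
    raise NotImplementedError

# Direct instruction: fine for basic, well-understood, low-value tasks.
direct = "Write a function that parses dates."

# Context synthesis: an information flow assembled from pieces of ideas.
contextual = """\
Our pipeline stores timestamps as ISO 8601 strings, but a legacy importer
still emits 'DD/MM/YYYY' with an optional time suffix. Downstream code
expects timezone-aware datetimes normalized to UTC.

Given that: write a parser that accepts both formats, rejects ambiguous
input loudly, and returns UTC datetimes.
"""

# The second prompt gives the model a path to follow -- water through sand.
```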

I understand that some people find this a bit high-maintenance. But to me, LLM-assisted work feels like technical writing taken to its fullest extent.

The Art of Technical Communication

When thinking about technical writing, you must put yourself in the shoes of someone who does not know what you know. This is immensely difficult. Like imagining being taller or shorter, you can't truly think the way the non-expert thinks!

LLMs are knowledgeable non-experts; to wield their strengths, you must compensate for what they lack.

Many people think you do not need to understand an underlying technology to wield it. They reach for automobiles as an example: you don't have to be a mechanic to drive a car. But you DO still have to know the layout of the steering wheel, the pedals, the keys, and the behavior of gravity, wheels, brakes...

The Pattern Recognition Paradox

LLMs do not "think," but to say they "just recognize patterns" doesn't seem right either. They do bring in context that will surprise you -- often with poor results.

LLMs have all the "how" and none of the "why." Trying to use a how-machine to answer a why-question will forever be futile.

Understanding the Mathematical Foundation

Let's keep diving in here, shall we? So, how do these LLM things really work? You can look up tokens and embeddings; I don't feel like going into that. I want to talk about the higher-level how, the idea behind the idea.

Words, it turns out, have innate mathematical relationships to one another. Just as you can sample a rock and discover its relative place in geological history, we have learned to sample words, phrases, and sentences, and tease out the relationships in their structure.

This makes sense -- language has structure, but until now we have lacked the mathematical tools to really capture its interrelatedness.
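To see what "relationships in the structure" means in practice, here is a toy sketch with hand-made three-dimensional vectors. Real embeddings are learned from data and span hundreds or thousands of dimensions; the numbers below are illustrative only.

```python
import numpy as np

# Hand-made toy vectors, for illustration only.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.5, 0.9, 0.0]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "rock":  np.array([-0.2, 0.1, 0.0]),
}

def cosine(a, b):
    """Similarity of direction: closer to 1.0 means closer in meaning."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["king"], vectors["queen"]))  # high: related words
print(cosine(vectors["king"], vectors["rock"]))   # low: unrelated words

# Relationships become directions: king - man + woman lands near queen.
target = vectors["king"] - vectors["man"] + vectors["woman"]
print(max(vectors, key=lambda w: cosine(target, vectors[w])))  # "queen"
```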

And here we are! We have trained computers, on the text of the internet, to capture the mathematical relationships between words. That is INSANE when you think about it.

The Geometry of Meaning

So, these mathematical relationships between words can be teased out numerically. I would try to draw you a graph, but beyond about three dimensions, we tend to stop understanding it! This is because "meaning space" works across many more "dimensions" (as in axes on a graph, not as in freaky places with superpowers) than our eyeballs can handle!

So, the computer runs through all the relations between the mathematical representations of your inputs, then outputs from the related, adjacent region of "meaning space."
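Here is a sketch of that lookup, under the assumption that every item of meaning is a point (a vector) in the space. The embeddings below are random placeholders; a real model's are learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "meaning space": 10,000 items in 768 dimensions.
embeddings = rng.normal(size=(10_000, 768))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def adjacent(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Indices of the k points nearest the query in meaning space."""
    query = query / np.linalg.norm(query)
    scores = embeddings @ query  # cosine similarity (all unit length)
    return np.argsort(scores)[::-1][:k]

query = rng.normal(size=768)
print(adjacent(query))  # the neighborhood the output is drawn from
```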

The implications astound. Philosophers like Plato and Kant would have had a FIELD DAY with the idea that language meaning could be mathematically interpreted, and even synthesized, by a machine.

The Context Imperative

But it also means that if you want to use the thing seriously, you should be prepared to provide a lot of context.

This is the fundamental insight that separates effective LLM users from frustrated ones. The machine doesn't understand your intent through brief commands. It synthesizes meaning from the rich contextual information you provide. The more carefully you craft that context, the more precisely the AI can navigate the vast "meaning space" to find outputs that align with your actual needs.
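One way to act on that insight is to treat context as something you assemble deliberately rather than type ad hoc. A minimal sketch, not any particular library's API:

```python
def build_context(role: str, constraints: list[str],
                  examples: list[str], ask: str) -> str:
    """Assemble an information flow from labeled pieces of context."""
    parts = [f"Role: {role}", "Constraints:"]
    parts += [f"- {c}" for c in constraints]
    parts.append("Examples of what good looks like:")
    parts += [f"- {e}" for e in examples]
    parts.append(f"Task: {ask}")
    return "\n".join(parts)

prompt = build_context(
    role="reviewer for a concurrency-heavy Go codebase",
    constraints=["no new dependencies", "keep the public API stable"],
    examples=["prefer table-driven tests", "wrap errors, don't swallow them"],
    ask="review this diff for data races",
)
```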

Working with LLMs requires a shift in mindset from commanding to conducting, from instructing to informing. When you understand this distinction, these powerful tools become genuinely useful partners in complex cognitive work.

