How Large Language Models (LLMs) Handle Context Windows: The Memory That Isn't Memory

When you have a long conversation with a large language model (LLM) such as ChatGPT or Claude, it feels like the model remembers everything you’ve discussed. It references earlier points, maintains consistent context, and seems to “know” what you talked about pages ago.

But here’s the uncomfortable truth: the model doesn’t remember anything. It’s not storing your conversation in memory the way a database would. Instead, it’s rereading the entire conversation from the beginning every single time you send a message.
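To make that concrete, here is a minimal sketch of a chat loop, with `call_model` as a hypothetical stand-in for any real LLM API. Notice that every turn rebuilds and resends the entire transcript; nothing persists inside the model between calls.

```python
# Minimal chat-loop sketch. `call_model` is a hypothetical placeholder
# for a real LLM API client; the shape of the loop is the point.
history = []

def call_model(prompt: str) -> str:
    return "…model reply…"  # a real API call would go here

def send(user_message: str) -> str:
    history.append(("user", user_message))
    # Rebuild the ENTIRE transcript from the first message, every time.
    prompt = "\n".join(f"{role}: {text}" for role, text in history)
    reply = call_model(prompt)
    history.append(("assistant", reply))
    return reply
```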


“A context window isn’t memory. It’s a performance where the model rereads its lines before every response.”


Read more →

Rethinking the Three-Second Traffic Rule: When Physics Says It’s Not Enough

While researching why car insurance rates are so high in Las Vegas, I started thinking about the three-second rule and whether it actually holds up. As I’ve always heard it, the three-second rule describes how closely you should follow the car ahead in traffic: pick a fixed roadside marker, and you should pass it at least three seconds after the car in front of you does. The rule is simple enough, yet deceptively deep once you unpack the physics.
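To see where the rule starts to strain, here is a rough sketch comparing the gap three seconds buys you against the distance needed for a full stop, the worst case where the car ahead halts almost instantly. The reaction time and braking deceleration are assumed round numbers, not measurements:

```python
# Compare the three-second gap with a rough full-stop distance.
# Assumptions: 1.5 s driver reaction time and ~7 m/s^2 braking
# deceleration (roughly dry pavement). Adjust to taste.
REACTION_TIME = 1.5  # seconds
DECELERATION = 7.0   # m/s^2

def three_second_gap(v: float) -> float:
    """Distance (m) covered in 3 s at constant speed v (m/s)."""
    return 3.0 * v

def stopping_distance(v: float) -> float:
    """Reaction distance plus braking distance: v*t + v^2/(2a)."""
    return v * REACTION_TIME + v**2 / (2 * DECELERATION)

for mph in (30, 55, 70):
    v = mph * 0.44704  # mph -> m/s
    print(f"{mph} mph: 3-second gap {three_second_gap(v):6.1f} m, "
          f"full stop {stopping_distance(v):6.1f} m")
```

Under those assumptions, the three-second gap comfortably covers a full stop at 30 mph but falls short at 55 mph and above.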


“Three seconds is a rule of thumb. Physics reveals the truth.”


Read more →

Modeling Heat Capacity and Evaporation with Python: Why Water Warms Slowly but Cools Fast

Every summer, it feels like a small miracle when the pool finally warms up enough to swim. In Nevada, where the air temperature can sit above 100°F (38°C) for weeks, you’d expect the water to keep pace. Yet, somehow, it takes forever to warm, and only a few cool nights can undo all that progress.

The same phenomenon shows up in a stick of butter. Butter melts quickly, while margarine stays stubbornly firm even under the same heat. That’s not coincidence; it’s thermodynamics.

The butter versus margarine comparison is a staple example in nutrition science. It shows how the proportions of fat, water, and solids affect how much energy it takes to change temperature. Butter, with more fat and less water, heats up and melts quickly. Margarine, full of water and unsaturated oils, absorbs more energy before softening because water’s specific heat is much higher.
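A quick sketch with Q = m·c·ΔT shows the effect. The specific heats are approximate textbook values, and the fat/water splits are illustrative assumptions rather than nutrition-label figures:

```python
# Energy to raise temperature: Q = m * c * dT.
# Specific heats are approximate; compositions below are assumed
# for illustration (real products vary).
C_WATER = 4.186  # J/(g*K)
C_FAT = 2.0      # J/(g*K), rough value for butterfat and oils

def heat_energy(mass_g: float, c: float, delta_t: float) -> float:
    return mass_g * c * delta_t

stick = 113.0  # g, one US stick
dT = 20.0      # K, warming up from the fridge

# Assumed splits: butter ~80% fat / 17% water, margarine ~60% / 37%.
butter = (heat_energy(stick * 0.80, C_FAT, dT)
          + heat_energy(stick * 0.17, C_WATER, dT))
margarine = (heat_energy(stick * 0.60, C_FAT, dT)
             + heat_energy(stick * 0.37, C_WATER, dT))

print(f"butter:    {butter / 1000:.1f} kJ")     # ~5.2 kJ
print(f"margarine: {margarine / 1000:.1f} kJ")  # ~6.2 kJ
```

Under those assumptions, the wetter stick needs roughly a fifth more energy for the same temperature rise: water’s high specific heat at work.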


“A pool in the desert and a stick of margarine in the kitchen both tell the same story: water resists change.”


Read more →

How Large Language Models (LLMs) Learn: Calculus and the Search for Understanding

When you interact with a large language model (LLM) such as ChatGPT or Claude, the model seems to respond almost instantly, no matter how difficult the question. What’s easy to forget is that every word it predicts comes from a long history of learning, in which billions of gradient steps have slowly sculpted its understanding of language.

Large language models don’t memorize text; they optimize over it. Behind that optimization lies calculus. Not the calculus you did with pencil and paper, but a sprawling, automated version that computes millions of derivatives per second.

At its heart, every LLM is a feedback system. It starts with random guesses, measures how wrong they were, and then adjusts itself to be slightly less wrong. That “slightly” is the essence of calculus.
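That guess, measure, adjust loop fits in a few lines. Here is a toy sketch with a made-up quadratic loss; a real model minimizes prediction error over billions of parameters, but the shape of the loop is the same:

```python
# Gradient descent on a toy loss, L(w) = (w - 3)^2.
def loss(w: float) -> float:
    return (w - 3.0) ** 2

def grad(w: float) -> float:
    # dL/dw, derived by hand here; training frameworks automate
    # this step with automatic differentiation at enormous scale.
    return 2.0 * (w - 3.0)

w = 0.0    # start from a (bad) initial guess
lr = 0.1   # learning rate: how "slightly" to adjust each step

for step in range(25):
    w -= lr * grad(w)  # move against the gradient, toward lower loss

print(f"w = {w:.4f}, loss = {loss(w):.6f}")  # w approaches 3
```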


“Each gradient step represents a measurable reduction in error, guiding the model toward a more stable understanding of language.”


Read more →

How Large Language Models (LLMs) Think: Turning Meaning into Math

When you enter a sentence into a large language model (LLM) such as ChatGPT or Claude, the model does not process words as language. It represents them as numbers.

Each word, phrase, and code token becomes a vector — a list of real-valued coordinates within a high-dimensional space. Relationships between meanings are captured not by grammar or logic but by geometry. The closer two vectors lie, the more similar their semantic roles appear to the model.

This is the mathematical foundation of large language models: linear algebra. Matrix multiplication, vector projection, cosine similarity, and normalization define how the model navigates this vast space of meaning. What feels like understanding is actually the alignment of high-dimensional vectors governed by probability and geometry.
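The core operation takes only a few lines. The 4-dimensional vectors below are invented toys; real embeddings run to hundreds or thousands of dimensions:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Angle-based closeness: 1.0 means pointing the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings, made up for illustration only.
king  = np.array([0.9, 0.7, 0.1, 0.3])
queen = np.array([0.8, 0.8, 0.1, 0.4])
apple = np.array([0.1, 0.2, 0.9, 0.7])

print(cosine_similarity(king, queen))  # near 1: similar semantic roles
print(cosine_similarity(king, apple))  # much lower: unrelated meanings
```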


“Linear algebra and geometry do more than support AI; they create its language of meaning.”


Read more →