7 "AI" Posts

How Large Language Models (LLMs) Read Code: Seeing Patterns Instead of Logic

Developers are accustomed to thinking about code in terms of syntax and semantics, the how and the why. Syntax defines what is legal; semantics defines what it means. A compiler enforces syntax with ruthless precision and interprets semantics through symbol tables and execution logic. But a Large Language Model (LLM) reads code the way a seasoned engineer reads poetry, recognizing rhythm, pattern, and context more than explicit rules.


“When an AI system ‘understands’ code, it is not executing logic; it is modeling probability.”
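To make that claim concrete, here is a minimal sketch of what "modeling probability" looks like in practice: asking a small causal language model for its next-token distribution over a half-written line of code. The Hugging Face transformers library and the generic gpt2 checkpoint are assumptions chosen for illustration, not the specific model discussed in the post.

```python
# A minimal sketch: the model does not run the loop, it assigns
# probabilities to whatever token is likely to come next.
# Assumption: "gpt2" stands in for whatever LLM the post has in mind.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

snippet = "for i in range(10):\n    print("
inputs = tokenizer(snippet, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability distribution over the *next* token, given the code so far.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

The point of the sketch is that nothing in it parses or executes the loop; the model only ranks plausible continuations, which is the sense in which it "reads" code as pattern rather than logic.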


Read more →

From Solow to ChatGPT: Why Total Factor Productivity Can't Keep Up With Generative AI

If ChatGPT can write code, summarize legal briefs, and help draft business strategies in seconds, why doesn’t that show up in our productivity statistics?

Economists have long relied on a metric called Total Factor Productivity (TFP) to measure technological progress. But in an era of free digital tools and generative AI, TFP looks more like a rearview mirror than a windshield. It tells us a lot about the past, but almost nothing about where the economy is headed.
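For context, TFP is conventionally measured as the Solow residual, the slice of output growth that capital and labor cannot account for. The formula below is standard growth accounting, not taken from the post itself:

$$
A = \frac{Y}{K^{\alpha} L^{1-\alpha}}
$$

where $Y$ is measured GDP, $K$ is capital, $L$ is labor, and $\alpha$ is capital's share of income. Because free generative-AI tools add little to measured $Y$, the residual $A$ barely registers them.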


You can see generative AI everywhere but in the productivity statistics.

(Adapted from Robert Solow, 1987)


Read more →