
> "A picture is worth 10K words - but only those to describe the picture. Hardly any sets of 10K words can be adequately described with pictures."
>
> -- Alan Perlis


***

# The Myth

I am seeing the claim **everywhere online** that *LLMs are a higher level of abstraction*. If you haven’t seen this claim, then you had better stop reading now - this blog post is not for you. [1](https://www.lelanthran.com/chap15/content.html#fn1)

Specifically, I am seeing the claim that LLMs are the next step in the ladder of abstractions we have climbed: from `programming in binary` to `programming in assembly` to `programming in C` to `programming in Python`.

Now, I am told, `programming in LLMs` is the next abstraction. Apparently, the people who do `programming in LLMs` believe that it is a similar, if not identical, move to a higher abstraction than the ones we have seen before.

*This is **wrong**!* Even when the people telling me these things qualify their authority with *“I’ve been programming for 30 years, and now programming is fun again”*, it remains wrong.

But that’s just an opinion, and the counter is *not* an opinion; it’s a fact.

# The Reality

Each move from one layer of the tech stack to a higher one involved a function:

```javascript
f(x) -> y
```

Given a specific `x`, you always get a specific `y` as the artifact being generated.

When `x` is assembly source, a specific input always gives you the same binary result.

When `x` is C source, a specific input always results in the same binary artifact being generated.

When `x` is Python source, a specific input always results in the same bytecode artifact being generated.
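The determinism of the classic stack is easy to demonstrate. A minimal sketch in Python (the source string and the hashing scheme are just for illustration): compiling the same source twice yields byte-for-byte the same artifact.

```python
import hashlib
import marshal

SRC = "def add(a, b):\n    return a + b\n"

def build(source):
    """Compile source to a code object and fingerprint the artifact."""
    code = compile(source, "<src>", "exec")
    return hashlib.sha256(marshal.dumps(code)).hexdigest()

# f(x) -> y: the same x always yields the same y.
print(build(SRC) == build(SRC))  # True
```

Change one character of the source and the fingerprint changes; feed the same source and it never does. That is what "abstraction" has meant at every previous layer.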

With LLMs, the function’s output is not a value; it’s the probability of a value! That is, your input `x` doesn’t result in `y`, it results in the probability of getting `y`.

```javascript
f(x) -> P(y)
```
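To see the difference, here is a toy stand-in for an LLM in Python (the candidate outputs and their weights are invented for illustration): the same input maps not to one artifact but to a draw from a distribution over artifacts.

```python
import random

def llm(prompt, seed):
    # A toy stand-in for an LLM: the output is *sampled*, not computed.
    # The candidate outputs and weights below are made up for illustration.
    rng = random.Random(seed)
    return rng.choices(
        ["A TODO WebApp", "A TODO WebApp plus extras", "Something else"],
        weights=[0.7, 0.2, 0.1],
    )[0]

# Same x, different draws: f(x) is a sample from P(y), not a fixed y.
outputs = {llm("Gimme a TODO webapp", seed) for seed in range(10)}
print(len(outputs) > 1)  # more than one distinct artifact for one input
```

Pinning the seed makes any one draw repeatable, but it does not change the shape of the function: you are still sampling from a distribution, not evaluating a mapping.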

# It Doesn’t End There…

Actually, it’s worse - there is no chance of a no-artifact outcome: the LLM always produces *something*. So the function actually looks like this:

```javascript
f(x) -> P(y) ∪ P(z1) ∪ P(z2) ∪ … ∪ P(zN)
```

Which means, roughly, you have a chance of getting `y` (i.e., the thing you wanted), or a chance of getting some unknown number of other artifacts.

But if you think about it, it’s even worse than that - in reality, with LLMs, you have the chance to get `y` **and** a number of other things you never asked for, so the actual function is:

```javascript
f(x) -> P( y | z1 | z2 | … | zN )
```

IOW, if you run a test on the output looking for `y`, the test can succeed *even though* you did not get **only** `y`: you also got all that other stuff in `z1..zN`.

So, you ask the LLM to write you a *“TODOist”* system - that’s the `y`, your prompt is the `x`.

```javascript
f('Gimme a TODO webapp') -> P( 'A TODO WebApp' | z1 | z2 )
```

You only tested that it gave you a TODO WebApp. Your tests never checked for the existence of z1, which might be “Expose my credentials to the network”, or z2, which might be “Share my hosting with the world via public ftp access”, or z3, which might be… well, you get the idea!

# Self-Awareness

If, in 2026, someone is still making this illogical abstraction claim, send them a link to this post!

If you are making this claim, ask yourself why it matters so much to you.

We need programmers who are self-aware, not programmers who are merely a conduit for injecting AI-generated artifacts into the world.


# Footnotes


  1. Or maybe just keep reading; you will see the claim eventually.↩︎