The L in "LLM" Stands for Lying

acko.net

100 points

LorenDB

5 hours ago


43 comments

simianwords 34 minutes ago

What the author and many others find hard to digest is that LLMs are surfacing the reality that most of our work is a small bit of novelty on top of redundant boilerplate code.

Most of what we do in programming is some small novel idea at a high level and repeatable boilerplate at a low level. A fair question is: why hasn't the boilerplate been automated away as libraries or other abstractions? LLMs are especially good at fuzzily abstracting repeatable code, and it's simply not possible to get the same result from other, manual methods.

I empathise, because it is distressing to realise that most of the value we provide is not in those lines of code but in that small innovation at the higher layer. No developer wants to hear that; they would like to think every line is a creation from their soul.

  • eucyclos 6 minutes ago

    I wrote a book a while back where I argued that coding involves choosing what to work on, writing it, and then debugging it, and that we tend to master these steps in reverse chronological order.

    It's weird to look at something that recent and think how dated it reads today. I also wrote about the Turing test as some major milestone of AI development, when in fact the general response to programs passing the Turing test was to shrug and minimize it.

  • silon42 25 minutes ago

    Abstraction isn't free... even if you had the correct abstraction and the tools to remove the parts you don't need for deployment, there is still the cost of understanding and compiling.

    There is also the cost reason: somebody trying to sell an abstraction will try to monetize it, which means not everyone will want to or be able to use it (or it will take forever / stay unfinished if it's open/free).

    There's also the platform lockin/competition aspect...

  • teaearlgraycold 30 minutes ago

    Time to learn design, how to talk to customers, and how to discover unsolved problems. Used right, LLMs should improve your software quality. Make stuff that matters that you can be proud of.

plasticeagle 36 minutes ago

Acko.net remains the best website on the internet.

emsign an hour ago

> It's not a co-pilot, it's just on auto-pilot.

Love it. Calling it "Copilot" in itself is a lie. Marketing speak to sell you an idea that doesn't exist. The idea is that you are still in control.

  • _flux an hour ago

    Well, initially it was a lot less capable. Someone might have described it as auto-complete on steroids.

    Someone might call LLMs that today, except they've stepped up a bit from steroids.

    • emsign 34 minutes ago

      Then MS is conveniently keeping the old name.

raincole 25 minutes ago

> Video games stand out as one market where consumers have pushed back effectively

No, it's simply untrue. Players only object to AI art assets, and only when they're painfully obvious. No one cares about how the code is written.

If you actually read the wording of Steam's AI survey, you'll see that Steam has completely caved on AI-generated code as well. It's specifically worded like this:

> content such as artwork, sound, narrative, localization, etc.

No 'code' or 'programming.'

If game players are the most anti-AI group, then it's crystal clear that LLM coding is inevitable.

  • theshrike79 6 minutes ago

    Also "AI" has been in gaming, especially mobile gaming, for a literal decade already.

    Household name game studios have had custom AI art asset tooling for a long time that can create art quickly, using their specific style.

    AI is a tool and as Steve Jobs said, you can hold it wrong. It's like plastic surgery, you only notice the bad ones and object to them. An expert might detect the better jobs, but the regular folk don't know and for the most part don't care unless someone else tells them to care.

    And then they go around blaming EVERYTHING as AI.

  • trashymctrash 7 minutes ago

    If you read the next couple of paragraphs, the author addresses this:

    > That said, Steam's policy has been recently updated to exclude dev tools used for "efficiency gains", but which are not used to generate content presented to players.

    I only quoted the first paragraph, but there is more.

DavidPiper 39 minutes ago

> This stands in stark contrast to code, which generally doesn't suffer from re-use at all ...

This is an absolute chef-kiss double-entendre.

nurettin 3 minutes ago

Question is: Which L? Or How many Ls?

einr 2 hours ago

This rules. What a good, sensible, sober post.

Copenjin an hour ago

I instantly remembered the page header; I probably last visited this site 10 years ago or something.

est 38 minutes ago

I wouldn't call that forgery, but commission.

btw you can make git commits with the AI as author and you as committer, which makes git blame easier.
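For reference, git already records the author and committer as separate fields; a minimal sketch of what that looks like (the names and the throwaway repo are invented for illustration):

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
# committer identity comes from the local config
git config user.name "Alice"
git config user.email "alice@example.com"
echo hi > file.txt && git add file.txt
# --author overrides only the author field, not the committer
git commit -q --author="Claude <noreply@anthropic.com>" -m "AI-written change"
git log -1 --format='author: %an / committer: %cn'
# prints: author: Claude / committer: Alice
```

Tools like `git log --format='%an'` and `git blame` then let you filter the AI-authored lines from the human-authored ones.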

vladms 29 minutes ago

> Whether something is a forgery is innate in the object and the methods used to produce it. It doesn't matter if nobody else ever sees the forged painting, or if it only hangs in a private home. It's a forgery because it's not authentic.

On a philosophical level I do not get the discussions about paintings. I love a painting for what it is, not for being the first or the only one. An artist who paints something that I can't distinguish from a Van Gogh is a very skillful artist, and the painting is very beautiful. Me labeling it "authentic" or not should not affect its artistic value.

For a piece of code you might care about many things: correctness, maintainability, efficiency, etc. I don't care if someone wrote bad (or good) code by hand or used an LLM; it is still bad (or good) code. Someone, whether LLM or software developer, has to decide if the code fits the requirements, and that will not go away.

> but also a specific geographic origin. There's a good reason for this.

Yes, but the "good reason" is more probably people's desire to have monopolies and avoid change. Same as with the paintings: if the cheese is 99% the same, I don't care whether it was made in a particular region or not. Of course the region is happy because it means more revenue for them, but I'm not sure it is good.

> To stop the machines from lying, they have to cite their sources properly.

I would be curious how this could be applied to a human. Should we also cite all the courses and articles that we have read on a topic when we write code?

  • xg15 26 minutes ago

    > An artist that paints something that I can't distinguish from a Van Gogh is a very skillful artist and the painting is very beautiful.

    There are a lot of such artists who can do that after having seen Van Gogh's paintings before. Only Van Gogh (as far as we know) painted those without having seen anything like them before; in other words, he had a new idea.

    • wonnage 2 minutes ago

      Even the mechanical skill of painting gets a lot harder without an example to look at. Most people can get pretty good at painting from example within a year or two but it’s a big leap to simply paint from memory, much less create something original.

theshrike79 2 hours ago

> This sort of protectionism is also seen in e.g. controlled-appelation foods like artisanal cheese or cured ham. These require not just traditional manufacturing methods and high-quality ingredients from farm to table, but also a specific geographic origin.

Maybe "Artisanal Coding" will be a thing in the future?

  • boxed 42 minutes ago

    This geographic protection is extremely bogus in many cases, if not most cases, which imo undermines his argument.

anilgulecha 44 minutes ago

>If you ask me, no court should have ever rendered a judgement on whether AI output as a category is legal or copyrightable, because none of it is sourced. The judgement simply cannot be made, and AI output should be treated like a forgery unless and until proven otherwise.

Guilty until proven innocent will satisfy the author's LLM-specific point of contention, but it is hardly a good principle.

  • emsign 35 minutes ago

    You are missing the author's point. He literally said no court should have rendered a judgement, which is the exact opposite of guilty until proven innocent. Guilty means a court has made a judgement.

    He is proposing not to make a judgement at all. If the AI company CLAIMS something, they have to prove it, like they do in science. Any claim is treated as just that: a claim. The trick is to not claim anything at all, and let the users come to the conclusion on their own that it's magic. And it's true that LLMs by design cannot cite sources. Thus they cannot, by design, tell you whether they made something up with no regard for it making sense or working, whether they just copied and pasted something that either works or is crap, or whether they somehow created something new that is fantastic.

    All we ever see are the success stories: the success after the n-th try, after tweaking the prompt and learning to handle your agents the right way. The hidden cost is out there, barely hidden.

    This ambiguity benefits the AI companies and they are exploiting it to the maximum, going as far as illegally obtaining pirated intellectual property from an entity that is banned in many countries at one end of their pipeline and selling it as the biggest thing ever at the other end. And yes, all the doomsday stories of AI taking over the world are part of the marketing hype.

kombookcha 2 hours ago

What a wonderful read.

5o1ecist an hour ago

A pointless opinion-piece of low information density, perfect for an echo chamber of equally minded people.

  • azizam 10 minutes ago

    Sounds a lot like this entire website!

GuestFAUniverse 2 hours ago

And "lazy".

Claude makes me mad: even when I ask for small code snippets to be improved, it increasingly starts to comment on "what I could improve" in the code instead of generating the embarrassingly easy code with the improvement itself.

If I point that out with something like "include that yourself", it does a decent job.

That's so _L_azy.

  • gck1 14 minutes ago

    Enforce this with deterministic guardrails. Use the strictest linting config you possibly can, and even have it write custom, domain-specific linters for things that must not happen. Then you won't have to hand-hold it as much.
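    A custom check like that can be tiny. A minimal Python sketch using the stdlib `ast` module (the "no raw SQL outside the db layer" rule and the filenames are invented examples, not from the comment):

```python
import ast

def find_raw_sql(source: str, filename: str) -> list[str]:
    """Flag string literals that look like raw SQL outside the db/ layer."""
    if filename.startswith("db/"):
        return []  # the db layer is the one place allowed to contain SQL
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Constant) and isinstance(node.value, str):
            if node.value.lstrip().upper().startswith(
                ("SELECT ", "INSERT ", "UPDATE ", "DELETE ")
            ):
                violations.append(
                    f"{filename}:{node.lineno}: raw SQL outside the db layer"
                )
    return violations

print(find_raw_sql('q = "SELECT * FROM users"', "app/views.py"))
# → ['app/views.py:1: raw SQL outside the db layer']
```

    Run as a pre-commit hook or CI step, a deterministic check like this rejects the agent's shortcut instead of you having to spot it in review.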

  • emsign an hour ago

    LLMs are cheaters because their goal isn't to produce good code but to please the human.

    • js8 38 minutes ago

      That's a problem with any self-improving tools, not just LLMs. Successful self-improvement leads to efficiency, which is just another name for laziness.

baq 2 hours ago

Lying implies knowing what’s true

  • hsbauauvhabzb 2 hours ago

    Oh sorry my mistake! you’re right I don’t know what’s true.

feverzsj 2 hours ago

More like Lunatic.

  • Mordisquitos 2 hours ago

    It can be both. There are two L's to pick from.

wilg an hour ago

LLMs are pretty cool technology and are useful for programming.

  • emsign an hour ago

    If you check the code afterwards. You do check the code yourself, don't you?

    • malka1986 24 minutes ago

      Hello, I am a single dev using an agent (Claude Code) on a solo project.

      I have accepted that reading 100% of the generated code is not possible.

      I am attempting to find methods that allow clean code to be generated nonetheless.

      I am using an extremely strict DDD architecture. Yes, it is totally overkill for a one-man project.

      Now I only have to be intimate with two parts of the code:

      * the public facade of the modules, which also happens to be the place where authorization is checked.

      * the orchestrators, where multiple modules are tied together.

      If the innards of a module are a little sloppy (code duplication and the like), it is not really an issue, as they do not have an effect at a distance on the rest of the code.

      I have to be on the lookout, though. The agent sometimes tries to break the boundaries between modules, cheating its way through with things like direct SQL queries.
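      A minimal sketch of the facade-plus-orchestrator shape described above (all names are invented, not from the actual project): the facade is the only public entry point and is where authorization lives, and the orchestrator is the only place that ties modules together, so those are the two surfaces worth reading closely.

```python
class User:
    def __init__(self, name: str, permissions: list[str]):
        self.name = name
        self.permissions = set(permissions)

class BillingFacade:
    """Public boundary of the billing module; authorization is checked here."""

    def charge(self, user: User, amount: int) -> str:
        if "billing.charge" not in user.permissions:
            raise PermissionError(f"{user.name} may not charge")
        return self._charge(amount)

    def _charge(self, amount: int) -> str:
        # Internals may be sloppy or duplicated; it stays contained here.
        return f"charged {amount}"

class CheckoutOrchestrator:
    """Ties modules together; the second surface worth close review."""

    def __init__(self, billing: BillingFacade):
        self.billing = billing

    def checkout(self, user: User, amount: int) -> str:
        return self.billing.charge(user, amount)

result = CheckoutOrchestrator(BillingFacade()).checkout(
    User("alice", ["billing.charge"]), 42
)
print(result)  # → charged 42
```

      The agent's "direct SQL query" cheat is exactly an attempt to bypass a facade like `BillingFacade` and reach the internals directly.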

    • wilg 26 minutes ago

      eyeroll

Meneth an hour ago

That's a lie.