> The amount of cognitive overhead in this deceptively simple log is several levels deep: you have to first stop to type logger.info (or is it logging.info? I use both loguru and logger depending on the codebase and end up always getting the two confused.) Then, the parentheses, the f-string itself, and then the variables in brackets. Now, was it your_variable or your_variable_with_edits from five lines up? And what’s the syntax for accessing a subset of df.head again?
What you're describing is called: programming. This can't be serious. What about the cognitive overhead of writing a for loop? You have to remember what's in the array you're iterating over, how that array interacts with maybe other parts of the code base, and oh man, what about those pesky indices! Does it start at 0 or 1? I can't take it! AI save me!
One of the things I love about computing and computer science is how the wide variety of tools available, built over multiple generations, provides people with the leverage to bring their highly complex ideas to life. However they work best, they can use those tools to keep their minds focused on larger goals and broader context without yak-shaving every hole punched in a punchcard.
You see a person whose conception of programming is different from yours; I see a person who's finding joy in the act of creating computer programs, and who will be able to bring even more of their ideas to life than they would have beforehand. That's something to celebrate, I think.
> who will be able to bring even more of their ideas to life than they would have beforehand.
This is the core part of what's changing - the most important people around me used to be "People who know how".
We're slowly shifting toward "knowing what you want" beating the know-how.
People without any know-how are able to experiment because they know what they want and can keep saying "No, that's not what I want" to a system that will listen to them without complaining while supplying the know-how.
From my perspective, my decades of accumulated know-how feel entirely pointless, wiped away in the last 2 years.
Adapt or fall behind, there's no way to ignore AI and hope it passes by without a ripple.
I agree at least a little bit, but let's be honest: the history of software engineering is a history of higher and higher levels of abstraction wrapping the previous levels.
So part of this is just another abstraction. But another part, which I agree with, is that abstracting away how you learn shit is not good. For me, I use AI in a way that helps me learn more and accomplish more. I deliberately don't cede my thinking process away, and I deliberately try to add more polish and quality, since it helps me do it in less time. I don't feel like my know-how is useless; instead, I'm seeing how valuable it is to know shit when a junior teammate is opening PRs with critical mistakes because they don't know any better (and aren't trying to learn).
Well said!
> What you're describing is called: programming.
Is that the part of programming that you enjoy? Remembering logger vs logging?
For me, I enjoyed the technical challenges, the design, solving customer problems, all of that.
But in the end, focus on the parts you love.
This is a sign that the user hasn't taken the time to set up their tools. You should be able to type log and have it tab complete because your editor should be aware of the context you're in. You don't need a fuzzy problem solver to solve non-fuzzy problems.
> user hasn't taken the time to set up their tools
The user, in fact, has set up a tool for the task: an "AI model". Unless you're saying one tool is better than others?
Then it's a real bad case of using the LLM hammer thinking everything is a nail. If you're truly using transformer inference to auto fill variables when your LSP could do that with orders of magnitude less power usage, 100% success rate (given it's parsed the source tree and knows exactly what variables exist, etc), I'd argue that that tool is better.
Of course LLMs can do a lot more than variable autocomplete. But all of the examples given are things that are removing cognitive overhead that probably won't exist after a little practice doing it yourself.
This. Set up your dev env and pay attention to details and get it right. Introducing probabilistic codegen before doing that is asking for trouble before you even really get started accruing tech debt.
You say "probabilistic" as if it's some kind of gotcha. The binary rigidity is merely an illusion that computers put up. At every layer, there are probabilistic events going on.
- Your hot path functions get optimized, probabilistically
- Your requests to a webserver are probabilistic, and most of the systems have retries built in.
- Heck, 1s and 0s operate in a range, with error bars built in. It isn't really 5V = 1 and 0V = 0.
Just because YOU don't deal with probabilistic events while programming in Rust or Python doesn't mean it is inherently bad. Embrace it.
We're comparing this to an LSP or intellisense type of system; how exactly are those probabilistic? Maybe they crash or leak memory every once in a while, but that's true of any software, including an inference engine. I'm much more worried about the fact that I can't guarantee that if I type in half of a variable name, it'll know exactly what I'm trying to type. It would be like preparing to delete a line in vim and it predicts you want to delete the next three. Even if you do want that 90% of the time, you still have to verify its output. It's nothing like a compiler, spurious network errors, etc. (which still exist even with another layer of LLM on top).
>> Introducing probabilistic codegen ...
> Just because YOU dont deal with probabilistic events while programming in ...
Runtime events such as what you enumerate are unrelated to "probabilistic codegen" the GP references, as "codegen" is short for "code generation" and in this context identifies an implementation activity.
The scheduler that puts your program on a CPU works probabilistically. There are no rigid guarantees of workloads in Linux. Those only exist in real-time operating systems.
> The scheduler that puts your program on a CPU works probabilistically. There are no rigid guarantees of workloads in Linux. Those only exist in real-time operating systems.
Again, the post to which you originally replied was about code generation when authoring solution source code.
This has nothing to do with Linux, Linux process scheduling, RTOS[0], or any other runtime concern, be it operating system or otherwise.
0 - https://en.wikipedia.org/wiki/Real-time_operating_system
> This. Set up your dev env and pay attention to details and get it right. Introducing function declarations before knowing what assembly instructions you need to generate is asking for trouble before you even really get started accruing tech debt.
Old heads cling to their tools and yell at kids walking on lawns, completely unaware that the world already changed right under their noses.
> Then it's a real bad case of using the LLM hammer thinking everything is a nail. If you're truly using transformer inference to auto fill variables when your LSP could do that with orders of magnitude less power usage, 100% success rate (given it's parsed the source tree and knows exactly what variables exist, etc), I'd argue that that tool is better.
I think you're clinging onto low-level thinking, whereas today you have tools at your disposal that allow you to easily focus on higher level details while eliminating the repetitive work required by, say, the shotgun surgery of adding individual log statements to a chain of function calls.
> Of course LLMs can do a lot more than variable autocomplete.
Yes, they can.
Managing log calls is just one of those things. LLMs are a tool that you can use in many, many applications. And they're faster and more efficient than LSPs at accomplishing higher-level tasks such as "add logs to this method/these methods in this class/module". Why would anyone avoid using something that is just there?
Honestly, I've used a fully set up Neovim for the past few years, and I recently tried Zed and its "edit prediction," which predicts what you're going to modify next. I was surprised by how nice that felt — instead of remembering the correct keys to surround a word or line with quotes, I could just type either quotation mark, and the edit prediction would instantly suggest that I could press Tab to jump to the location for the other quote and add it. And not only for surrounding quotes, it worked with everything similar with the same keys and workflow.
Still prefer my neovim, but it really made me realize how much cognitive load all the keyboard shortcuts and other features add, even if they feel like muscle memory at this point.
I have seen people suggesting that it's OK that our codebase doesn't support deterministically auto-adding the import statement of a newly-referenced class "because AI can predict it".
I mean, sure, yes, it can. But drastically less efficiently, and with the possibility of errors. Where the problem is easily soluble, why not pick the solution that's just... right?
In my experience consistency from your tools is really important, and AI models are worse at it than the more traditional solutions to the problem.
I don't want to wade into the debate here, but by "their tools" GP probably meant their existing tools (i.e. before adding a new tool), and by "a fuzzy problem solver" was referring to an "AI model".
I know old timers who think auto-completion is a sign of a lazy programmer. The wheel keeps turning....
> This is a sign that the user hasn't taken the time to set up their tools.
You are commenting on a blog post about how a user set up his tools. It's just that it's not your tool that is being showcased.
> You should be able to type log and have it tab complete because your editor should be aware of the context you're in.
...or, hear me out, you don't have to. Think about it. If you have a tool where you can type "add logs" and it's aware of best practices, context, and your own internal usage... I mean, why are you bothering with typing "log" at all?
> Is that the part of programming that you enjoy? Remembering logger vs logging?
If you're proficient in a programming language then you don't need to remember these things, you just do it, much like spoken language.
This isn't a language thing, it's a project thing. Language things I can do fluently (like the example of a for loop in the OP comment... lol). But I work on so many different projects that it's impossible to keep this kind of dependency context fresh in my head. And I think that's fine? I'm more than happy to delegate that kind of stuff.
I find there is a limit to the number of programming languages I can stay actively proficient in at any given time.
I am using a much wider range of languages now that I have LLM assistance, because I am no longer incentivized to stick to a small number that are warm in my mental cache.
> Is that the part of programming that you enjoy? Remembering logger vs logging?
No, but I genuinely like writing informative logs. I have been in production support roles, and boy does the lack of good logging (or barely any logs at all!) suck. I prefer print-style debugging and want my colleagues on the support side to have the same level of convenience.
Not to mention the advantages of being able to search through past logs for troubleshooting and analysis.
I like building stuff - I mean like construction, renovations. I like figuring out how I need to frame something, what order, what lengths and angles to cut. Obviously I like making something useful, but the mechanics are fun too.
I actually take pride in the logs I write because I write good ones with exactly the necessary context to efficiently isolate and solve problems. I derive a little bit of satisfaction from closing bugs faster than my colleagues who write poor logs.
>> What you're describing is called: programming.
> Is that the part of programming that you enjoy? Remembering logger vs logging?
If a person cannot remember what to use in order to define their desired solution logic (how do I make a log statement again?), then they are unqualified to implement it.
> But in the end, focus on the parts you love.
Speaking only for myself, I love working with people who understand what they are doing when they do it.
It's not unreasonable to briefly forget details like that, especially when you're dealing with a multi-language codebase where "how do I make a log statement?" requires a different pattern in each one.
> It's not unreasonable to briefly forget details like that, especially when you're dealing with a multi-language codebase where "how do I make a log statement?" requires a different pattern in each one.
You make my point for me.
When I wrote:
> ... I love working with people who understand what they are doing when they do it.
This is not a judgement about coworker ability, skill, or integrity. It is instead a desire to work with people who ensure they have a reasonable understanding of what they are about to introduce into a system. This includes coworkers who reach out to team members in order to achieve said understanding.
> What you're describing is called: programming.
And once you have enough experience, you realize that maintaining your focus and managing your cognitive workload are the key levers that affect productivity.
But, it looks like you are still caught up with iterating over arrays, so this realization might still be a few years away for you.
Yeah. This is a really weird complaint to be honest.
By that standard, python is not real programming because you're not managing your own memory. Is python considered AI now?
Yeah, it's also surprising because the user really shouldn't be using f-strings for logging, since they get interpolated even when the log level means the message will never be emitted. This matters most when the user is writing, say, debug logs inside hot loops, which will incur a significant performance penalty by converting lots of data to its string representation.
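A rough sketch of the difference with the stdlib `logging` module (the variable is just a placeholder):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

expensive = list(range(1_000_000))  # stand-in for something costly to stringify

# Eager: the f-string (and str(expensive)) is built even though DEBUG is filtered out.
logger.debug(f"state: {expensive}")

# Lazy: logging only formats the message if a handler will actually emit it.
logger.debug("state: %s", expensive)
```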
But sure, vibe away.
Good news. You can get AI to refactor this sort of stuff away easily.
f-strings for logging are an example of "practicality beats purity".
Yes, f-strings may be evaluated unnecessarily (perhaps t-strings could solve this). But in practice they are just too convenient. Unless a profiler says otherwise, it may be OK to use them in many circumstances.
Heh, I get to totally dunk on this guy by calling him a vibe coder for not using lazily evaluated string interpolation.
Wait a second... if I do ANY ACTUAL engineering and measure the time savings, they're completely negligible and the "optimized" form just makes the code harder to read?
It is complete insanity to me that literally every piece of programming literature over the past sixty years has been drilling in code readability over unnecessary optimization, and yet I still constantly read completely backwards takes like this.
Format strings are very useful, so I'd suggest fixing the language to let you use them. You don't have to live with it interpreting them too early!
Even better, you should be interpreting them at time of reading the log, not when writing it. Makes them a lot smaller.
The thing is, the logging calls already accept variable arguments that do pretty much what people use f-strings in logging calls for, except better. People see f-strings, they like f-strings, and they end up in logs; that's really all there is to it.
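For the record, a minimal sketch of that built-in parameterized form (stdlib `logging`; the names and values are made up):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("checkout")

user_id, item_count = 42, 3  # placeholder values

# The template stays constant and the values are passed separately, so
# formatting is deferred and the record keeps msg/args for downstream handlers.
logger.info("user %s purchased %d items", user_id, item_count)
```

Because the template stays stable, tools that group log lines by message can also do a better job than with a pre-interpolated f-string.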
yep, putting user input into the message to be interpolated is asking for trouble
in C this leads to remote code execution (%n and friends)
in Java (with log4j) this previously led to remote code execution (despite being memory safe)
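In Python the failure mode is milder, but the shape of the mistake is the same; a hedged illustration (the input string is made up):

```python
import logging

logging.basicConfig(level=logging.INFO)

user_input = "progress: 100%d done"  # untrusted text containing a format specifier

# Risky: the untrusted string becomes the %-format template. A stray specifier
# can mangle the message or trigger a "--- Logging error ---" at emit time.
logging.info(user_input, "extra")

# Safer: keep the template fixed and pass the untrusted text as data.
logging.info("received input: %r", user_input)
```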
why am I not surprised the slop generator suggests it
The section you refer to is a justification for code completion and possibly providing (visual) feedback on whether various constructs are spelled or used correctly. It is fairly well established that this sort of thing increases programmer productivity (as in writing a Java program using Notepad vs. writing it using IntelliJ).
In the old days, we used to do this by using static type inference. This is harder to do in dynamic languages (such as Python), so now we try to do it with LLMs.
It is not obvious to me that LLMs are a better solution; you may be able to do more, but you lose the predictability of the classic approach.
Yes but cognitive load is a real thing. Being free to not think about the proper format to log some generic info about the state of the program might seem like a small thing, but remember, that frees up your mind to hold other concerns. See well-trodden research that the human mind can hold roughly three to five meaningful items in working memory at once. When in the flow of programming, you probably have a complicated unconscious process of kicking things out of working memory and re-acquiring them by looking at the code in front of you. I think the author is correctly observing that they are getting the benefit of not having to evict something from their mental cache to remember how logging works for this particular project (especially egregious if you work on 10 codebases and they each use a different logger).
I can totally write the logging code myself, but it's tedious formatting the log messages "nicely". In my experience, AI will write a nice log message and capture the relevant variables automatically, unlike handwritten statements, where I inevitably have to make a second pass to include a critical value I missed.
I think we need to abandon this idea of writing code like a scribe copying a book. There’s a plethora of tools ready to help you take advantage of the facts
- that the code itself is an interlinked structure (LSPs and code navigation),
- that the syntax is simple and repetitive (snippets and generators),
- that you are using a very limited set of symbols (grep, find and replace, contextual docs, completion)
- and that files are a tool for organization (emacs and vim buffers, split layout in other editors)
Your editor should be a canvas for your thinking, not an assembly line workspace where you only type code out.
The author clearly dislikes writing logging code enough to put the work into creating a fine-tuned model for the purpose.
I thought “making tools to automate work” was one of the key uses of a computer but I might be wrong
One thing that has become abundantly clear from the AI craze is how many people - who do programming for a living - really don't like programming. I don't really understand why they got into the field; to be honest, it seems kind of like someone who doesn't like playing the guitar embarking on a career as a guitarist. But regardless of the reasons they seem to be pretty happy for a chance to not have to program any more.
Do you like 'solving problems' or do you like 'getting into the weeds'? Both are valid, and both are common uses of programming.
When I was younger, I loved 'getting into the weeds'. 'Oh, the audio broke? That gives me a great chance to learn more about ALSA!' Now that I'm older, I don't want to learn more about ALSA; I've seen enough. I'm now more in camp 'solving problems': I want the job done and the task successfully finished to a reasonable level of quality, and I don't care which library or data structure was used to get it done. (Both camps obviously overlap; many issues require getting into the weeds.)
In this framework, the promise of AI is great for camp 'solving problems' (yes yes hallucinations etc.), but horrible for camp 'getting into the weeds'. From your framing you sound like you're from camp 'getting into the weeds', and that's fine too. But I can't say camp 'solving problems' doesn't like programming. Lot of carpenters out there who like to build things without caring what their hammer does.
>I'm now more in camp 'solving problems', I want the job done and the task successfully finished to a reasonable level of quality
To split the definitions one step further: that actually sounds not like you 'enjoy solving problems' (the process), but rather like you 'enjoy not having the problem anymore' (the result).
Meaning you don't like programming for itself (anymore?), but merely see it as a useful tool, implying that your life would be no less rich if you had a magical button that completely bypassed the activity.
I don't think someone who would stop doing it if given the chance can be said to "like programming", and certainly not in the way GP means.
I spent a very long time getting into the weeds to learn everything about computer architecture, because at the time it seemed like it was the only way to do it and I wanted to have a career. In the meantime social media / cloud hosting / StackOverflow were invented, it became much easier for people to write online, and it turned out I didn't need to do any of that because the actual authors have all explained themselves on it.
Though, doing this is still the right way to learn how to debug things!
nb I actually just realized I never understood a specific bit of image processing math after working on ffmpeg for years, asked a random AI, and got a perfectly clear explanation of it.
I like solving problems. But I also want the problem to stay solved. And if I happen to see a common pattern between problems, then I build a solution generator.
Maybe because I don’t think in terms of code. I just have this mental image that is abstract, but is consistent. Code is just a tool to materialize it, just like words are a tool to tell a story. By the time I’m typing anything, I’m already fully aware of my goals. Designing and writing are two different activities.
Reading LLM code is jarring because it changes the pattern midway. Like an author smashing the modern world and Middle-earth together. It's like writing an urban fantasy and someone keeps interrupting you with hard science-fiction ideas.
Exactly this. I still "get into the weeds" without AI if I really need to dig into learning something new or if I want to explore some totally new idea (LLMs don't really do "totally new"). If I'm debugging a CRUD app, though... eh, it's sunny outside and I only have a couple more hours of daylight, so, AI it is.
One thing that has become abundantly clear from the AI craze is how many people who do programming for a living are actively hostile to fascinating new applications of computer science that open up entirely new capabilities and ways of working.
I love programming, but 90% of it is crappy toil due to language or tool design. I especially hate about 90% of the stuff one has to do to work around bad design decisions when writing significant amounts of python or javascript.
Disliking toil is not the same as disliking programming.
For me it's the opposite, I know exactly how to write log lines. It's just tedious. AI auto completes pretty much what I would have written.
If it's really that tedious and mechanical, there should be a code-level affordance for it (e.g. a macro, or something like https://github.com/lancewalton/treelog). Code is read more than it's written, and code that can be autocompleted isn't worth reading.
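In Python, for instance, one such affordance might be a small decorator; a sketch with made-up names (not the treelog approach, which is a Scala library):

```python
import functools
import logging

logger = logging.getLogger(__name__)

def logged(func):
    """Log arguments and results so call sites don't need hand-written log lines."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logger.info("calling %s args=%r kwargs=%r", func.__name__, args, kwargs)
        result = func(*args, **kwargs)
        logger.info("%s returned %r", func.__name__, result)
        return result
    return wrapper

@logged
def area(width, height):
    return width * height
```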
Abstraction has always been a part of programming. Are you going to deride someone for not remembering how to write a sorting algorithm from scratch when .sort() is available? This is just another instance of that, trivial as it may be. The next abstraction level for programming is just natural language; if you want to try to gatekeep that, have fun.
Also logging is important! Send structured logs if possible. Make sure structure is consistent between logs. You may have to reach for some abstraction or metaprogramming to do this.
All logs can be a message plus an object, with no need to format anything.
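Roughly what "message plus object" looks like with the stdlib (a sketch; the field names are invented):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("orders")

# A fixed message plus structured fields; the fields ride along on the LogRecord,
# and a JSON formatter or log shipper can index them without any string parsing.
logger.info("order placed", extra={"order_id": 1234, "total_cents": 5600})
```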
That said, AI saves typing time.
You don't use auto-complete for for-loops? Wait... You use a compiled language, rather than writing machine code by hand? Some would call THAT programming.
I like having muscles. I hate lifting weights. I like being fit. I hate running. I like being able to play guitar and piano. I hate practicing. I like having food in my pantry. I hate grocery shopping. I like having custom software that fits my needs. I hate writing code.
But this is using a machine to do the lifting for you, so you don't develop the muscles. You aren't actually made strong by the technology; you're left weak and helpless when on your own.
It is just a bunch of people that don't take pride in self-sufficiency. It is a muscle that has atrophied for them.
I do not (confidently) know how to make fire from "scratch". I do not know how to butcher or skin an animal. I do not know how to spin cloth, nor how to stitch it into clothing. My "finding the perfect kind of stone for knapping into a handaxe" muscles have fully atrophied.
We live in a society. That means giving up self-sufficiency in exchange for bigger leverage in our chosen specialisation. I am 110% confident that when electric power became widespread, people were making the exact same argument against using it as you are making now.
"I'm a chef, I hate cooking, I buy readymade meals in the supermarket."
You're right about the pride of writing actually good code. I think a lot about why I'm still writing software, and while I don't have an answer, it feels like the root cause is that LLMs deprive us of thoughts and decisions; of our humanity, actually.
I have never felt threatened by an LSP or a text editor. But LLMs remove every joy, and their output is bad or may not be what you wanted. If I hated programming, I would just buy software, since my needs aren't so precise that they require perfectly fitted tools.
No need to enjoy a good meal, AI will chew food for you and inject it in your bloodstream. No need to look at nature, AI will take pictures and write a PDF report.
Tools help because they are useful. AI is in the weird position of trying to replace every job, activity, and feeling. I don't know who enjoys that, but it's very strange. Do they think living in a lounge chair, like in the WALL-E spaceship, is good?
As for the article, it's yet another developer not using their tools properly. The free JetBrains code completion is bad, and using f-strings in logs is bad. I would reject that in a merge request, sorry. But thinking too much about it makes me sad about the state of software development, and sad about the pride and motivation of some (if not most) developers nowadays.
Yes, exactly.