This article is spot on.
I had stumbled upon Kidlin’s Law—“If you can write down the problem clearly, you’re halfway to solving it”.
This is a powerful guiding principle in today’s AI-driven world. As natural language becomes our primary interface with technology, clearly articulating challenges not only enhances our communication but also maximizes the potential of AI.
The async approach to coding has been most fascinating, too.
I will add, I've been using Repl.it *a lot*, and it takes everything to another level. Getting to focus on problem solving and spend less time futzing with hosting (granted, hosting is easy in the early journey of a product) is an absolute game changer. Sparking joy.
I personally use the analogy of a Mario Kart mushroom or star; that's how I feel using these tools. It's funny though, because when it goes off the rails, it really goes off the rails lol. It's also sometimes necessary to intercept decisions it's about to make... the babysitting can take a toll (because of the speed of execution). Having to deal with one stack was something... now we're dealing with potentially infinite stacks.
Because I can never focus on just one thing, I have a philosophy degree. I’ve worked with product teams and spent lots of time with stakeholders. I’ve written tons of docs because I was the only one on the team who enjoyed it.
I’ve always bemoaned my distractibility as an impediment to deep expertise, but at least it taught me to write well, for all kinds of audiences.
Boy do I feel lucky now.
I have a philosophy degree, have worked in product teams, and have had very similar observations. I could've written this comment!
The challenge is that clearly stating things is, and always has been, the hard part. It's awesome that we have tools which can translate clear natural language instructions into code, but even if we get AGI you'll still have to do that. Maybe you can save some time in the process by not having to fight with code as much, but you're still going to have to create really clear specs, which, again, is the hard part.
Anecdote
Many years ago, in another millennium, before I even went to university, while I was still an apprentice (the German system, in a large factory), I wrote my first professional software, in assembler. I got stuck on a hard part. Fortunately there was another quite intelligent apprentice colleague with me (now a hard-science Ph.D.), and I delegated that task to him.
He still needed an explanation, since he didn't have any of my context, so I bit the bullet and explained the task to him as well as I could. When I was done, I noticed that I had just created exactly the algorithm I needed. After that, I easily wrote it down myself in less than half an hour.
In my experience, only a limited part of software can be built from really clear specs. At times in my career I have worked on things where what was really needed only became clear the more we worked on them, and in those cases really clear specs would have produced worse outcomes.
Which is the real reason agile is so much more effective than waterfall. The beginning of the project is when you know least about your project, so naturally you should be able to evolve the specification.
You are confusing waterfall with BDUF (Big Design Up Front).
Hmm, right. In some ways you could argue that AI-based development goes against agile development practices.
Maybe it's that LLM coding makes it easy to loop back with little regard for development cost. When you can spend an hour fixing what technical debt would have severely hampered late in the process, are we starting to skip optimizing for a proper SDLC?
Generally I find that agile works because getting a really clear spec is so hard. You’re slowly iterating towards a clear spec. What is a finished piece of software if not a completed spec?
100% agree that AI-based dev is at odds with agile. You're basically going to use the AI to fully rewrite the software over and over until the spec becomes clear, which just isn't very efficient. Plus it doesn't help that natural language cannot be as clear a spec as code.
>The challenge is that clearly stating things is and always has been the hard part.
I state things crystal clear in real life and on the internets. Seems like most of the time, nobody has any idea what I'm saying. My direct reports too.
Anyway, my point is, if human confusion and lack of clarity is the training set for these things, what do you expect?
Excellent. That’s what we should be doing, with or without AI. It’s hard, but it’s critical.
I think about this a lot. Early on, as a self-taught engineer, I spent a lot of time simply learning the vernacular of the software engineering world so that I could explain what it was that I wanted to do.
Repl.it is so hit-or-miss for me, and that's what is so frustrating. Like, it can knock out something in minutes that would have taken me an afternoon. That's amazing.
Then other times, I go to create something that is suggested _by them below the prompt box_ and it can't do it properly.
The fact that you think it was suggested _by_ them is, I think, where your mental model is misleading you.
LLMs can be thought of, metaphorically, as a process of decompression. If you can give it a compressed form of your request, as in your first scenario, it'll go great: you're actually doing a lot of mental work to arrive at that 'compressed' request, checking technical feasibility, thinking about interactions, hinting at solutions.
If you feed it back its own suggestion, it's not so guaranteed to work.
I don't think that the suggestions in the prompt box are being automatically generated on the fly for everyone. At least I don't see why they would be. Why not just have some engineers come up with 100 prompts, test them to make sure they work, and then hard-code those?
I would hope the suggestions in the prompt box are not being automatically generated from everyone else's inputs. I know what matters most is not the idea but the execution, but on the off chance you do have a really great and somewhat unique idea, you probably wouldn't want it sent out to everyone who likes to take great ideas and implement them while you yourself are working on it.
Why do that when you can be lazy and get ‘AI’ to do the work?
You're misunderstanding me. Underneath the prompt box on the main page are suggestions of types of apps you can build. These are, presumably, chosen by people at the company. I'm not talking about things suggested within the chat.
I've found LLMs to be a key tool in helping me articulate something clearly. I write down a few half-vague notes, maybe some hard rules, and my overall intent, and ask it to articulate a spec, and then ask for suggestions, feedback, and clarifying questions from a variety of perspectives. This gives me enough material to clarify my actual requirements and then ask for that to be broken down into a task list. All along the way I'm refining both my mental model and my written material to more clearly communicate my intent to both machines and humans.
Increasingly I've also just been YOLOing single-shot throwaway systems to explore the design space - it is easier to refine ideas with partially working systems than with abstract prose alone.