Agents Are Not Enough

arxiv.org

199 points

awaxman11

5 days ago


157 comments

tonetegeatinst 2 days ago

Somewhat related, but here's my take on superintelligence or AGI. I have worked with CNNs, GNNs and other old-school AI methods, but I don't have the resources to build a real SOTA LLM, though I do use and tinker with LLMs occasionally.

If AGI or SI (super intelligence) is possible, and that is an if... I don't think LLMs are going to be the silver bullet solution. Just as the real world has people dedicated to a single task in their field, like lawyers, construction workers, doctors, or brain surgeons, I see the current best path forward as being a "mixture of experts". We know LLMs are pretty good for what I've seen some refer to as NLP problems, where the model input is the tokenized string input. However, I would argue an LLM will never build a trained model like Stockfish or DeepSeek. Certain model types seem to be suited to certain issues/types of problems or inputs. True AGI or SI would stop trying to be a grand master of everything but rather know what best method/model should be applied to a given problem. We still do not know if it is possible to combine the knowledge of different types of neural networks like LLMs, convolutional neural networks, and deep learning... and while it's certainly worth exploring, it is foolish to throw all hope on a single solution approach. I think the first step would be to create a new type of model that, given a problem of any type, knows the best method to solve it. And it doesn't rely on itself but rather on the mixture of agents or experts. And they don't even have to be LLMs. They could be anything.

Where this would really explode is if the AI were able to identify a problem that it can't solve and invent or come up with a new approach, or multiple approaches, because we don't have to be the ones who develop every expert.

  • wkat4242 a day ago

    Totally agree. An LLM won't be an AGI.

    It could be part of an AGI, specifically the human interface part. That's what an LLM is good at. The rest (knowledge oracle, reasoning etc) are just things that kinda work as a side-effect. Other types of AI models are going to be better at that.

    It's just that since the masses found that they can talk to an AI like a human they think that it's got human capabilities too. But it's more like fake it till you make it :) An LLM is a professional bullshitter.

    • Terr_ a day ago

      > It's just that since the masses found that they can talk to an AI like a human

      In a way it's worse: Even the "talking to" part is an illusion, and unfortunately a lot of technical people have trouble remembering it too.

      In truth, the LLM is an idiot-savant which dreams up "fitting" additions to a given document. Some humans have prepared a document which is in the form of a theater-play or a turn-based chat transcript, with a pre-written character that is often described as a helpful robot. Then the humans launch some code that "acts out" any text that looks like it came from that fictional character, and inserts whatever the real-human-user types as dialogue for the document's human-character.
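
      Roughly, and just as a sketch (the real prompt format, stop tokens, and APIs vary by model; `complete` below is a hypothetical placeholder for a raw completion call), the machinery looks something like this:

          # Minimal sketch of the "chat is just document completion" framing.
          # `complete(text)` is a hypothetical placeholder for a raw LLM
          # completion call, not any real API.
          transcript = (
              "What follows is a play between a helpful robot ASSISTANT "
              "and a HUMAN.\n"
          )

          def complete(text: str) -> str:
              raise NotImplementedError("plug a raw completion endpoint in here")

          def chat_turn(user_text: str) -> str:
              global transcript
              # Insert whatever the real user typed as dialogue for the human character.
              transcript += f"HUMAN: {user_text}\nASSISTANT:"
              # The model just makes the document bigger.
              continuation = complete(transcript)
              # The code "acts out" only the text that looks like the robot character's line.
              reply = continuation.split("HUMAN:")[0].strip()
              transcript += f" {reply}\n"
              return reply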

      There's zero reason to believe that the LLM is "recognizing itself" in the story, or that it is choosing to insert itself into one of the characters. It's not having a conversation. It's not interacting with the world. It's just coded to Make Document Bigger Somehow.

      > they think that it's got human capabilities too

      Yeah, we easily confuse the character with the author. If I write an obviously-dumb algorithm which slaps together a story, it's still a dumb algorithm no matter how smart the robot in the story is.

      • jdonaldson 20 hours ago

        Just wanted to point out that the notion of a "document" is also an illusion to the LLM. It's processing a sequence of low dimensional spaces into another sequence of low dimensional spaces. The input spaces preserve aspects of content similarity based on co-occurrence. The model learns to transform these spaces into higher order spaces based on the outcome of training.

        You couldn't say that the model has a singular sense of self, but it has certainly been trained on data that allows it to mimic one in short spurts, and mimicry is what humans do to learn more complex/abstract tasks. The training goal is not to learn how to "be", but rather to learn how to "do" the parts necessary to continue existing.

        "Fake it till you make it" is really all that's required to exist in the world.

        • physicles 13 hours ago

          For humans, the notion of “self” is also an illusion. We’re just atoms dancing to physics. But that’s not the right level of abstraction from which to do psychology.

          Is the notion of an LLM operating on a document useful for building an intuition about how the LLM operates? That’s the more interesting question.

          • jdonaldson 3 hours ago

            I’d say it is, although it’s not really a document but a stream of tokens. The intuition is that the model is able to build abstractions from that stream. Some of the abstractions are related to goal states of the training metric. However, it’s easy to imagine how some of the goal states pertain to the model’s survival in the “wild”, and part of the abstraction that the model captures for this goal is a reflection of itself.

      • skrebbel 20 hours ago

        > In truth, the LLM is an idiot-savant which dreams up "fitting" additions to a given document.

        Tbh I'm not too sure that my brain works fundamentally differently. I'm an idiot-savant who responds to stimuli.

        • vrighter 19 hours ago

          Do you know the number of atoms in my coffee cup right now?

          If you say "No, I don't" then you're already way ahead of any LLM, even the biggest ones. Remember, I asked about how many are in there right now. The only correct answer is "I don't know", because I haven't even told you whether my mug is full or empty.

          Here's copilot's answer:

              how many hydrogen atoms are there in my coffee cup right now
              Copilot said:
          
              That's an interesting question! The number of hydrogen atoms in your coffee cup depends on the volume of coffee and its composition.
          
              Let's assume you have a standard coffee cup that holds about 240 milliliters (8 ounces) of coffee. Coffee is mostly water, so we can approximate the number of hydrogen atoms based on the water content.
          
              Water (H₂O) has 2 hydrogen atoms per molecule. The molar mass of water is approximately 18 grams per mole, and 1 mole of water contains Avogadro's number of molecules, which is about (6.022 \times 10^{23}) molecules.
          
              1 milliliter of water weighs about 1 gram, so 240 milliliters of water weighs 240 grams. This is approximately ( \frac{240}{18} \approx 13.33 ) moles of water.
          
              Therefore, the number of water molecules in 240 milliliters of water is: [ 13.33 \times 6.022 \times 10^{23} \approx 8.02 \times 10^{24} \text{ molecules} ]
          
              Since each water molecule has 2 hydrogen atoms, the total number of hydrogen atoms is: [ 2 \times 8.02 \times 10^{24} \approx 1.60 \times 10^{25} \text{ hydrogen atoms} ]
          
              So, there are approximately (1.60 \times 10^{25}) hydrogen atoms in a standard coffee cup of 240 milliliters. Keep in mind this is a rough estimate, as the actual number can vary based on the exact composition of your coffee.
          
          If I wanted an explanation of how to approximate it, that's what I would have asked for. You could very probably figure that much out.
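
          For what it's worth, the arithmetic in the quoted answer is at least internally consistent - a quick check, assuming the same 240 ml of pure water:

              AVOGADRO = 6.022e23      # molecules per mole
              MOLAR_MASS_WATER = 18.0  # grams per mole

              grams = 240.0                     # 240 ml of water weighs about 240 g
              moles = grams / MOLAR_MASS_WATER  # ~13.33 mol
              molecules = moles * AVOGADRO      # ~8.03e24 water molecules
              hydrogen_atoms = 2 * molecules    # ~1.61e25 hydrogen atoms
              print(f"{hydrogen_atoms:.2e}")    # prints 1.61e+25
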
          • lostmsu 16 hours ago

            No, this is the correct answer. This is the answer I'd be looking for if I was asking that question. That you failed to specify the question is a failure on your part.

            Even stupid models understand that if I ask them the population of Denmark, they only need to give a rough approximation.

    • lugu a day ago

      I am not sure what you mean by LLM when you say they are professional bullshitters. While that was certainly true for models based on transformers just doing inference, recent models have progressed significantly.

      • Terr_ a day ago

        > I am not sure what you mean by LLM when you say they are professional bullshitters.

        Not parent-poster, but an LLM is a tool for extending a document by choosing whatever statistically-seems-right based on other documents, and it does so with no consideration of worldly facts and no modeling of logical propositions or contradictions. (Which also relates to math problems.) If it has been fed on documents with logic puzzles and prior tests, it may give plausible answers, but tweaking the test to avoid the pattern-matching can still reveal that it was a sham.

        The word "bullshit" is appropriate because human bullshitter is someone who picks whatever "seems right" with no particular relation to facts or logical consistency. It just doesn't matter to them. Meanwhile, a "liar" can actually have a harder job, since they must track what is/isn't true and craft a story that is as internally-consistent as possible.

        Adding more parts around an LLM won't change that: Even if you add some external sensors, a calculator, a SAT solver, etc. to create a document with facts in it, once you ask the LLM to make the document bigger, it's going to be bullshitting the additions.

        • Art9681 a day ago

          I think the problem is the way you are phrasing your argument implies the LLM is always wrong. Consider a simple prompt: "Write a hello world in Python."

          Every LLM I've tested gets this correct. In my mind, it can't be both bullshit and correct.

          I would argue that the amount of real bullshit returned from an LLM is correlated to the amount of bullshit you give it. Garbage in, garbage out.

          In the end, it's irrelevant if it's a statistical engine or whatever semantics we want to use (glorified autocomplete). If it solves my problem in less time than I think it would have taken me without it, bullshit isn't the word I would use to describe the outputs.

          In all fairness though, I do get some bullshit responses.

          • lou1306 a day ago

            It only gives you the statistically most likely way a conversation would evolve after one party says "Write a hello world in Python." It just happens to be the correct one.

            If I ask a 5yo "42 * 21 equals...?" and the kid replies with a random number, say, "882", and gets it right, it does not mean that the kid knows what multiplication is or how it works.

            • alphan0n 15 hours ago

              ChatGPT can use a stateful python environment to do math. It isn’t confabulating the answers, it’s using a calculator.

              • mercer 13 hours ago

                I mean that's just confabulating the next token with extra steps... ime it does get those wrong sometimes. I imagine there's an extra internal step to validate the syntax there.

                I'm not arguing for or against anything specifically, I just want to note that in practice I assume that to the LLM it's just a bunch of repeating prompts with the entire convo, and after outputting special 'signifier' tokens, the LLM just suddenly gets a prompt that has the results of the program that was executed in an environment. For all we know, various prompts were involved in setting up that environment too, but I suspect not.

          • Terr_ a day ago

            > In my mind, it can't be both bullshit and correct.

            It's easy for bullshitters to say some true things, but it doesn't change the nature of the process that got the results. Ex:

            ________

            Person A: "The ghost of my dead gerbil whispers unto me the secrets of the universe, and I am hearing that the local volcano will not erupt today."

            Person B: "Bullshit."

            [24 hours later]

            Person A: "See? I was correct! I demand an apology for your unjustified comment."

        • widdershins 20 hours ago

          > it does so with no consideration of worldly facts

          Why don't you consider its training set (usually the entire internet, basically) worldly facts? It's true that the training set can contain contradictory facts, but usually an LLM can recognize these contradictions and provide analysis of the different viewpoints. I don't see how this is much different from what humans can do with documents.

          The difference is that humans can do their own experiments and observations in the real world to verify or dismiss things they read. Providing an LLM with tools can, in a limited way, allow an LLM to do the same.

          Ultimately its knowledge is limited by its training set and the 'external' observations it can make, but this is true of all agents, no?

          • dartos 17 hours ago

            LLMs are trained on data which may contain both truthful and false information.

            But at inference time it’s not referring to that data at all. Some of the data is aliased and encoded in the model’s weights, but we’re not sure exactly what’s encoded.

            It may very well be that vague concepts (like man, woman, animal, unhealthy) are encoded, but not details themselves.

            Further, at inference time, there is no kind of “referencing” step. We’ve just seen that they can sometimes repeat text they were trained on, but sometimes they just don’t.

            The LLM based systems you’re probably using do some RAG work to insert relevant information in the LLM’s context. This context still is not being referred to per se. An LLM might have a document that says the sky is red, but still insist that it’s blue (or vice versa)

            So while the info an LLM may have available is limited by its training data and the RAG system around it, none of that is guaranteed at inference time.

            There’s always a significant chance for the LLM to make up bullshit.

        • klibertp 18 hours ago

          > The word "bullshit" is appropriate because a human bullshitter is someone who picks whatever "seems right" with no particular relation to facts or logical consistency.

          Not quite true - this is true for your random bullshitter, but professional bullshitters do, in fact, care for the impression of logical consistency and do have a grip on basic facts (if only so they can handwave them more effectively). As such, LLMs are definitely not yet pros at bullshitting :)

        • zaroth a day ago

          Tell me you haven’t used the latest models, without telling me you haven’t used the latest models?

          They do hallucinate at times, but you’re missing a lot of real utility by claiming they are basically bullshit engines.

          They can now use tools, and maintain internal consistency over long context windows (with both text and video). They can iterate fully autonomously on software development by building, testing, and bug fixing on real world problems producing usable & functioning code.

          There’s a reason Microsoft is putting $80 billion dollars on the line to run LLMs. It’s not because they are full of shit!

          • enragedcacti a day ago

            Meta put $45 Billion into the Metaverse... so how much virtual real estate do you own?

          • bavell a day ago

            It's true, they're very convincing bullshitters ;)

  • daxfohl a day ago

    There's a _lot_ of smoke and mirrors. Paste a sudoku into ChatGPT and ask it to solve it. Amazing, it does it perfectly! Of course, that's because it ran a sudoku-solving program that it pulled off GitHub.

    Now ask it to solve step by step by pure reasoning. You'll get a really intelligent sounding response that sounds correct, but on closer inspection makes absolutely no sense, every step has ridiculous errors like "we start with options {1, 7} but eliminate 2, leaving only option 3", and then at the end it just throws all that out and says "and therefore ..." and gives you the original answer.

    That tells me there's essentially zero reasoning ability in these things, and anything that looks like reasoning has been largely hand-baked into it. All they do on their own is complete sentences with statistically-likely words. So yeah, as much as people talk about it, I don't see us as being remotely close to AGI at this point. Just don't tell the investors.

    • conception a day ago

      On the other side of the coin, I think people also underestimate how much of human thinking and intelligence is just completing statistically likely words. Most actions, and certainly reactions, that people perform every day involve very little reasoning - instead just following the most used neuron.

      • seadan83 a day ago

        Human vision works this way. To fix the latency problem (the actual event happening vs. the signal being transmitted to your brain), human vision is constantly predicting what you should see, your brain tells you that the prediction is what you saw, and then the brain does reconciliation after the fact. Your brain will scramble for coherency when prediction and reality do not match. This trickery is why it seems like you see events in real time, when there is actually a significant delay between event and perception.

        Though, there are error correction mechanisms, systems for validation, and a coherent underlying model of the world that is used by the brain.

        FWIW, it is likely the most used set of neuron connections, with sets of millions in play and their interconnections being the important part. That subset is one of billions of others, with thousands of connections per neuron - keep in mind it is not the set of neurons firing that matters, but the set of connections firing. The number of possible sets of connections is vastly larger.

        Like, if you have three neurons, your brain can encode 10 data points. Let's call these A, B, C. A single neuron firing and terminating is one (so three, one for each neuron), each edge, e.g. A to B, is another three, each set of two edges, e.g. A to B to C, is three more, and all three edges together are one more. Then keep in mind you have billions of neurons and they are each interconnected by the thousands.
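
        That count works out if you treat each single neuron, plus each non-empty subset of the three connections, as a distinct pattern - a quick enumeration (purely an illustration of the counting, not of real neurons):

            from itertools import combinations

            neurons = ["A", "B", "C"]
            edges = [("A", "B"), ("B", "C"), ("A", "C")]

            # One pattern per single neuron firing on its own...
            patterns = [frozenset([n]) for n in neurons]
            # ...plus one per non-empty set of connections firing together.
            for k in range(1, len(edges) + 1):
                patterns.extend(frozenset(c) for c in combinations(edges, k))

            print(len(patterns))  # 3 single neurons + 7 edge subsets = 10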

        • daxfohl a day ago

          True, and this even happens in a brain the size of a flea's.

          Which makes one wonder, what is it that makes processing and reconciling millions of visual signals per second "easy", but reasoning through a simple sudoku near impossible?

          • stevenwoo a day ago

            Are you sure about fleas? I thought we know this does not happen with frogs as far as we can tell https://courses.csail.mit.edu/6.803/pdf/lettvin.pdf

            • daxfohl 16 hours ago

              Oh. No, not sure.

              • stevenwoo 15 hours ago

                I do not know how many times that frog-type experiment has been repeated on other species or branches of animal life, but that one study upended my preconceived notion of how vision could work with a brain. Most insects have a tiny brain, speculatively capable only of certain automaton-type tasks, though wasps IIRC have very small brains relative to other insects but exhibit social behavior, so who knows.

      • ilbeeper 21 hours ago

        Citation needed. The word reasoning isn't describing everything that the brain does, and "just following the most used neuron" is not even wrong.

  • pton_xd a day ago

    > However, I would argue an LLM will never build a trained model like Stockfish or DeepSeek.

    It doesn't have to, the LLM just needs access to a computer. Then it can write the code for Stockfish and execute it. Or just download it, the same way you or I would.

    > True AGI or SI would stop trying to be a grand master of everything but rather know what best method/model should be applied to a given problem.

    Yep, but I don't see how that relates to LLMs not reaching AGI. They can already write basic Python scripts to answer questions, they just need (vastly) more advanced scripting capabilities.

  • lukeplato a day ago

    I don't see why a mixture of experts couldn't be distilled into a single model and unified latent space

    • energy123 a day ago

      You could, but in many cases you wouldn't want to. You will get superior results with a fixed compute budget by relying on external tool use (where "tool" is defined liberally, and can include smaller narrow neural nets like GraphCast & AlphaGo) rather than stuffing all tools into a monolithic model.

      • daxfohl a day ago

        Isn't that what the original ResNet project disproved? Rather than trying to hand-engineer what the NN should look for, just make it deep enough and give it enough training data, and it'll figure things out on its own, even better than if we told it what to look out for.

        Of course, cost-wise and training time wise, we're probably a long way off from being able to replicate that in a general purpose NN. But in theory, given enough money and time, presumably it's possible, and conceivably would produce better results.

        • energy123 a day ago

          I'm not proposing hand-engineering anything, though. I'm proposing giving the AI tools, like a calculator API, a code interpreter, search, and perhaps a suite of narrow AIs that are superhuman in niche domains. The AI with tool use should outperform a competitor AI that doesn't have access to these tools, all else equal. The reason should be intuitive: the AI with tool use can dedicate more of its compute to the reasoning that is not addressed by the available tools. I don't think my views here are inconsistent with The Bitter Lesson.

    • zaroth a day ago

      Exactly what DeepSeek-V3 is doing.

  • phaedrus a day ago

    But the G in AGI stands for General. I think the hope is that there is some as-yet-undiscovered algorithm for general intelligence. While I agree that deferring to a subsystem that is an expert in that type of problem is the best way to handle problems, I would hope that the central coordinator could not just delegate but also design new subsystems as needed. Otherwise, what happens when you run out of types of expert problem solvers to use (and still haven't solved the problem well)?

    One might argue maybe a mixture of experts is just the best that can be done - and that it's unlikely the AGI be able to design new experts itself. However where do the limited existing expert problem solvers come from? Well - we invented them. Human intelligences. So to argue that an AGI could NOT come up with its own novel expert problem solvers implies there is something ineffable about human general intelligence that can't be replicated by machine intelligence (which I don't agree with).

  • vrighter 19 hours ago

    Once I was high and thought of hallucinations as "noise in the output". From that perspective, and given that LLMs are probabilistic machines, halving the noise would probably require 4x the computation. Which seems to track with what I observe: models are getting MUCH larger, but performance is practically at a standstill.

  • Upvoter33 a day ago

    "If AGI ... is possible"

    I don't get this line of thinking. AGI already exists - it's in our heads!

    So then the question is: is what's in our heads magic, or can we build it? If you think it's magic, fine - no point arguing. But if not, we will build it one day.

    • jerojero a day ago

      The brain is such an intractable web of connections that it has been really difficult to properly make sense of it.

      We can't really talk too much about the differences between the intelligence of a dog and the intelligence of a human in real terms. It seems as though humans might have more connections and different types of cells, but then again, there are species out there that also have types of neurons we don't have and denser regions in areas of the brain than we do.

      And on top of that, dive into a single neuron and you will find a world of complexity. The reason why a neuron might fire or not given a stimulus is an extremely complicated and often stochastic process; that's actually one of the reasons why we use non-linearities in the neural networks we create. But how much nuance are we really capturing?

      The reason we do mathematics the way we do has well-studied neurological patterns; we come out of the box with understandings of the world. And many animals do too - actually, similar neurological patterns are found in different species.

      It's incredible to think of the precision and the complexity of the tasks a fly undertakes during its life, and we have actually mapped the entire brain (if we can call it that; I would) of a fly - every neuron and every connection the fly has. There have been experiments with neural networks where we've tried to imitate these (the brain of a fly has fewer parameters [number of nodes and edges] than modern LLMs), with very interesting results. But can we say we understand them? Not really.

      And finally, I want to bring up something that's not usually considered when it comes to these things: there are a lot of processes at the molecular level in our cells that actually make use of quantum mechanics - there's a whole field of biology dedicated to studying these processes. So yeah, I mean, maybe we can build it, but first we need to understand what's going on and why, I believe.

      • bee_rider a day ago

        What processes in our cells make use of quantum mechanics? (I mean in some sense everything is quantum mechanics, but cells are quite big in a quantum mechanics sense. I’d imagine they are mostly classical).

    • seadan83 a day ago

      Expert beginner problem. If you can count a grain of sand, and measure the distance of one centimeter, then surely you can measure the exact length of a coastline and count the exact number of grains of sand! (The length and number of grains go to infinity as you get more detailed.)

      It is less magic, just insanely complicated. We therefore very well might not build it one day. Your claim that we will solve it one day is not obvious and needs solid evidence. Some cryptographic problems require millions of years of compute to solve; why can't it be the case that AGI requires petayears of compute? A billion-fold increase in compute still won't do it, hence, maybe not ever. 4 billion years and a trillion-fold increase in compute might not be enough. (Assuming we have that long. Dawkins was most concerned about humanity surviving the next 500 years.)

    • trescenzi a day ago

      GI is in our heads. The A is artificial which means built by humans. They are asking the same question you are.

      • 9rx 18 hours ago

        > GI is in our heads. The A is artificial which means built by humans.

        Humans aren’t built by humans? Where do humans come from, then?

        They say the kids aren’t having sex anymore, but I didn’t realize it was because they aren’t aware of the function.

  • nuancebydefault a day ago

    Indeed! That's what I have been thinking for a while, but I never had the occasion and/or breath to write it down, and you explained it concisely. Finally some 'confirmation' 'bias'...

georgestrakhov 2 days ago

IMHO, the word agent is quickly becoming meaningless. The amount of agency that sits with the program vs. the user is something that changes gradually.

So we should think about these things in terms of how much agency we are willing to give away in each case and for what gain[1].

Then the ecosystem question that the paper is trying to solve will actually solve itself, because it is already the case today that in many processes agency has been outsourced almost fully and in others - not at all. I posit that this will continue, just expect a big change of ratios and types of actions.

[1] https://essays.georgestrakhov.com/artificial-agency-ladder/

  • HarHarVeryFunny a day ago

    An agent, or something that has agency, is just something that takes some action, which could be anything from a thermostat regulating the temperature all the way up to an autonomous entity such as an animal going about its business.

    Hugging Face have their own definitions of a few different types of agent/agentic system here:

    https://huggingface.co/docs/smolagents/en/conceptual_guides/...

    As related to LLMs, it seems most people are using "agent" to refer to systems that use LLMs to achieve some goal - maybe a fairly narrow business objective/function that can be accomplished by using one or more LLMs as a tool to accomplish various parts of the task.

    • khafra 21 hours ago

      > An agent, or something that has agency, is just something that takes some action, which could be anything from a thermostat regulating the temperature all the way up to an autonomous entity such as an animal going about its business.

      I have seen "agency" used in a much more specific way than this: An agent is something that has goals expressed as states of a world, and has an internal model of the world, and takes action to fulfill its goals.

      Under this definition, a thermostat is not an agent. A robot vacuum cleaner that follows a list of simple heuristics is also not an agent, but a robot vacuum cleaner with a Simultaneous Location and Mapping algorithm which tries to clean the whole floor with some level of efficiency in its path is an agent.

      I think this is a useful definition. It admits a continuum of agency, just like the huggingface link; but it also allows us to distinguish between a kid on a sled, and a rock rolling downhill.

      https://www.alignmentforum.org/tag/agent-foundations has some justification and further elaboration.

  • w10-1 a day ago

    > IMHO, the word agent is quickly becoming meaningless. The amount of agency that sits with the program vs. the user is something that changes gradually

    Yes, the term is becoming ambiguous, but that's because it's abstracting out the part of AI that is most important and activating: the ability to work both independently and per intention/need.

    Per the paper: "Key characteristics of agents include autonomy, programmability, reactivity, and proactiveness.[...] high degree of autonomy, making decisions and taking actions independently of human intervention."

    Yes, "the ecosystem will evolve," but to understand and anticipate the evolution, one needs a notion of fitness, which is based on agency.

    > So we should think about these things in terms of how much agency we are willing to give away in each case

    It's unclear there can be any "we" deciding. For resource-limited development, the ecosystem will evolve regardless of our preferences or ethics according to economic advantage and capture of value. (Manufacturing went to China against the wishes of most everyone involved.)

    More generally, the value of AI is not just replacing work. It's giving more agency to one person, avoiding the cost and messiness of delegation and coordination. It's gaining the same advantages seen where a smaller team can be much more effective than a larger one.

    Right now people are conflating these autonomy/delegation features with the extension features of AI agents (permitting them to interact with databases or web browsers). The extension vendors will continue to claim agency because it's much more alluring, but the distinction will likely become clear in a year or so.

    • paulryanrogers a day ago

      > Manufacturing went to China against the wishes of most everyone involved

      Certainly those in China and the executive suites of Western countries wished it, and made it happen. Arguably the western markets wanted it too when they saw the prices dropping and offerings growing.

      AI isn't happening in a vacuum. Shareholders and customers are buying it.

  • rcarmo a day ago

    I think people keep conflating agency with agents, and that they are actually two entirely different things in real life. Right now agents have no agency - they do not independently come up with new approaches; they're mostly task-oriented.

ocean_moist a day ago

Maybe I just don’t understand the article but I really have 0 clue how they go about making their conclusions and really don’t understand what they are saying.

I think the 5 issues they provide under "Cognitive Architectures" are severely underspecified to the point where they really don't _mean_ anything. Because the issues are so underspecified I don't know how their proposed solution solves their proposed problems. If I understand it correctly, they just want agents (Assistants/Agents) with user profiles (Sims) on an app store? I'm pretty sure this already exists on the ChatGPT store. (sims==memories/user profiles, agents==tools/plugins, assistants==chat interface)

This whole thing is so broad and full of academic (pejorative) platitudes that it's practically meaningless to me. And of course, although completely unrelated, they throw in a reference to symbolic systems. Academic theater.

  • spiderfarmer a day ago

    This is publishing for the sake of publishing.

    • sambo546 9 hours ago

      The general negativity toward agents makes it read like the problem section of a research proposal ("X isn't good enough, we're going to develop solution Y").

  • antisthenes a day ago

    It's a 4-page paper trying to give a summary of 40+ years of research on AI.

    Of course it's going to be vague and presumptuous. It's more of a high-level executive summary for tech-adjacent folks than an actual research paper.

TaurenHunter 2 days ago

"More Agents is all you need" https://arxiv.org/abs/2402.05120

I could not find an "Agents considered harmful" related to AI, but there is this one: "AgentHarm: A benchmark for measuring harmfulness of LLM agents" https://arxiv.org/pdf/2410.09024

This "Agents considered harmful" is not AI-related: https://www.scribd.com/document/361564026/Math-works-09

  • ksplicer 2 days ago

    When reading Anthropic's blog on agents, my basic takeaway was that their advice is you shouldn't use them to solve most problems.

    https://www.anthropic.com/research/building-effective-agents

    "For many applications, however, optimizing single LLM calls with retrieval and in-context examples is usually enough."

    • retinaros a day ago

      True, this was also my conclusion in October. Most of the complexity we are building is to fight against the limitations of LLMs. If in some way we could embed all our tools in a single call and have the LLM successfully figure out which tools to call, then that would be it and we wouldn't need any of those frameworks or libraries. But it turns out the reality of agents and tool use is pretty stark, and you wouldn't know that looking at the AI influencers spamming X, LinkedIn, and YouTube.

      However, the state of agents has changed slightly: while we had 25% accuracy in multi-turn conversations, we're now at 50%.

  • kridsdale1 2 days ago

    Morpheus taught me they are quite harmful.

  • dist-epoch a day ago

    Real agents have never been tried

bob1029 5 days ago

I think the goldilocks path is to make the user the agent and use the LLM simply as their UI/UX for working with the system. Human (domain expert) in the loop gives you a reasonable chance of recovering from hallucinations before they spiral entirely out of control.

"LLM as UI" seems to be something hanging pretty low on the tree of opportunity. Why spent months struggling with complex admin dashboard layouts and web frameworks when you could wire the underlying CRUD methods directly into LLM prompt callbacks? You could hypothetically make the LLM the exclusive interface for managing your next SaaS product. There are ways to make this just as robust and secure as an old school form punching application.

  • barrkel 2 days ago

    It's quite tedious to have to write (or even say) full sentences to express intent. Imagine driving a car with a voice interface, including accelerator, brake, indicators and so on. Controls are less verbose and dashboards are more information rich than linear text.

    It's difficult to be precise. Often it's easier to gauge things by looking at them while giving motor feedback (e.g. turning a dial, pushing a slider) than to say "a little more X" or "a bit less Y".

    Language is poorly suited to expressing things in continuous domains, especially when you don't have relevant numbers that you can pick out of your head - size, weight, color etc. Quality-price ratio is a particularly tough one - a hard numeric quantity traded off against something subjective.

    Most people can't specify up front what they want. They don't know what they want until they know what's possible, what other people have done, started to realize what getting what they want will entail, and then changed what they want. It's why we have iterative development instead of waterfall.

    LLMs are a good start and a tool we can integrate into systems. They're a long, long way short of what we need.

  • GiorgioG 2 days ago

    re: LLM as UI: Given that I don't trust LLMs to be deterministic, I wouldn't trust them to make the correct API call every time I tell it to do X.

    • kgeist 2 days ago

      I think most users have a fixed set of workflows which usually don't change from day to day, so why not just use LLMs as a macro builder with a natural language interface (and which doesn't require you to know the product's UI well beforehand):

      - you ask LLM to build a workflow for your problem

      - the LLM builds the workflow (macro) using predefined commands

      - you review the workflow (can be an intuitive list of commands, understandable by non-specialist) - to weed out hallucinations and misunderstanding

      - you save the workflow and can use it without any LLM agents, just by clicking a button - pretty deterministic and reliable (see the sketch below)

      Advantages:

      - reliable, deterministic

      - you don't need to learn a product's UI, you just formulate your problem using natural language
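
      A rough sketch of that flow (names are made up; the key point is that the LLM is only consulted once, at workflow-construction time, and `propose_workflow` is a stand-in for that call):

          # Rough sketch: `propose_workflow` is a stand-in for a one-time LLM call
          # whose output is restricted to names of predefined commands.
          COMMANDS = {
              "export_report": lambda ctx: ctx.setdefault("report", "monthly.csv"),
              "email_report": lambda ctx: print(f"emailing {ctx['report']}"),
              "archive_report": lambda ctx: print(f"archiving {ctx['report']}"),
          }

          def propose_workflow(problem: str) -> list[str]:
              """Hypothetical LLM step: natural-language problem -> command names."""
              raise NotImplementedError("restrict output to COMMANDS keys")

          def review(steps: list[str]) -> bool:
              # Human-readable list of steps so the user can weed out misunderstandings.
              print("Proposed workflow:", " -> ".join(steps))
              return input("Save this workflow? [y/n] ") == "y"

          def run_saved_workflow(steps: list[str]) -> None:
              ctx: dict = {}
              for step in steps:  # no LLM here: deterministic replay at the click of a button
                  COMMANDS[step](ctx)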

      • bob1029 2 days ago

        > you review the workflow (can be an intuitive list of commands, understandable by non-specialist) - to weed out hallucinations and misunderstanding

        This is the idea that is most valuable from my perspective of having tried to extract accurate requirements from the customer. Getting them to learn your product UI and capabilities is an uphill battle if you are in one of the cursed boring domains (banking, insurance, healthcare, etc.).

        Even if the customer doesn't get the LLM-defined path to provide their desired final result, you still have their entire conversation history available to review. This seems more likely to succeed in practice than hoping the customer provides accurate requirements up-front in some unconstrained email context.

      • shekhargulati a day ago

        This is the same approach we took when we added LLM capability to a low-code tool, Appian. The LLM helped us generate the Appian workflow configuration file; the user reviews it, makes changes if required, and then finally publishes it.

      • namaria 21 hours ago

        You're just describing programming with the extra step of going through a high entropy and low bandwidth channel of natural language and hand waving that problem away.

        We can "just" write code as well, as we have been doing for several decades.

      • nyrikki 2 days ago

        So visual programming x.0?

        I am pretty sure PLCs with ladder logic are about the limits of the traditional visual/macro model?

        Word-sense disambiguation is going to be problematic with the 'don't need to learn' part above.

        Consider this sentence:

        'I never said she stole my money'

        Now read that sentence multiple times, putting emphasis on each word, one at a time, and notice how the semantic meaning changes.

        LLMs are great at NLP, but we still don't have solutions to those NLU problems that I am aware of.

        I think to keep maximum generality without severely restricted use cases that a common DSL would need to be developed.

        There will have to be tradeoffs made, specific to particular use cases, even if it is better than Alexa.

        But I am thinking about Rice's theorem and what happens when you lose PEM.

        Maybe I just am too embedded in an area where these problems are a large part of the difficulty for macro style logic to provide much use.

      • dingnuts 2 days ago

        >- you review the workflow (can be an intuitive list of commands, understandable by non-specialist)

        so you define a DSL that the LLM outputs, and that's the real UI

        >- you don't need to learn a product's UI, you just formulate your problem using natural language

        yes, you do. You have to learn the DSL you just manifested so that you can check it for errors. Once you have the ability to review the LLM's output, you will also have the ability to just write the DSL to get the desired behavior, at which point that will be faster unless it's a significant amount of typing, and even then, you will still need to review the code generated by the LLM, which means you have to learn and understand the DSL. I would much rather learn a GUI than a DSL.

        You haven't removed the UI, nor have you made the LLM the UI, in this example. The DSL ("intuitive list of commands.. I guess it'll look like the Robot Framework right? that's what human-readable DSLs tend to look like in practice) is the actual UI.

        This is vastly more complicated than having a GUI to perform an action.

        • kgeist 2 days ago

          I never said the user must be exposed to a DSL; I think you're overcomplicating it for the sake of overcomplicating. A DSL can be used under the hood by the execution engine, but the user can be exposed to a simpler variant of it, either by clever hardcoded postprocessing of known commands when rendering the final result for human review, or maybe by using the LLM itself to summarize the planned actions (it can hallucinate while summarizing, but the chance is minuscule, especially if a user can test a saved workflow). My point was mostly about two things:

          1) "it's unpredictable each time" - it won't be, if a workflow is saved and tested, because when it's run, no LLM is involved anymore in decision making

          2) I did remove the UI, because I don't need to learn the UI, I just formulate my problem and the LLM constructs a possible workflow which solves my problem out of predefined commands known to the system.

          Sure this is most useful for more complex apps. In our homegrown CRM/ERP, users have lots of different workflows depending on their department, and they often experiment with workflows, and today they either have to click through everything manually (wasting time) or ask devs to implement the needed workflow for them (wasting time). If your app has 3 commands on 1 page then sure, it's easier to do it using GUI.

          Also IMHO it can be used alongside with GUI, it doesn't need to replace it, I think it's great for discoverability/onboarding and automation, but if you want to click through everything manually, why not.

          • svieira 2 days ago

            The bit you are missing is that "known to the system" is not enough, as the consumer I need to _verify the logic_, which means that at some level, I do have to read the DSL (just as I have to read the Java, not, in general, the actual assembly emitted by the JIT). Which means that the DSL is actually the product here (though the LLM may make it easier to learn that DSL and in some cases to write something in it).

            • kgeist 2 days ago

              1) You don't need to read the DSL in the raw form if you use a language model to convert it to a few paragraphs in natural language.

              2) You can test the created workflow on a bunch of test data to verify it works as intended. After a workflow is created, it's deterministic (since we don't use LLMs anymore for decision making), so it will always work the same.

              Sure we can expose DSL to power users as an option, but is reading the raw DSL really required for the majority of cases?

              • vrighter 18 hours ago

                Natural language isn't precise enough to describe exactly what's happening. If you do try to use natural language for that purpose, trying to eliminate ambiguity, you end up with legalese. And people can't read legalese, even though it's technically "plain English".

              • svieira 2 days ago

                1. Now you have two problems (did the writer translate what I said correctly and did the summarizer translate what the writer wrote correctly).

                2. This is absolutely true and it does help somewhat. However, writing the test cases is now your bottleneck (and you're writing them as a substitute for being able to read a reliable high-level summary of what the workflow actually is).

    • hitchstory 2 days ago

      I don't either, but this can be mitigated by adding guard rails (strictly validating input), double-checking actions with the user, and using it for tasks where a mistake isn't world-ending.

      Even then mistakes can slip through, but it could still be more reliable than a visual UI.

      There are lots of horrible web UIs I would LOVE to replace with a conversational LLM agent. No. 1 is Jira, and so are Nos. 2 and 3.

    • deadbabe 2 days ago

      They are deterministic at 0 temperature

      • lokhura 2 days ago

        At zero temp there is still non-determinism due to sampling and the fact that floating point addition is not associative, so you will get varying results due to parallelism.
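
        A tiny illustration of the floating point part - with parallel reductions the grouping of additions changes, and grouping changes the result:

            a, b, c = 1e16, -1e16, 1.0
            print((a + b) + c)  # 1.0
            print(a + (b + c))  # 0.0 - same numbers, different grouping, different result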

      • BalinKing 2 days ago

        (Disclaimer: I know literally nothing about LLMs.) Wouldn't there still be issues of sensitivity, though? Like, wouldn't you still have to ensure that the wording of your commands stays exactly the same every time? And with models that take less discrete data (e.g. ChatGPT's new "advanced voice model" that works on audio directly), this seems even harder.

        • BalinKing a day ago

          s/advanced voice model/advanced voice mode/ (too late for me to edit my original comment)

      • wkat4242 a day ago

        They are pretty deterministic then but they are also pretty useless at 0 temperature.

      • ukuina a day ago

        Not for the leading LLMs from OpenAI and Anthropic.

      • vrighter 18 hours ago

        Not really, not in practice. The order of execution is non-deterministic when running on a cluster, a GPU, or more than one CPU core, and rounding errors propagate differently on each run.

  • pwillia7 2 days ago

    I had the same epiphany about LLM as UI while trying to build a front end for an image-enhancer workflow I built with Stable Diffusion. I just about fully built out a Chrome extension and then realized I should just build a 'tool' that Llama can interact with and use Open WebUI as the front end.

    quick demo: https://youtu.be/2zvbvoRCmrE

  • diggan 2 days ago

    > I think the goldilocks path is to make the user the agent and use the LLM simply as their UI/UX for working with the system

    That's a funny definition to me, because doing so would mean the LLM is the agent, if you use the classic definition for "user-agent" (as in what browsers are). You're basically inverting that meaning :)

  • klabb3 2 days ago

    > "LLM as UI" seems to be something hanging pretty low on the tree of opportunity.

    Yes if you want to annoy your users and deliberately put roadblocks to make progress on a task. Exhibit A: customer support. They put the LLM in between to waste your time. It’s not even a secret.

    > Why spent months struggling with complex admin dashboard layouts

    You can throw something together, and even auto generate forms based on an API spec. People don’t do this too often because the UX is insufficient even for many internal/domain expert support applications. But you could and it would be deterministic, unlike an LLM. If the API surface is simple, you can make it manually with html & css quickly.

    Overuse of web frameworks has completely different causes than ”I need a functional thing” and thus it cannot be solved with a different layer of tech like LLMs, NFTs or big data.

    • wkat4242 a day ago

      > Yes if you want to annoy your users and deliberately put roadblocks to make progress on a task. Exhibit A: customer support. They put the LLM in between to waste your time. It’s not even a secret.

      No this is because they use the LLM not only as human interface but also as a reasoning engine for troubleshooting. And give it way less capability than a human agent to boot. So all it can really do is serve FAQs and route to real support.

      In this case the fault is not with the LLM but with the people that put it there.

danielmarkbruce 2 days ago

Why post this paper? It says nothing, it's a waste of people's time to read.

  • duxup 2 days ago

    Even just the definition of an Agent (maybe imperfect) made it worthwhile for me.

    • danielmarkbruce 2 days ago

      I'm not sure it's even good though... the input doesn't need to come from a user. I have an "agent" which listens for an event in financial markets and then goes and does some stuff.

      In practice the current usage of "agent" is just: a program which does a task and uses an LLM somewhere to help make a decision as to what to do and maybe uses an LLM to help do it.

pwillia7 2 days ago

How would the SIMS that contain the user prefs and whatnot not have the same issues described in the paper as the agents themselves?

nowittyusername 2 days ago

With time, they will get a lot better. IMO, the biggest hurdle for current agents is the lack of a good implementation of function-calling capabilities. LLMs should be used as reasoning engines, and everything else should be offloaded to tool use. This will drastically reduce hallucinations and errors in math and all the other areas.

  • lionkor a day ago

    Do they reason, though?

ripped_britches 2 days ago

I can imagine really powerful agents this year or next, in theory. Agents meaning not a thermostat, but a system that can go complete some async tasks on your behalf. But in practice I don't have any idea how we will solve for prompt injection attacks. Hopefully someone cracks it.

  • Jerrrry a day ago

    > solve for prompt injection attacks

    It is essentially the same Code as Data problem as always.

  • cratermoon 2 days ago

    "AI will soon be able too..."

rcarmo a day ago

This reads a lot like agents wearing different kinds of trenchcoats (or underwear). Can't really see any evidence this separation works.

coro_1 2 days ago

The paper covers the technical details and logistics of the AI agents to come. But how are humans going to react to AI agents replacing human emotion and connection at scale? Tech culture's bias is to focus only on the agents themselves, but this could become an issue.

authorfly 21 hours ago

Does anyone else get the sense that the definition has been bastardized by the conflation of the two concurrent previous uses of "agent"?

i.e. in AI, biology, and informatics, "Agent" typically meant something that had a form/self/embodiment; that could sense the environment and react to those perceptions; and that could possibly learn, adapt, or change to various degrees of complexity, which would optionally entail being an "intelligent system".

Meanwhile in common parlance, Agent meant: Someone who acts or behaves on behalf of another adaptively to accomplish something with some degree of freedom.

And this might explain why some people say agent/agentic necessarily refers to "tool use" or "being able to overcome problems on the happy path" or "something capable of performing actions in an infinite loop while reacting" (the latter two, in my opinion, conflate the meaning of "Intelligent system" or "Intelligent behavior"). Meanwhile, biologists might still refer to a seemingly inert single cell, or a group of bacteria in a colony, as an Agent (a more behaviouralist/chemical "look-deep-down" perspective).

I think a lot of the disappointment is that biologists/OG AI enthusiasts are looking for something truly adaptive, sensing, able to behave, "live" indefinitely, acquire or set goals, and which, if intelligent, might be able to work with other agents to accomplish things (e.g. a "society"). Meanwhile, people who just want an "AI HR Agent" just want something that can communicate, interview, discern good applicants, and book the interviews plus provide summary notes. These two things are very different. But both could use tools etc. (the key difference from ChatGPT that is enabling this new concept to be more useful than ChatGPT, alongside various forms of short-term memory rather than "fresh-every-time" conversations).

syntex a day ago

Why does this have so many upvotes? Is this the current state of research nowadays?

asciii a day ago

Diabolical - I love it. Impressed that the final score came up as an alert!

jokethrowaway 2 days ago

I don't get the hype about Agents.

It's just calling an LLM n times with slightly different prompts

Sure, you get the ability to correct previous mistakes - it's basically a custom chain of thought - but errors compound and the results coming from agents have a pretty low success rate.

Bruteforcing your way out of problems can work sometimes (as evidenced by the latest o3 benchmarks), but it's expensive and rarely viable for production use.

  • mindcrime a day ago

    > It's just calling an LLM n times with slightly different prompts

    That's one way of building something you could call an "agent". It's far from the only way. It's certainly possible to build agents where the LLM plays a very small role, or even one that uses no LLM at all.

  • grahamj 2 days ago

    > It's just calling an LLM n times with slightly different prompts

    It can be, but ideally each agent’s model, prompts and tools are tailored to a particular knowledge domain. That way tasks can be broken down into subtasks which are classified and passed to the agents best suited to them.

    Agree RE it being bruteforce and expensive but it does look like it can improve some aspects of LLM use.

    • retinaros a day ago

      That is just like having a for loop per domain.

j45 2 days ago

Math that can't be too warm and too accurate to work may have challenges being too accurate and reliable with repeating processes.

simonw 2 days ago

This paper does at least lead with its version of what "agents" means (I get very frustrated when people talk about agents without clarifying which of the many potential definitions they are using):

> An agent, in the context of AI, is an autonomous entity or program that takes preferences, instructions, or other forms of inputs from a user to accomplish specific tasks on their behalf. Agents can range from simple systems, such as thermostats that adjust ambient temperature based on sensor readings, to complex systems, such as autonomous vehicles navigating through traffic.

This appears to be the broadest possible definition, encompassing thermostats all the way through to Waymos.

  • adpirz 2 days ago

    You posted on X a while back asking for a crowdsourced definition of what an "agent" was and I regularly cite that thread as an example of the fact that this word is so blurry right now.

    • simonw 2 days ago

      I really need to write that up in one place - closest I've got is this section from my 2024 review https://simonwillison.net/2024/Dec/31/llms-in-2024/#-agents-...

      • DebtDeflation a day ago

        >"The two main categories I see are people who think AI agents are obviously things that go and act on your behalf—the travel agent model—and people who think in terms of LLMs that have been given access to tools which they can run in a loop as part of solving a problem."

        This is exactly the problem and these two categories nicely sum up the source of the confusion.

        I consider myself in the former camp. The AI needs to determine my intent (book a flight), which is a classification problem; extract the relevant information (travel date, return date, origin city, destination city, preferred airline), which is a named entity recognition problem; and then call the appropriate API with that information as the parameters (tool usage). I'm asking the agent to perform an action on my behalf, and it takes my natural language and goes from there. The overall workflow is deterministic, but there are elements within it that require some probabilistic reasoning.
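
        A rough sketch of that workflow (classify_intent, extract_entities and book_flight_api are placeholders for illustration, not a real library):

          def classify_intent(utterance: str) -> str:
              """Probabilistic step: e.g. returns 'book_flight'. Placeholder."""
              raise NotImplementedError

          def extract_entities(utterance: str) -> dict:
              """Probabilistic step (NER): dates, cities, airline. Placeholder."""
              raise NotImplementedError

          def book_flight_api(**params) -> str:
              """Hypothetical booking API; the deterministic tool being called."""
              raise NotImplementedError

          def handle(utterance: str) -> str:
              intent = classify_intent(utterance)   # classification problem
              slots = extract_entities(utterance)   # named entity recognition problem
              # Deterministic part: the code, not the model, decides which API to call.
              if intent == "book_flight":
                  return book_flight_api(
                      origin=slots["origin"],
                      destination=slots["destination"],
                      depart=slots["travel_date"],
                      ret=slots.get("return_date"),
                      airline=slots.get("preferred_airline"),
                  )
              raise ValueError(f"unsupported intent: {intent}")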

        Unfortunately, the second camp seems to be winning the day, creating unrealistic expectations of what can be accomplished by current-day LLMs running in a loop while simultaneously providing only toy examples of it.

      • ethbr1 a day ago

        The language problem around agents is that most companies are describing them solely from a human/UX perspective.

        'You ask it to do something, and it does it'

        That makes it difficult to differentiate the more critical 'how' options in the execution process. From that perspective: deterministic integrations, LLM+tools, LAM, etc are more descriptive categories, each with their own capabilities, strengths, and weaknesses.

        Or to put it a different way, if the term doesn't tell you what something is good and bad at, it's probably an underspecified term.

      • adpirz 2 days ago

        This write up was also fantastic and has made the rounds at our org!

      • jvans 2 days ago

        this is a great write up, thank you

    • mindcrime a day ago

      It's been blurry for a long time, FWIW. I have books on "Agents" dating back to the late 90's or early 2000's in which the "Intro" chapter usually has a section that tries to define what an "agent" is, and laments that there is no universally accepted definition.

      To illustrate: here's a paper from 1996 that tries to lay out a taxonomy of the different kinds of agents and provide some definitions:

      https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&d...

      And another from the same time-frame, which makes a similar effort:

      https://www.researchgate.net/profile/Stan-Franklin/publicati...

      • ethbr1 a day ago

        The technical difference between agents then and agents now is the fuzzy parameter-mapping capability of LLMs, if used.

        Scaling agent capability requires agents that are able to auto-map various tools.

        If every different tool is a new custom integration that must be written by a person, then we end up where we are today -- specialized agents where there exists enough demand and stability to write and maintain those integrations, but no general-purpose agents.

        Ultimately, parameter mapping in a sane, consistent, globally applicable way is either the key that unlocks an agentic future or the failure that leads to its demise.
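
        As an illustrative sketch (not any particular vendor's function-calling API), the mapping step amounts to handing the model each tool's schema and letting it fill in the arguments:

          import json

          # Tools publish machine-readable schemas; the model's only job is to pick
          # one and fill in its parameters. Tool names and fields here are made up.
          TOOLS = [
              {"name": "create_calendar_event",
               "description": "Book a meeting on the user's calendar",
               "parameters": {"title": "string", "start": "ISO-8601 datetime", "minutes": "integer"}},
              # any tool that ships a schema can be added without new glue code
          ]

          def call_llm(prompt: str) -> str:
              """Placeholder for the model call; expected to return JSON text."""
              raise NotImplementedError

          def map_request(request: str) -> dict:
              prompt = ('Pick one tool and fill its parameters. Reply as JSON '
                        '{"tool": ..., "args": {...}}.\n'
                        f"Tools: {json.dumps(TOOLS)}\nRequest: {request}")
              return json.loads(call_llm(prompt))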

  • williamcotton 2 days ago

    So basically just the concept of feedback in a cybernetic system.

    https://en.wikipedia.org/wiki/Cybernetics

    • bob1029 2 days ago

      > The field is named after an example of circular causal feedback—that of steering a ship (the ancient Greek κυβερνήτης (kybernḗtēs)...

      Now that name makes a lot more sense to me.

      • openrisk 2 days ago

        Which is also the root of the word 'government', so a government agent is doubly cybernetic in a sense

    • 8338550bff96 a day ago

      Then is "agents" just non-spooky coded language for "cyborgs"

      • throw5959 a day ago

        I studied cybernetics. Our teachers called us "cybernets".

  • curious_cat_163 2 days ago

    Yes, and the definition works reasonably well for the core arguments they are making in Section 5.

    I suspect they'll follow up with a full paper with more details (and artifacts) of their proposed approach.

  • htrp 2 days ago

    agents are the 2020s version of data science in the 2010s

    • Kerbonut 2 days ago

      Do you mean that agents are being hyped in the same way data science was in the 2010s, or that they’ll have a similar impact over time? Would love to hear more of your thoughts.

      • snapcaster 2 days ago

        I think he meant it's a similarly blurry term

        • simonw a day ago

          Yeah, what does "data science" mean, exactly?

          • throw5959 a day ago

            Using the scientific method to handle data.

  • baxtr 2 days ago

    To me "Agents" sound like computer programs that interact through APIs?

  • cratermoon 2 days ago

    We're at the phase of the hype cycle where "agent" means whatever the marketing materials want it to mean.

  • behnamoh 2 days ago

    People have been talking about agents for at least 2 years. Remember when AgentGPT came out? How's that going so far? Agents are just LLMs with structured output, which often happens to be JSON describing a function and the arguments to call it with.
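
    In other words, the "agent" part usually amounts to constraining the model's output to a shape like the one below and dispatching on it (an illustrative sketch; the format and function names are made up, not a specific vendor's):

      import json

      # The "structured output" is just JSON naming a function and its arguments, e.g.
      # {"function": "send_email", "arguments": {"to": "a@b.com", "subject": "Hi"}}
      FUNCTIONS = {
          "send_email": lambda to, subject: f"sent '{subject}' to {to}",  # stub
      }

      def dispatch(llm_output: str) -> str:
          call = json.loads(llm_output)
          return FUNCTIONS[call["function"]](**call["arguments"])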

    • mindcrime a day ago

      > People have been talking about agents for at least 2 years.

      WAY longer than that. What's come to the forefront specifically in the last year or two is a very specific subset of the overall agent landscape - what I like to call "LLM Agents". But "Agents" at large date back to at least the 1980s, if not before. For some of the history of all of this, see this page and some of the listed citations:

      https://en.wikipedia.org/wiki/Software_agent

      > Agents are just LLMs with structured output

      That's only true for the "LLM Agent" version. There are Agents that have nothing to do with LLM's at all.

      • simonw a day ago

        Right - the term "user-agent" shows up in the HTTP/1.0 spec from 1996: https://datatracker.ietf.org/doc/html/rfc1945 and there's plenty of history of debates about the meaning of the term before then.

        In 1994 people were already complaining that the term had no universally agreed definition: https://simonwillison.net/2024/Oct/12/michael-wooldridge/

        • mindcrime a day ago

          Yes. I am fond of saying "If you're talking about agents and think the term is something new, go back and read everything Michael Wooldridge ever wrote before talking any further". :-)

  • bsenftner a day ago

    Oh come on! You and I know very well an AI Agent is anything marketing says it is, and that is the absolute final truth.

DebtDeflation a day ago

This whole idea of prompting an LLM, piping the output in as the prompt of another LLM and asking it to do something with it (like critique or edit it), then piping that LLM's output back to the first one with instructions to keep repeating the process until some stopping criterion is met, seems to me to be mostly a money-making scheme to drive up token consumption.
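
For reference, the pattern I'm describing is roughly this (a minimal sketch; call_llm stands in for whatever model API you use), and note that every extra round is two more paid calls:

  def call_llm(prompt: str) -> str:
      """Placeholder for the model call."""
      raise NotImplementedError

  def write_then_critique(task: str, max_rounds: int = 5) -> str:
      draft = call_llm(f"Write: {task}")
      for _ in range(max_rounds):              # each round = two more paid calls
          critique = call_llm(f"Critique this:\n{draft}")
          if "LGTM" in critique:               # arbitrary stop criterion
              break
          draft = call_llm("Revise the draft using this critique.\n"
                           f"Critique:\n{critique}\n\nDraft:\n{draft}")
      return draft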

zombiwoof 2 days ago

Agent is a funding and marketing term imho

Soon it will be AI Microservices

  • bad_haircut72 2 days ago

    Who wants to invest in my startup? It's a Microagent service architecture orchestration platform. All you do is define the inputs, write the agents' algorithms, apply agency by inputting a decision tree (ifs and conditionals), and then a function to format output! And the best part? You do all of it in YAML!

    /sarcasm, hopefully obviously

    • mindcrime a day ago

      I was thinking "shut up and take my money" until you brought YAML into it. Hard pass. ;p

  • ramesh31 a day ago

    >Agent is a funding and marketing term imho

    So was "mobile" 15 years ago. Companies are deploying hundreds of billions in capital for this. It's not going anywhere, and you'd be best off upskilling now instead of dismissing things.