> LLMs are humanity's "first contact" with non-animal intelligence.
I'd say corporations are also a form of non-animal intelligence, so it's not exactly first contact. In some ways, LLMs are less alien, in other ways they're more.
There is an alignment problem for both. Perhaps the lesson to be drawn from the older alien intelligence is that the most consequential aspect of alignment is how an AI's benefits are distributed across humanity and how they reshape politics.
I disagree. Corporations are a form of collective intelligence, or group-think, which is decidedly biological or "animal", just like herding, flocking, hive-minds, etc.
It could be like flocking if they were free to do what their members collectively wanted (without imperatives like "maximize shareholder value").
Couldn't the regulations just be viewed as environmental changes that the entities would need to adapt to?
Rob Miles: Think of AGI like a corporation?
I don't agree entirely, but I do think that "corporation" is a decent proxy of what you can expect from a moderately powerful AI.
It's not going to be "a smart human". It's closer to "an entire office tower worth of competence, capability and attention".
Unlike human corporations, an AI may not be plagued by all the "corporate rot" symptoms: degenerate corporate culture, office politics, CYA, self-interest, and a lack of fucks given at every level. Currently, those internal issues are what keep many powerful corporations in check.
This makes existing corporations safer than they would otherwise be. If all of those inefficiencies were streamlined away? Oh boy.
These inefficiencies are akin to having some "wrong" weights in a huge model. Corporations also average over the contributions of their individual members, positive or negative. And negative feedback loops may be individually detrimental but collectively optimising.
Also: democracy, capitalism/the global economy, your HOA, a tribe, etc etc
Even a weather system is a kind of computational process and "intelligent" in a way.