The personality thing seems kind of tautological / uninteresting, as I have pointed out before: https://news.ycombinator.com/item?id=46905692.
Psychological instruments and concepts (like MBTI) are constructed from the semantics of everyday language. Personality models (being based on self-report rather than actual behaviour) are not models of actual personality, but of the correlation patterns in the language used to discuss things semantically related to "personality". It would thus be extremely surprising if LLM output patterns (trained on people's discussions of and thinking about personality) did not also result in learning similar correlational patterns (and thus similar patterns of responses when prompted with questions from personality inventories).
The real and more interesting part of the paper is the use of statistical techniques to isolate sub-networks which can then be used to emit outputs more consistent with some desired personality configuration. There is no obvious reason to me that this couldn't be extended to other types of concepts, and it kind of reads to me like a very cheap, training-free sort of "fine-tuning".
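For intuition, here is a hedged toy sketch of one such training-free technique, contrastive activation steering (all names, dimensions, and the simulated data are illustrative assumptions, not the paper's actual method): estimate a "trait direction" from the mean difference between activations of trait-positive and neutral outputs, then add a scaled copy of that direction at inference time as a dial.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim = 8  # toy size; real hidden states are far larger

# Unknown "ground truth" direction the simulated data is built around.
trait_direction = rng.normal(size=hidden_dim)

# Simulated hidden activations: trait-positive samples are neutral samples
# shifted along the trait direction, plus a little noise.
neutral = rng.normal(size=(100, hidden_dim))
trait_pos = neutral + trait_direction + 0.1 * rng.normal(size=(100, hidden_dim))

# Estimate the steering vector from the paired contrast.
steering = (trait_pos - neutral).mean(axis=0)

def steer(activation, strength=1.0):
    """Nudge an activation along the estimated trait direction."""
    return activation + strength * steering

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

x = rng.normal(size=hidden_dim)
x_up = steer(x, strength=2.0)     # "increase" the trait
x_down = steer(x, strength=-2.0)  # "decrease" it

print(cosine(steering, trait_direction))  # close to 1: direction recovered
```

No gradients or retraining are involved; the only learned object is one vector per trait, which is what makes this kind of intervention so cheap compared to fine-tuning.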
Some sort of software like ComfyUI with variable application of model-specific personality traits would be great: increase conscientiousness, decrease neuroticism, increase openness, etc. Make it agentic: have it do intermittent updates based on a record of experiences, include all 27 emotional categories, and give it an autonomous update process so it adapts to interactions in real time: https://www.pnas.org/doi/10.1073/pnas.1702247114
Could be very TARS like, lol.
It'd also be interesting to keep a similar rolling record of episodic memory, so your agent has a more human-like memory of its interactions with you.
Another thing to consider about LLMs is that the nature of the training and the core capability of transformers is to mimic the function of the processes by which the training data was produced; by training on human output, these LLMs are in many cases implicitly modeling the neural processes in human brains which resulted in the data. Lots of hacks, shortcuts, low resolution "good enough" approximations, but in some cases, it's uncovering precisely the same functions that we use in processing and producing information.
> Another thing to consider about LLMs is that the nature of the training and the core capability of transformers is to mimic the function of the processes by which the training data was produced; by training on human output, these LLMs are in many cases implicitly modeling the neural processes in human brains which resulted in the data. Lots of hacks, shortcuts, low resolution "good enough" approximations, but in some cases, it's uncovering precisely the same functions that we use in processing and producing information.
I would argue this is deeply false, my classic go-to examples being that neural networks bear almost no real relation to any aspect of actual brains [1], and that modeling even a single cortical neuron requires an entire, fairly deep neural network [2]. Neural nets really have nothing to do with brains, even though brains may have loosely inspired the earliest MLPs. Really, NNs are just very powerful and sophisticated curve (manifold) fitters.
> Could be very TARS like, lol.
I just rewatched Interstellar recently and this is such a lovely thought in response to the paper!
[1] https://en.wikipedia.org/wiki/Biological_neuron_model
[2] https://www.sciencedirect.com/science/article/pii/S089662732...
Agreed.
Everything in a model is a correlation of behavior with context and context with behavior.
"Mindset" is a factor across the continuum of scales.
Are we solving a math problem or deciding on entertainment? We become entirely "different brains" in those different contexts, as we configure our behavior and reasoning patterns accordingly.
The study is still interesting. The representation, clustering, and bifurcations of roles may simply be one end of a continuum, but they are still meaningful things to specifically investigate.
Thank you, I came here to say as much, in less eloquent terms.
It's not surprising to find clustered sentiment in a slice of statistically correlated language. I wouldn't call this a "personality" any more than I would say the front grille of a car has a "face".
Deterministically isolating these clusters, however, could prove to be an incredibly useful technique for both using and evaluating language models.
It's not even really the researchers' fault: academic personality research is in general philosophically very weak, in that it too almost always conflates models of (and talk about) personality with actual personality, and rarely checks whether instruments like the MBTI or Five-Factor Model actually correlate meaningfully with real behaviours.
Those studies that do find correlations between self-reported personality and actual behaviours tend to find them in a range of something like 0.0 to 0.3, maybe 0.4 if you are really lucky, which means "personality" measured this way explains at most something like 16% of the variance in behaviour.
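To make the arithmetic concrete (the 0.0–0.4 range is from the comment above; variance explained is just the squared correlation):

```python
# Variance in behaviour explained by a correlation r is r**2.
for r in (0.1, 0.3, 0.4):
    print(f"r = {r}: r^2 = {r**2:.2f} -> {r**2:.0%} of variance explained")
# r = 0.4 gives 0.16, i.e. the "16% at max" figure.
```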
I don’t think this is limited to this part of academia, or to academia at all, but I do think it’s a bit irresponsible of them to assume prior rigor in those personality tests.
On top of that, a confounding issue is that it is human nature to anthropomorphize things. What is more likely to be anthropomorphized than a construct of written language, now the primary medium of knowledge transfer between humans? I can’t help but feel that this wishful bias contributed to skipping the due diligence of choosing an appropriate metric to measure with.
Yup, I agree it is a general problem, and related to a tendency to over-anthropomorphize. At least in this case there was still something pretty good in the paper anyway.