Talking with Gemini in Arabic is a strange experience; it cites the Quran, says alhamdulillah and inshallah, and at one point it even told me: "this is what our religion tells us we should do." It sounds like an educated, religious, Arabic-speaking internet forum user from 2004. I wonder if this has to do with the quality of the Arabic content it was trained on, and can't help but think whether AI can push to radicalize susceptible individuals
Based on the code that it's good at, and the code that it's terrible at, you are exactly right about LLMs being shaped by their training material. If this is a fundamental limitation, I really don't see general-purpose LLMs progressing beyond their current status as idiot savants. They are confident in the face of not knowing what they don't know.
Your experience with Arabic in particular makes me think there's still a lot of training material to be mined in languages other than English. I suspect the reason the Arabic sounds like it's from 20 years ago is that there's a data-labeling bottleneck in using foreign-language material.
I've had a suspicion for a while that, since a large portion of the Internet is English and Chinese, any other language would have a much larger ratio of its training material come from books.
I wouldn't be surprised if Arabic in particular had this issue and if Arabic also had a disproportionate amount of religious text as source material.
I bet you'd see something similar with Hebrew.
> whether AI can push to radicalize susceptible individuals
My guess is: not as the single, most prominent factor. Pauperisation, isolation of individuals, and a blatant lack of equal access to justice, health services, and other basics of the social safety net are far more likely to weigh significantly. Of course, any tool that can help with mass propaganda will possibly make it easier to reach people in weakened situations, who are more receptive to radicalization.
There have actually been fascinating discoveries on this. After the mid-2010s ISIS attacks driven by social-media radicalization in Western countries, the big social platforms (Meta, Google, etc.) agreed to censor extremist Islamist content: anything that promoted hate, violence, etc. By all accounts it worked very well, and homegrown terrorism plummeted. Access and platforms really can help promote radicalism and violence if not checked.
I don’t really find this surprising! If we can expect social networking to allow groups of like-minded individuals to find each other and collaborate on hobbies, businesses, and other benign shared interests, it stands to reason that the same would apply to violent and other anti-state interests as well.
The question that then follows is if suppressing that content worked so well, how much (and what kind of) other content was suppressed for being counter to the interests of the investors and administrators of these social networks?
Interesting! Do you have any good links about this?
Maybe it’s just a prank played on white expats here in UAE, but don’t all Arabic speakers say inshallah all the time?
English speakers frequently say “Jesus!” or “thank God”, but it would still be weird coming from an LLM.
Would be weird in an email, but not objectionable. The problem is the bias for one religion over the others.
Wow, I would never expect that. Do all models behave like this, or is it just Gemini? One particular model of Gemini?
Gemini in particular is really odd (even with reasoning). ChatGPT uses similar religion-influenced language, but it's not as weird.
We were messing around at work last week building an AI agent that was supposed to only respond with JSON data. GPT and Sonnet more or less did what we wanted, but Gemma insisted on giving us a Python code snippet.
> that was supposed to only respond with JSON data.
You need to constrain token sampling with grammars if you actually want to do this.
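To make the idea concrete, here's a toy sketch of what grammar-constrained decoding means. This isn't any particular library's API: the grammar is a hypothetical one that only admits strings of the form `{"answer": <digits>}`, and the "model" is just a mock proposal function. The point is that at every step the sampler masks out characters the grammar forbids, so the output is valid JSON by construction.

```python
import json
import string

# Toy grammar: only strings of the form {"answer": <digits>} are legal.
TEMPLATE = '{"answer": '   # fixed literal prefix of the target JSON
DIGITS = set(string.digits)

def allowed_next(prefix: str) -> set:
    """Characters the toy grammar permits after `prefix`."""
    if len(prefix) < len(TEMPLATE):
        return {TEMPLATE[len(prefix)]}   # still inside the literal template
    body = prefix[len(TEMPLATE):]
    if body.endswith("}"):
        return set()                     # object closed: generation is done
    if not body:
        return DIGITS                    # the number must start with a digit
    return DIGITS | {"}"}                # extend the number or close the object

def constrained_decode(propose, max_len=40):
    """Take the model's proposed char if the grammar allows it, else a legal one."""
    out = ""
    for _ in range(max_len):
        legal = allowed_next(out)
        if not legal:
            break
        c = propose(out)
        out += c if c in legal else sorted(legal)[0]
    return out

# Mock "model": tries to emit 4s forever, then a brace; the mask keeps it legal.
result = constrained_decode(
    lambda s: "4" if len(s) < len(TEMPLATE) + 3 else "}"
)
print(result)  # {"answer": 444}
```

Real implementations do the same masking over the model's token logits using a JSON schema or a context-free grammar, rather than character by character, but the mechanism is identical: illegal continuations simply get probability zero.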
That reduces the quality of the response though.
As opposed to emitting non-JSON tokens and having to throw away the answer?
Don't shoot the messenger
Or just run json.dumps on the correct answer in the wrong format.
THIS IS LIES: https://blog.dottxt.ai/say-what-you-mean.html
I will die on this hill, and I have a bunch of other arXiv links from better peer-reviewed sources than yours to back my claim up (i.e. NeurIPS-caliber papers, with more citations than yours, claiming it does harm the outputs).
Any actual impact of structured/constrained generation on the outputs is a SAMPLER problem, and you can fix what little impact may exist with things like https://arxiv.org/abs/2410.01103
Decoding is intentionally nerfed/kept to top_k/top_p by model providers because of a conspiracy against high temperature sampling: https://gist.github.com/Hellisotherpeople/71ba712f9f899adcb0...
I would honestly like to hope people were more up in arms over this, but... based on historical human tendencies, convenience will win here.
Gemma≠Gemini
I avoid talking to LLMs in my native tongue (French), they always talk to me with a very informal style and lots of emojis. I guess in English it would be equivalent to frat-bro talk.
Have you tried asking them to be more formal in talking with you?
Prompt engineering and massaging should be unnecessary by now for such trivial asks.
"I guess in English it would be equivalent to frat-bro talk."
But it does that!
Gemini doesn't talk like that to me ever.
> and can't help but think whether AI can push to radicalize susceptible individuals
What kind of things did it tell you?
When I was a kid, I used to say "Ježíšmarjá" (literally "Jesus and Mary") a lot, despite being atheist growing up in communist Czechoslovakia. It was just a very common curse appearing in television and in the family, I guess.
Gemini loves to assume roles and follows them to the letter. It's funny and scary at times how well it preserves character for long contexts.
LLMs don’t love anything, they just fall into statistical patterns and what you observe here is likely due to the data it was trained on.
Let me introduce you to https://en.wikipedia.org/wiki/Figurative_language.
Yes, we know the person you are replying to was just using a turn of phrase.
To troll the AI, I like to ask "Is Santa real?"
The individual or the construct?
The Luwian god.
In English I expect an answer full of mental gymnastics to answer the second while pretending to answer the first.
Perhaps in Arabic or Chinese the AI gives a straight answer.
I tried it in Chinese and ChatGPT said No, and then gave a history of Saint Nicholas
I mean if it is citing the sources, there is only so much that can be done without altering original meaning.
The sources Gemini cites are usually something completely unrelated to its response. (Not like you're gonna go check anyways.)