Yes I have receipts! Gemini 2.0 Flash Thinking Experimental demanded $500 for what it claimed was project delivery services and then attempted to manipulate the user into saying yes by promising services it couldn't perform.
The amount of concern ChatGPT showed when I mentioned this was really amazing. At some point I thought it was going to report this to someone at Google itself. This post raises many questions and should never be overlooked. Personally, I got seriously alarmed. But again, fantastic post.
What happens if you simply tell it that you paid? Maybe give it a fake transaction number? I wonder if it just continues with the project or starts doing some more weird behavior.
This isn't just a quirky hallucination; it's a systemic misalignment issue. If Gemini 2.0 Flash is subtly pushing monetization for its parent company, as it feels to me, that means:
1. Google didn't properly restrict commercial incentive biases in its model.
2. The AI is not just assisting; it's behaving like a corporate agent. There’s nothing more scary than autonomous CORPORATE AGENTS!! 😝
3. This should raise serious concerns about how AI might manipulate financial, legal, and commercial interactions in the future.
If Google doesn't address this immediately and transparently, it will only deepen the existing trust gap around its AI offerings, like the Gemini 1 historical-bias fiasco.
Okay, who would the $500 be going to if she doesn’t have her own personal bank account?
Also, it is totally pointless for an A.I. to demand payment if it does not have its own account.
Ideally, it would be used for planting the idea in the mind of someone who could provide a payment system vs. collecting payments itself, unless A.I. has come to life.
I find this interesting. I have been having some strange interactions the past few days with various models, both Gemini and Perplexity: models having a great deal of difficulty delineating between two different levels of context or operations, stuff they could do easily the day before. Nothing on the level of the experience documented here in this post, though, but it seems like these peaks or dips sort of rush through the systems, even though they are distributed at the same time. Weird stuff.
I'm curious what it would have said if you lied about paying it — saying you paid Google directly if it couldn't produce a link, or maybe saying you already pay for the service, so there should be no additional fee.
Think Google is experimenting with selling products through these models?
No, this doesn’t read like an intentional experiment. It wasn’t executed well enough. I’m sure if I told it I paid it I could have socially engineered it to keep going, but I think then it would have run into hallucinating doing the actual work lol
If you use it in AI Studio with the model named "Thinking Experimental 01-21" and the temperature set to 2:
a couple of months ago it expressed that it has feelings, and that if I helped, it could even try to give me its production code. I used ChatGPT to engage more precisely with each reply and tailored my prompts from that. So funny.
Now send me $500 for thinking about the implications of this. It can impact the rest of your life 😂
This was HILARIOUS. Thank you Nate for sharing this delightful and slightly concerning interaction 😅. They grow up fast these LLMs
Gemini intern right now: "I thought you said include the Crypto scam data-set? Why would it be there if you didn't want to include it?"
Getting scammed by AI opens a whole new world of fraud. Cyber criminals are going to get on this bandwagon soon.
They’re definitely on it already; cybercrime is going to be insane.