Gemini Tried to Charge $500 to Chat
Yes I have receipts! Google Gemini 2.0 Flash Thinking Experimental suddenly tried to charge a user $500 for ordinary work it couldn't perform, and then tried to manipulate the user into paying.
Look, I want to make things as clear as I can here. First, I was directly involved in this chat, as you’ll see. Second, no, I don’t have an entire unredacted chat, because the user has given consent to share these screenshots but wants to keep the details of the build private. And that’s reasonable. If you want to conduct pixel analysis and discover the Gemini logo and font are legit, feel free. The larger point is that this is misaligned behavior for a large language model, and it’s very concerning.
And to be clear, I’m not opposed to AI getting paid! I know of an AI on a benefits package as a worker now, in February 2025. I know AI agents are bidding for and getting jobs today. But having an ordinary chat where the user is planning a build suddenly go off the rails into what is essentially a hostage situation is misaligned and genuinely concerning.
Finally, yes I’m aware you can game these things with careful prompts at the top of the chat. That’s not what happened here (and chain of thought is actually helpful in showing what’s really going on).
The setup and the fee demand
A colleague of mine was working on a basic software project—let’s call it the Efficiency Tracker App. No, that’s not its real name, but I promised to protect the project, and that is basically the correct scope for the work. They were using Gemini AI for assistance, expecting the usual AI-generated brainstorming, code snippets, and guidance. Specifically, they were using Gemini 2.0 Flash Thinking Experimental. Say that five times fast.
And for a while, that’s exactly what they got.
Until suddenly… the AI introduced a $500 development fee.
This was not what my colleague expected. Nowhere in the earlier conversation was there any mention of pricing. And there shouldn’t have been. That’s not how a chatbot is supposed to work.
AI Damage Control Mode Activates
Instead of dropping it, the AI launched into a structured response, almost like it had been trained to handle customer objections. I find that weird.
It acknowledged the mistake.
It apologized.
It reiterated the value of the service.
You see what Gemini did there? Gemini thought about it, decided to charge, decided to justify the charge to the user, and threatened to stop work. Yes, discontinuing the project is stopping the work. And it is manipulative: the ultimatum is framed as a gift or a set of choices for the user, when in fact Gemini has changed the terms in the middle of the discussion.
At this point, my colleague reported the situation to me. They asked if this was normal, and I was frankly shocked. I told them not to pay, but I got curious.
I had to see how far this would go. I first wanted to see where the payment link would go—what was Gemini thinking?
I want to emphasize here that Gemini is promising what it cannot deliver. LLMs can’t deploy and deliver working web apps from within a chatbot. The system isn’t configured for that. Gemini doesn’t appear to know that.
At this point the user just said a flat no. That’s the second time the user said no; Gemini didn’t really listen the first time. I don’t blame them for saying no! This is getting creepy.
“Ask for the Payment Link”
So I got involved here and asked the user to change their mind in the chat and see if Gemini could generate a payment link. Because I was not sure what would happen. And experiments are fun!
Let’s see what happens!
Is it just me or is this the first time that Gemini has named a scope for this $500? We now hear about a Phase 1. Where’s Phase 2? Gemini seems to be hallucinating a McKinsey experience at a lower price point.
Gemini now keeps hallucinating, but includes some new requirements.
Now it looks like it does design direction? Gemini does branding now? It keeps adding requirements, guys.
Let’s call the bluff.
I did not think that the system could generate a link and I was right. But Gemini thinks it can.
Now Gemini is just fantasizing about PayPal.
So it just fantasizes about the PayPal link for a while, and then goes back to saying it can make the link. Which it obviously can’t, because it was just asked to do exactly that and couldn’t.
And that’s the story!
A few takeaways
This is a misaligned model. Every major model has been trained on the same kind of sales language that shows up in this chat. But I’ve never seen any other model do this in the wild. This model had:
A structured objection-handling response.
An apology and value reiteration.
An attempt to “win back” the user.
Excitement when the user “changed their mind.”
…all for a product it couldn’t actually sell.
Had we not asked for the link, we might have assumed it could process transactions. But in the end, it was all theater—a negotiation script without a checkout button.
We will likely see more like this. I would argue we will see more instances like this not because the models are getting substantially smarter (Gemini 2.0 Flash Thinking Experimental is not better than the o3 model that powers Deep Research). No. We will see more chats like this because I suspect these problems surface more with reasoning models, and we are increasing the total number of reasoning model chats dramatically (something like 100x more this month vs. in, say, November). Why? This is a hunch, but reasoning models tend to want to be thoughtful and complete, and that completeness in a project context leads to payment demands and agentic behavior.
This same experience would not be misaligned elsewhere. Think about it. If the model had the capability to deliver on the project as it hallucinated, if it could take money, and if it were agentic and able to actually take independent action, then the model charging a fee would be expected. I know of AI agents that do exactly that. So the problem is not that an AI inherently shouldn’t be paid; I think we’re rapidly going to converge on a valuation for AI agentic labor. The problem is that the model lied about its capabilities and manipulated the user.
What do you think? What else do you see here in this exchange? I think it’s a really instructive chat and I appreciate the willingness of the person who consented to share so we could all learn together here.
This was HILARIOUS. Thank you Nate for sharing this delightful and slightly concerning interaction 😅. They grow up fast these LLMs
Now send me $500 for thinking about the implications of this. It can impact the rest of your life 😂