If you’re in New York, open your favorite delivery app and try to order a coffee right now. Somewhere near the price, in the same font size your brain is trained to ignore, you’ll see a new label: “This price was set by an algorithm using your personal data”.
That’s not a hypothetical. That’s New York’s Algorithmic Pricing Disclosure Act, which took effect in November 2025, the first law of its kind in the US. Businesses that skip it face up to $1,000 per violation, and Attorney General Letitia James has already encouraged New Yorkers to report offenders.
But here’s the thing about mandatory labels: they tell you that something is happening. They don’t tell you how. And the “how” is where it gets interesting, because what’s actually being built behind that label is an entirely new transactional architecture. The web is shifting from static price catalogs to real-time offer engines, where the number you see on your screen is the output of a system that knows more about your behavior than you probably realize.
You already know about surge pricing. This is different.
Uber charges more when it’s raining. Airlines charge more when the flight’s almost full. Supply and demand, in real time, with the same price for everyone looking at the same product at the same moment. Like it or not, we’re all used to that by now.
The newer architecture is personalized pricing, sometimes called “surveillance pricing” if you’re a regulator. Here, the price responds to you. Your device, your browsing history, your location, your pattern of behavior on the page.
The FTC launched a formal investigation into this in July 2024, issuing orders to eight intermediary companies, including Mastercard, Revionics, Bloomreach, and McKinsey. The preliminary findings, released in January 2025, confirmed what privacy researchers had been warning about: a wide range of personal data, including precise location, demographics, browsing history, and mouse movements, is being used to set individualized consumer prices.
The data ingestion layer goes deeper than most people expect. The FTC’s issue spotlight on surveillance pricing details how browser fingerprinting captures your screen resolution, installed fonts, browser extensions, and battery information. Your phone at 8% while you’re trying to book a ride home? That’s a collectible signal about urgency. Pair that with session behavior like time-on-page, scroll speed, and whether you’re comparison-shopping across tabs, and you get a real-time behavioral profile. The system is constantly asking one question: is this person an impulsive buyer or a price-sensitive comparer?
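To make the one-question framing concrete, here is a minimal sketch of how a pricing backend might reduce session signals to a single sensitivity score. Every field name, threshold, and weight below is illustrative, invented for this example; the FTC materials describe the categories of signals, not any specific scoring formula.

```python
# Hypothetical sketch only: signal names and weights are made up,
# not taken from any real pricing system.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    battery_pct: float          # device battery level (urgency proxy)
    time_on_page_s: float       # dwell time before acting
    tabs_with_competitors: int  # comparison-shopping signal
    returning_visitor: bool

def price_sensitivity(s: SessionSignals) -> float:
    """Score in [0, 1]: near 0 = impulsive buyer, near 1 = price-sensitive comparer."""
    score = 0.5
    if s.battery_pct < 0.15:
        score -= 0.2   # low battery: likely in a hurry, less likely to shop around
    if s.tabs_with_competitors > 0:
        score += 0.3   # actively comparing prices elsewhere
    if s.time_on_page_s > 120:
        score += 0.1   # lingering suggests deliberation
    if s.returning_visitor:
        score -= 0.1   # known, habitual buyer
    return max(0.0, min(1.0, score))
```

The point of the sketch is how little it takes: a handful of passively collected signals, a few conditionals, and the system has an answer to its one question before you’ve finished scrolling.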
The grocery aisle becomes a lab
In December 2025, an investigation by Consumer Reports and Groundwork Collaborative pulled back the curtain on exactly how this plays out. They recruited 437 volunteers across four cities to shop on Instacart for identical baskets of groceries from the same stores, at the same time. About three-quarters of the products showed different prices to different shoppers. At a Seattle Safeway, a box of Wheat Thins varied by as much as 23%.
Instacart’s defense was revealing. They insisted this was A/B testing, not dynamic pricing. The prices weren’t changing in real time based on demand. They were randomized experiments designed to help retailers understand consumer sensitivity.
Which raises a genuinely interesting question from a product architecture standpoint: where does “testing a price” end and “personalizing a price” begin? If I show you a higher price and you pay it, and I show someone else a lower price and they pay that, and I record both outcomes to optimize future pricing, am I testing or am I personalizing? The technical infrastructure is identical. The intent is the only thing that differs. And intent is hard to audit.
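The infrastructure overlap is easy to show. In this hypothetical sketch (prices and field names invented), the only difference between an A/B test and a personalized price is the assignment function; the storefront, the price table, and the outcome logging are identical.

```python
# Illustrative only: the same price table served two ways.
import hashlib

PRICES = {"control": 4.99, "variant": 5.49}

def ab_test_price(user_id: str) -> float:
    # Randomized experiment: the bucket comes from a stable hash of the
    # user ID, so assignment is arbitrary but consistent across visits.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 2
    return PRICES["variant"] if bucket else PRICES["control"]

def personalized_price(profile: dict) -> float:
    # Personalization: the bucket comes from the behavioral profile.
    if profile.get("price_sensitive"):
        return PRICES["control"]
    return PRICES["variant"]
```

Swap one function for the other and nothing downstream changes, which is exactly why intent, not infrastructure, is the only place the distinction lives.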
The FTC apparently found that line just as hard to draw. Within days of the report, Reuters reported that the commission was probing Instacart over its Eversight AI pricing tool. Instacart ended the program immediately.
And this logic isn’t staying online. Walmart plans to bring electronic shelf labels to 2,300 stores by 2026, and Kroger has been rolling them out since late 2025. These are small digital screens that replace paper price tags and connect to the same cloud-based pricing backends that power e-commerce. The web monetization stack is walking into the physical store. Senators have already raised concerns about whether Kroger’s shelf displays, built in partnership with Microsoft, could pair cameras with AI to detect a shopper’s age and gender and adjust pricing accordingly. Kroger denied it. The technical capability exists regardless.
The KPI shifts from volume to extraction
From a monetization standpoint, the strategic implications here are hard to ignore.
The traditional pricing KPI was volume: how many units can we move at this price point?
The emerging KPI is extraction: what is the maximum amount this specific person will pay?
Economists call the gap between what you’d be willing to pay and what you actually pay consumer surplus. It’s the feeling you get when something costs less than you expected. The algorithmic goal, stated plainly, is to drive that surplus toward zero, to price an item at exactly the dollar amount where you’ll still click “buy” but won’t feel like you got a deal.
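The arithmetic behind that definition is one subtraction. The numbers below are invented for illustration:

```python
# Consumer surplus = willingness to pay minus price actually paid.
def consumer_surplus(willingness_to_pay: float, price: float) -> float:
    return willingness_to_pay - price

# A uniform shelf price leaves surplus with the shopper:
uniform = consumer_surplus(6.00, 4.99)       # roughly 1.01 left on the table

# A personalized price pushes that surplus toward zero:
personalized = consumer_surplus(6.00, 5.95)  # roughly 0.05
```

The optimization target, in other words, is the second number, as close to zero as the system can get without losing the sale.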
Uber made this architecture visible years ago. The same algorithmic system that finds the highest fare a rider will accept also finds the lowest payout a driver will take. Two sides of the same optimization. The rider’s ceiling and the driver’s floor, calibrated in real time, with the platform capturing the spread.
That two-sided squeeze is the template. And it’s moving into groceries, retail, SaaS, and every other category where the pricing backend can be automated.
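The two-sided squeeze reduces to the same kind of arithmetic from the platform’s side. A hedged sketch, with invented numbers; real matching systems estimate these bounds from behavioral data rather than receiving them directly:

```python
# Illustrative only: the platform's margin is the gap between the most
# the rider will accept and the least the driver will take.
def platform_spread(rider_max_fare: float, driver_min_payout: float) -> float:
    return rider_max_fare - driver_min_payout

spread = platform_spread(28.50, 17.25)  # roughly 11.25 captured by the platform
```

Widen the estimate of the rider’s ceiling, lower the estimate of the driver’s floor, and the spread grows, which is why the same behavioral profiling runs on both sides of the transaction.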
Can a label fix a thousand-variable equation?
So back to that mandatory disclosure: “This price was set by an algorithm using your personal data”. As policy, it’s a meaningful first step. It forced the conversation into the open, and other states are following with their own proposals: over 50 bills across 24 state legislatures in 2025 alone.
But as a tool for consumer understanding, it probably falls short. If an algorithm ingests a thousand data points to produce a single price, can a label really give you enough information to evaluate whether that price is fair? You’d need to understand which variables it weighted, how it weighted them, and what price it would have shown someone with a different profile.
Every click is a bid. Every scroll is a signal. Every checkout is a negotiation you didn’t know you were having. The web is no longer a store with prices on the shelf. It’s a high-speed negotiation engine, running millions of micro-transactions against behavioral profiles that most people will never see.
The label tells you the auction is happening. It doesn’t tell you whether you won.