
Ask a Data Ethicist: How Is Price Optimization a Data Ethics Issue?

By Katrina Ingram

Companies tend to want to optimize their pricing, while consumers generally try to find the best deal possible. Traditionally, we’ve navigated this tension in ways that respect ethical norms and, of course, laws such as anti-gouging and anti-price-discrimination rules where applicable. At a moment when tariffs are set to raise prices and the cost of buying a dozen eggs in America feels akin to making a major purchase decision, ensuring fairness in pricing feels more relevant than ever, which brings us to this question:

How is price optimization a data ethics issue?

Data Analytics and Price Optimization

This question about pricing came to mind as I was listening to a presentation on data analytics from a data scientist in the retail sector. They were describing various price optimization strategies they had used while employed with a grocery store chain. They would take into account a number of data points, everything from competitive prices to seasonality to market trends, and then land on the price of the item that would wind up on the shelf. Price optimization is pretty standard practice. Yet, with the ability to access more data and apply machine learning, we can do new things to optimize prices in ways that start to challenge ethical norms and raise legal questions.
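To make the baseline concrete, here is a minimal, hypothetical sketch of the kind of market-based price optimization described above: every input describes the product and the market, and none describes the shopper. The function name, weights, and margins are all invented for illustration, not any retailer's actual model.

```python
# Hypothetical sketch of market-based price optimization: the price is a
# function of product and market signals only, with no data about the buyer.
# All weights and inputs below are illustrative assumptions.

def optimize_price(cost: float, competitor_price: float,
                   seasonality: float, demand_trend: float) -> float:
    """Blend a cost-plus floor with a competitor anchor, then apply
    seasonal and demand multipliers to get a shelf price."""
    floor = cost * 1.15                            # assume a minimum 15% margin
    anchor = 0.6 * competitor_price + 0.4 * floor  # weight the competitor's price
    price = anchor * seasonality * demand_trend
    return round(max(price, floor), 2)

# Every shopper who walks down the aisle sees the same number:
# the inputs describe the market, not the person in front of the shelf.
shelf_price = optimize_price(cost=2.00, competitor_price=3.49,
                             seasonality=1.05, demand_trend=0.98)
```

The point of the sketch is what is absent: nothing about the individual customer enters the calculation, which is what distinguishes this traditional approach from the practices discussed below.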

The Ethics of Pricing

Let’s imagine that a major natural disaster has just taken place. People are in need of food and water. A local store hikes its prices for these limited but necessary goods to an exorbitant level – a can of soup is $50! The retailer is taking advantage of high demand and limited supply at this time of need. This is known as price gouging. Some places have laws against it because many people feel it’s deeply unethical to take advantage of others in a time of need, prioritizing self-serving economic outcomes over human well-being. This is a scenario explored by Michael Sandel in his book, “Justice: What’s the Right Thing to Do?” Sandel’s argument is that our outrage against these “vultures” is a moral clue, pointing towards anger against injustice.

Now, imagine another scenario … you’re hunting for your favorite brand of peanut butter at a grocery store. As you approach, the digitally displayed price tag for the item changes. The store uses a combination of facial recognition, your loyalty profile, smartphone data, and other consumer data points to dynamically price the item, and you are offered the peanut butter at $10.99. However, your friend, who was shopping in the same aisle just minutes earlier, got it for $3.99. Their data resulted in a much better deal! This scenario is fictional, but it highlights the phenomenon of dynamic pricing as it turns into surveillance pricing. It also feels unfair. You might wonder – why am I being charged more for the same product, at the same store, at the same time? It’s not as if your friend had a coupon – at least that would be understandable. Instead, the price difference is based on an opaque decision made using personal data and algorithms. It feels arbitrary to you, but it might be based on a prediction about your willingness to pay. Conversely, if you’re the friend in this scenario, you might be thinking what a great deal – the store is practically giving me the product.

While I suspect many people would likely agree the above scenario doesn’t seem fair, this is where some pricing strategies are going, especially in the realm of online shopping. It’s now possible to take into account a range of data that could be used to determine if you get higher or lower prices. In 2024, Wendy’s announced it would introduce a (much less sophisticated) version of this scenario, which it dubbed “surge pricing,” only to reconsider it when met with public backlash. One marketing professor called the situation a “case study in stakeholder conflict.”

Is this a new type of algorithmically enabled price discrimination (or as McKinsey calls it “digital pricing transformation”) that results in a type of gouging? It’s not clear.

From Dynamic Pricing to Surveillance Pricing

Dynamic pricing is the ability to price an item in real time, taking into account supply and demand. The result is that two people might pay very different prices for the same product or service. It’s not necessarily a new phenomenon. For some industries – like hotels or airline tickets – the ability to incorporate supply and demand variables in real time is widely accepted and deemed fair. People understand why a last-minute airline booking would cost more: there is less availability.
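The supply-and-demand logic behind accepted forms of dynamic pricing can be sketched in a few lines. This is a toy model under stated assumptions, not any airline's or hotel's actual algorithm; the multipliers and caps are invented for illustration.

```python
# Illustrative dynamic pricing: the same base fare moves with remaining
# supply and observed demand, the way airline or hotel pricing is commonly
# understood. The scarcity and demand multipliers are assumptions.

def dynamic_price(base: float, seats_left: int, seats_total: int,
                  recent_bookings: int) -> float:
    scarcity = 1 + (1 - seats_left / seats_total)  # approaches 2x as seats run out
    demand = 1 + min(recent_bookings, 20) / 40     # up to 1.5x during a booking rush
    return round(base * scarcity * demand, 2)

early = dynamic_price(base=200.0, seats_left=150, seats_total=180, recent_bookings=2)
late = dynamic_price(base=200.0, seats_left=5, seats_total=180, recent_bookings=18)

# Two travelers pay different prices, but both prices are driven by the
# same public variables – availability and demand – not by who is booking.
```

Note that the two inputs that move the price are visible, market-level facts. This is why the practice is broadly accepted: a customer can, in principle, reason about why the late booking costs more.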

What’s new is that fine-grained personal consumer data, including inferences, can become part of the pricing decision mix. The FTC has dubbed this “surveillance pricing” and recently released some of the findings from a study into this practice. Specifically, they are looking into data supplied by credit card companies, data brokers, and management consulting firms – intermediaries – who are often involved in algorithmically tweaking prices.

“The FTC’s 6(b) study focuses on intermediary firms, which are the middlemen hired by retailers that can algorithmically tweak and target their prices. Instead of a price or promotion being a static feature of a product, the same product could have a different price or promotion based on a variety of inputs—including consumer-related data and their behaviors and preferences, the location, time, and channels by which a consumer buys the product.” (FTC, 2025)

In Canada, one legal scholar, Chapdelaine, has framed the issue around the “reasonableness” of using personal data for algorithmic personalized pricing (APP), which optimizes for a customer’s willingness to pay. Think about the peanut butter example – it’s your favorite brand. You are more likely to pay more than your friend who is just “meh” about that particular product. Chapdelaine arrives at this conclusion:

“When setting a price, using the personal information of a potential customer to assess their maximum willingness to pay goes against the core principles of valid consent and reasonable purpose under personal data protection law.” (Chapdelaine, 2024)
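A toy sketch makes the mechanism Chapdelaine critiques concrete: the price is scaled by an inferred willingness-to-pay score assembled from personal data. The profile fields, weights, and score formula below are entirely hypothetical, invented only to show how the peanut butter scenario could arise.

```python
# Deliberately simplified sketch of algorithmic personalized pricing (APP):
# the same jar of peanut butter is priced off an inferred willingness-to-pay
# score built from personal data. Fields and weights are hypothetical.

def personalized_price(base: float, profile: dict) -> float:
    """Scale the base price by an inferred willingness-to-pay score in [0, 1]."""
    score = (0.5 * profile["brand_affinity"]     # loyalty-card purchase history
             + 0.3 * profile["income_estimate"]  # broker-supplied inference
             + 0.2 * profile["urgency"])         # e.g., time of day, basket contents
    return round(base * (1 + 2 * score), 2)      # up to 3x base for a "captive" buyer

you = {"brand_affinity": 0.9, "income_estimate": 0.8, "urgency": 0.7}
friend = {"brand_affinity": 0.1, "income_estimate": 0.3, "urgency": 0.2}

# Same shelf, same minute: the loyal fan is quoted far more than the
# indifferent shopper, purely because of who the algorithm thinks they are.
your_price = personalized_price(4.00, you)
friends_price = personalized_price(4.00, friend)
```

Contrast this with the dynamic pricing sketch earlier: here the decisive inputs are inferences about the person, not observable market conditions, which is exactly the use of personal information Chapdelaine argues fails the consent and reasonable-purpose tests.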

This is an interesting argument because it goes outside traditional consumer protection laws and instead focuses on privacy laws and the reasonableness of using personal information for a purpose that, if made transparent, most people would find objectionable. It made me wonder whether this might also run afoul of the GDPR (Article 22), which covers automated individual decision-making using personal data.

Isn’t This Just Better Price Optimization?

Yes – it is!  However, when thinking about issues of data ethics, we need to ask if the application of the data is fair in this context. Should the price of an item be driven by data about the product and the market, or about who is purchasing it? Historically it’s been more about the former, but increasingly all kinds of personal data are being used for pricing decisions.

Even within the subset of a consumer’s data, we might draw finer distinctions about which specific details are permissible. Should companies be allowed to charge a person more for a product or service based on gender, race, age, ability, or other protected categories? That feels discriminatory and, where those categories are legally protected, may well be unlawful. What about income level, ZIP code, or home ownership? What about the proxy variables that often correlate with protected categories? How about inferences that someone is a new parent, recently divorced, or feeling depressed? The categorizations and classifications enabled by data and machine learning – applied at scale, and often in real time – are challenging our notions of fair pricing. We need to decide – where is the line?

Further Reading

Are you ready for personalized pricing?

The emergent dangers of surveillance pricing 

EU Parliament Personalised Pricing Study

Send Me Your Questions!

I would love to hear about your data dilemmas or AI ethics questions and quandaries. You can send me a note at hello@ethicallyalignedai.com or connect with me on LinkedIn. I will keep all inquiries confidential and remove any potentially sensitive information – so please feel free to keep things high level and anonymous as well. 

This column is not legal advice. The information provided is strictly for educational purposes. AI and data regulation is an evolving area and anyone with specific questions should seek advice from a legal professional.