
AI and price fixing: Collusion by machines

Troy Pilkington, Sarah Keene, Liz Blythe, Zoe Sims and Chris Brunt of Russell McVeagh write about a critical facet of using AI in business.

As firms increasingly move towards using AI-based algorithms to set their prices, algorithms could make it easier for competitors to achieve and sustain collusion without any human interaction.

The New Zealand Commerce Act 1986 (Commerce Act) prohibits anti-competitive cartel agreements between competitors, such as price-fixing agreements. But what happens when businesses deploy AI pricing algorithms that have not been trained to recognise, and avoid, anti-competitive cartel behaviour? There is a risk that, as businesses increasingly move towards using AI-based algorithms to set their prices, those algorithms could make it easier for competitors to achieve and sustain collusion without any formal agreement or human interaction.

One form of problematic conduct involving AI-based pricing algorithms arises where two or more humans agree to fix prices but, rather than agreeing an explicit price, agree to implement a joint pricing algorithm that coordinates prices on their behalf (i.e. human-to-human collusion on the selection of the algorithm).


Companies have already been prosecuted for such conduct in Europe: in one case for agreeing to reconfigure automated pricing software so as not to undercut each other, and in another for agreeing to implement an algorithm to allocate customers between them. This does not differ substantially from traditional price fixing: it is still a cartel agreement between two parties and is prohibited conduct under the Commerce Act. The only difference is how the agreement is implemented (i.e. using a common AI algorithm).

Another example of problematic conduct could arise where a business outsources its pricing function to a third party. If multiple competitors engage the same third-party agent to set their prices using an identical algorithm, with the knowledge that their competitors are also engaging the same price-setting agent, there is a risk that this would amount to 'hub and spoke' collusion.

For example, travel agents selling on an online platform in Lithuania were prosecuted after the platform's administrator unilaterally imposed technical restrictions on the ability of the independent travel agents to offer packaged tours at a discount exceeding 3 per cent. The travel agents who knew about the restriction, and took no steps to oppose it, were fined for engaging in cartel conduct. Notwithstanding the lack of direct coordination between the competitors, the New Zealand Commerce Commission (NZCC) or the courts could similarly form the view that such an arrangement has the purpose or effect of controlling prices between the parties via a third-party conduit (i.e. the AI), which is prohibited conduct under the Commerce Act.

The NZCC's response to the examples above is likely to be straightforward: agreements to use the same AI algorithm, or to appoint a common third-party pricing agent, would be treated as cartel conduct under its existing cartel conduct paradigm.

A greyer area arises where, with no human involvement or instruction, a price-setting AI algorithm teaches itself to coordinate with competitors, referred to as tacit algorithmic collusion. While it is unclear to what extent current AI technology allows for this tacit coordination, a 2018 University of Bologna study found that if two competitors both employ pricing algorithms, and each gives its software total autonomy to set prices, the two algorithms will reach collusive, price-fixing arrangements more often than not.
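To make the mechanism concrete, the sketch below shows the kind of setup such studies examine: two independent Q-learning agents repeatedly setting prices in a simulated duopoly, each observing only the last round's prices and its own profit, with no communication channel between them. The payoffs, parameters and market rules here are illustrative assumptions for the example, not the Bologna study's actual model.

```python
import random

# Illustrative sketch only: two independent Q-learning pricing agents in a
# repeated duopoly. Parameters and demand are invented for the example.

PRICES = [1, 2, 3, 4, 5]           # discrete price levels (5 ~ monopoly price)
COST = 1                           # marginal cost
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

def profit(own, rival):
    """Simplified demand: the cheaper firm wins the market; ties split it."""
    if own < rival:
        return float(own - COST)
    if own == rival:
        return (own - COST) * 0.5
    return 0.0

# Each agent's state is the pair of last-round prices. Each learns only from
# its own profit signal and never communicates with the other agent.
q = [{}, {}]

def choose(agent, state):
    """Epsilon-greedy action selection over the agent's Q-table."""
    if random.random() < EPS or state not in q[agent]:
        return random.choice(PRICES)
    table = q[agent][state]
    return max(table, key=table.get)

def update(agent, state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    table = q[agent].setdefault(state, {p: 0.0 for p in PRICES})
    nxt = q[agent].setdefault(next_state, {p: 0.0 for p in PRICES})
    table[action] += ALPHA * (reward + GAMMA * max(nxt.values()) - table[action])

state = (random.choice(PRICES), random.choice(PRICES))
for _ in range(200_000):
    p0, p1 = choose(0, state), choose(1, state)
    next_state = (p0, p1)
    update(0, state, p0, profit(p0, p1), next_state)
    update(1, state, p1, profit(p1, p0), next_state)
    state = next_state

# Inspect the learned (greedy) prices; in runs like this they can settle
# above the competitive level, though outcomes vary with the parameters.
g0 = max(q[0][state], key=q[0][state].get)
g1 = max(q[1][state], key=q[1][state].get)
print("learned prices:", g0, g1)
```

The point of the sketch is that neither agent is told to coordinate: each simply discovers, through trial and error, that matching the other's higher prices can be more profitable than undercutting.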

Given that tacit algorithmic collusion does not involve any element of human agency, and is often conducted by computers whose decision-making processes cannot be readily explained, it is not clear how the NZCC would regulate and enforce against this type of cartel-like behaviour. This raises the questions of whether regulators and policy-makers should revisit the concepts of "agreement" and "collusion" for competition law purposes, and whether there is a need to specifically regulate pricing algorithms.

European Commissioner Margrethe Vestager has indicated that the European Commission (EC) will likely take a strict liability approach to enforcement against cartel conduct by AI. Under this approach, a business that uses price-setting AI will be liable for any software-initiated price-fixing behaviour, even where humans did not initiate (or even understand) that behaviour. Vestager refers to this as "compliance by design":

[Pricing] algorithms need to be built in a way that doesn’t allow them to collude. What businesses need to know is that when they decide to use an automated system, they will be held responsible for what it does. So they had better know how that system works.
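What "compliance by design" might look like in practice remains an open question. As one hypothetical illustration only (the names and policy below are invented for the example, and would not in themselves amount to legally sufficient compliance), a pricing engine could be wrapped in a guardrail that caps algorithmic prices at a cost-justified ceiling and records the reason for every decision, so the system's behaviour stays explicable to the humans responsible for it:

```python
from dataclasses import dataclass

# Hypothetical guardrail around an automated pricing engine. A sketch of one
# possible safeguard, not an established compliance technique.

@dataclass
class PriceDecision:
    product: str
    proposed: float
    approved: float
    reason: str

def guarded_price(product: str, proposed: float, cost: float,
                  max_markup: float = 0.5) -> PriceDecision:
    """Cap algorithm-proposed prices at a cost-justified ceiling and record
    why, so every automated decision can be audited after the fact."""
    ceiling = cost * (1 + max_markup)
    if proposed > ceiling:
        return PriceDecision(product, proposed, ceiling,
                             f"capped at cost + {max_markup:.0%} markup")
    return PriceDecision(product, proposed, proposed, "within policy bounds")

# Usage: every price the algorithm emits passes through the guardrail,
# leaving an auditable trail a human can review.
decision = guarded_price("widget", proposed=9.80, cost=5.00)
print(decision)
```

The design choice here is auditability: every automated price comes with a recorded justification, which speaks directly to Vestager's point that businesses "had better know how that system works".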

While some overseas competition authorities have expressed doubt about the effectiveness of this approach, and the NZCC has yet to bring any enforcement action in this space, it is possible that, when it does, it will also adopt a strict liability enforcement approach in New Zealand.

As no cases alleging "collusion by machines" have yet been brought internationally, how regulators, the courts and policy-makers will approach these issues remains to be seen. However, it seems only a matter of time before the first cases are brought, and the conduct of machines faces the same scrutiny as our own.

This column was written by Competition Partner Troy Pilkington, Competition Partner Sarah Keene, Technology Partner Liz Blythe, Solicitor Zoe Sims and Solicitor Chris Brunt of law firm Russell McVeagh.