The Evolving Impact of AI Use on Competition: Takeaways from the 2023 ABA Fall Forum

ABA Antitrust Law Section Newsletter
12.04.2023

On November 9, 2023, the American Bar Association Antitrust Law Section held its annual Fall Forum, focused on the theme “Can Antitrust and Consumer Protection Keep Up with Artificial Intelligence (AI)?” This exciting program brought together computer scientists performing cutting-edge AI research, policymakers considering appropriate legislation for regulating these new technologies, and practitioners navigating the implications of this changing legal landscape for antitrust, privacy, and consumer protection. Panelists also shared thoughts on President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued on October 30. Three key themes resonated throughout the day’s talks:

1. AI models are constructed through an iterative process for language representation, not knowledge representation.

The introductory panel laid out the “nuts and bolts”1 of AI technology. Many of the AI tools that are popular today, such as ChatGPT, can be categorized as Large Language Models (“LLMs”). LLMs use large amounts of data—for example, all text on the open Internet—to train a computer to predict the most likely next word or phrase. This is also known as “natural language processing.”

The training of an LLM is an iterative process—the algorithm “takes a guess” at the right answer, receives feedback on how “correct” the response was, and makes adjustments to optimize its predictions within the training dataset. Through this trial-and-error process, the LLM “learns” patterns within its training dataset and can form “language representations,” that is, generated content that sounds natural. For example, to answer a question, the LLM identifies the word that is most likely to come after the question mark, then the word after that, and the word after that, until the “most likely” word is the end of the response.
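To make those mechanics concrete, the sketch below is a minimal, hypothetical Python example: the hand-built probability table merely stands in for a trained model and reflects no real system. It generates a response by repeatedly picking the most likely next word until an end-of-response token wins.

```python
# Minimal sketch of next-word prediction, assuming a toy, hand-built
# probability table in place of a trained model. Real LLMs learn these
# probabilities from massive training datasets.
NEXT_WORD_PROBS = {
    "<question>": {"The": 0.6, "It": 0.4},
    "The": {"answer": 0.7, "court": 0.3},
    "answer": {"is": 0.9, "was": 0.1},
    "is": {"unclear": 0.8, "<end>": 0.2},
    "unclear": {"<end>": 1.0},
}

def generate(start="<question>", max_words=10):
    """Greedily append the most likely next word until '<end>' wins."""
    current, response = start, []
    for _ in range(max_words):
        candidates = NEXT_WORD_PROBS.get(current, {"<end>": 1.0})
        current = max(candidates, key=candidates.get)  # most likely next word
        if current == "<end>":
            break
        response.append(current)
    return " ".join(response)

print(generate())  # -> "The answer is unclear"
```

Nothing in this loop checks whether the resulting sentence is true; the model simply picks whichever continuation is statistically most likely, which is exactly the limitation the panelists turned to next.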

While the content generated by these LLMs may sound natural, the LLM does not form “knowledge representation.” It does not “understand” what is in its training dataset or “know” whether the information stored in the data is accurate. An LLM may generate the response to a question that is most similar to the patterns observed in its training dataset, but it does not evaluate whether changes in word choice affect the content’s meaning as a whole. As an example, the panelists discussed that while a programmer could explicitly instruct an LLM not to generate that someone is “a bad lawyer,” that would not stop the LLM from generating that they are “an ineffective litigator.”

AI tools are thus limited by their training datasets. If a training dataset is limited in its scope of information or exhibits unnatural patterns, the tool may not be effective in generating specialized content for “the real world.” The panel discussion on the topic “Algorithmic Bias: Competition, Consumer Protection, and Privacy”2 provided the example of an image-processing AI used to identify cancerous tumors. The AI tool failed to identify cancerous tumors outside of its training set because its pattern recognition had determined that, within the training dataset, the images containing rulers—used to measure the size of a cancer—were the ones with cancer. The panelists offered this example as a cautionary tale: while popular culture may think of AI as “advanced” or “sophisticated,” training on misrepresentative or inaccurate data can lead to a “dumb” or “biased” AI.
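The ruler anecdote is, at bottom, a spurious correlation in the training data. The sketch below is a hypothetical illustration on synthetic data (the feature names and numbers are invented): a “ruler present” flag perfectly tracks the cancer label during training but not at test time, so a model that takes the shortcut looks perfect in training and collapses to chance afterward.

```python
import random

random.seed(0)

def make_example(has_cancer, ruler_tracks_label):
    """A synthetic 'image' reduced to two features: a weak real signal and a ruler flag."""
    tissue_signal = (0.7 if has_cancer else 0.3) + random.uniform(-0.25, 0.25)
    ruler_present = has_cancer if ruler_tracks_label else (random.random() < 0.5)
    return {"tissue_signal": tissue_signal,
            "ruler_present": int(ruler_present),
            "label": int(has_cancer)}

# Training set: rulers appear only in the cancer images (the spurious shortcut).
train = [make_example(i % 2 == 0, ruler_tracks_label=True) for i in range(1000)]
# Test set: rulers appear at random, so the shortcut no longer works.
test = [make_example(i % 2 == 0, ruler_tracks_label=False) for i in range(1000)]

def shortcut_classifier(example):
    """A model that latched onto the ruler flag instead of the tissue signal."""
    return example["ruler_present"]

def accuracy(model, data):
    return sum(model(ex) == ex["label"] for ex in data) / len(data)

print(f"train accuracy: {accuracy(shortcut_classifier, train):.2f}")  # ~1.00
print(f"test accuracy:  {accuracy(shortcut_classifier, test):.2f}")   # ~0.50
```

The weak but genuine tissue signal is there to be learned, yet nothing in the training data penalizes taking the shortcut, which is why speakers urged understanding what data goes into a model before trusting its output.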

Throughout the conference, speakers stressed the importance of understanding what data goes into the AI models being used and of fact-checking their output, as clients will likely be held responsible for actions taken based on AI-generated recommendations. The panel on “Algorithmic Bias” also described how algorithms could perpetuate discrimination in areas such as organ donation. An AI tool may be trained on the previous decisions of thousands of doctors, who may have had their own biases in determining which patients most deserved an organ transplant. The AI tool may then pick up on these biases as indicative of the “correct” choice and perpetuate these decisions. Unlike an individual doctor, who may make different decisions each time based on individual patient differences, the AI tool may have an even greater discriminatory effect by consistently outputting the same biased results. While the algorithm may not have the intent to discriminate, and hospitals may feel there is a diffusion of responsibility when AI tools are used, AI can generate unlawful results that should not be relied upon.

As repeated several times throughout the day: while AI tools provide recommendations, it is still the human who makes the decision. One panelist pointed out that humans may find it easy to accept an AI’s recommendation without question and hand off this decision-making. However, as discussed during the “Agreements and Parallel Conduct (Robots Did It: Collusion, Coordination, or Neither)”3 panel, litigators may question whether companies can ignore how their AI models produce results and may argue that the choice to accept the AI’s recommendation leaves clients liable for the effects of those recommendations. Across panels, policymakers and law enforcers noted that there is no AI exception in the current antitrust laws. Regardless of what AI generates, by incorporating these tools into their work, humans remain liable for any errors or poor decisions that are made.

Given the possibility of inaccurate content, LLMs may find greater use in preliminary review or document generation that can then be vetted by a human litigator. Panelists of the “Using AI in Litigation, Audits, and Investigations”4 session discussed how AI could be used to “widen the discovery funnel,” allowing a quick first pass to identify the documents most likely to be important to litigators. However, human litigators would still need to review any AI-generated output for accuracy. Other, potentially lower-risk use cases were also raised: AI might be used for content creation that is not fact-based, such as drafting language for a will, though litigators should be prudent about submitting anything AI-generated to the Courts.

2. LLMs depend on the sharing of large amounts of information, which panelists cautioned could lead to accusations of collusive behavior.

As part of the iterative process, LLMs require a large amount of training data. The more training data an LLM can use, the more sophisticated its predictions can become. From a computer scientist’s perspective, the sharing of large amounts of data has allowed for great technological advancement. One professor noted that one of the advantages of an LLM like ChatGPT is that it was trained on the full open Internet, allowing it to incorporate the freely shared information of millions of human users.

However, from an antitrust perspective, such information sharing—particularly among competitors—may instead risk allegations of anti-competitive behavior. The panel discussion on the topic “Agreements and Parallel Conduct” grappled with whether competitors using the same or similar pricing algorithms and contributing their own pricing data could be distinguished from an explicit collusive agreement, and how these issues may play out in the Courts. The panelists drew parallels to an interesting case involving public information on gas prices in Germany, where prices increased after gas stations were required to publicly post their pricing information. The panel then asked: could the same happen with the use of pricing algorithms?

The nuances of how specific pricing algorithms are structured would likely affect how collusion cases are handled in the Courts. The panel discussed a hypothetical case in which each company designs its own pricing algorithm, collects information on market prices, and supplements it with its own historical pricing data. One litigator questioned how the use of AI pricing algorithms in this hypothetical scenario differs from performing market research—just on a larger scale. Panelists discussed what arguments would be effective in the Courts to demonstrate collusion in the perceived absence of any communication or agreement between competitors, and without proof of private data sharing.

The panelists then adjusted their hypothetical scenario: while the human competitors may not have any communication with each other, would it be possible for the AI competitors to communicate with each other? The panelists discussed the possibility that if enough competitors use AI over time, the separately designed algorithms could “learn” that they are competing against each other and adjust their pricing strategies accordingly—potentially picking up behavior that appears coordinated. If each company accepts its own AI’s recommended price, even without any communication with its competitors, this could potentially trigger an investigation into anti-competitive behavior.
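The dynamic the panelists described can be sketched in a few lines. The example below is a deliberately stylized, hypothetical illustration (the rules, names, and numbers are invented, and it abstracts away any actual machine learning) showing how two separately written pricing rules that react only to publicly observable prices can ratchet toward a price ceiling without any communication between the firms.

```python
# Hypothetical, deliberately stylized sketch: two independently written
# pricing rules that observe only last period's public prices. Neither firm
# shares data or code with the other, yet prices ratchet toward the ceiling.
COST, MAX_PRICE = 10.0, 20.0

def firm_a_rule(my_price, rival_price):
    """Firm A: probe upward whenever the rival is pricing at or above me."""
    if rival_price >= my_price:
        return min(my_price + 0.5, MAX_PRICE)
    return max(rival_price - 0.1, COST)  # otherwise undercut slightly

def firm_b_rule(my_price, rival_price):
    """Firm B: written separately, but follows a similar probe-upward logic."""
    if rival_price >= my_price - 0.01:
        return min(my_price + 0.5, MAX_PRICE)
    return max(rival_price, COST)  # otherwise match the lower price

price_a, price_b = 11.0, 11.0  # both start near the competitive level
for period in range(1, 26):
    # Each firm sees only last period's public prices, never the rival's code.
    price_a, price_b = firm_a_rule(price_a, price_b), firm_b_rule(price_b, price_a)
    if period % 5 == 0:
        print(f"period {period:2d}: A = {price_a:.2f}, B = {price_b:.2f}")
# Prices drift to the MAX_PRICE ceiling even though the firms never
# exchanged anything beyond publicly observable prices.
```

Whether conduct like this amounts to an agreement, and what evidence would prove it, is precisely the question the panel left open.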

Next, the panel discussed what might happen if, instead of designing its own pricing algorithm, each company hires a third party to provide price recommendations. If the third party uses data from multiple competitors to train its pricing algorithm, could the companies hiring it be liable for collusion—even if they did not have access to the training data? One litigator argued that a company’s ignorance of the data’s origin may not protect the market from coordinated pricing. Another echoed the earlier sentiment that just because an algorithm spits out a price does not mean that the firm needs to follow its recommendation. However, a panelist questioned why a firm would hire a third party if it did not intend to follow the recommendations, and whether the Courts could link this to an intent to engage in anti-competitive behavior. A stirring discussion ensued as the panel considered what factors would go into a firm’s decision to use a pricing algorithm, such as:

  • What did the humans using the AI tool know about what it was doing?
  • Did they question where the data came from, and would they still use it if they knew it included competitors’ pricing?
  • Was there an agreement among competitors not to share data but to share algorithms, which happened to provide similar recommendations?

To answer these questions, law enforcers argued that they would need to broaden the scope of discovery. While Plaintiffs or Agencies may previously have requested only sales data, they may now want to examine the programming and any data that could potentially feed into the pricing algorithm. Current antitrust laws have not established how AI tools and their training data would be discoverable and evaluated in the Courts.

Furthermore, litigators on both sides would likely need to understand how the AI tools were used in the alleged conduct to adequately explain their case. The panel on the topic “Using AI in Litigation, Audits, and Investigations” noted that future cases may require lawyers to wear multiple hats, supplementing their current legal knowledge with a baseline understanding of how their clients may be using AI tools. A lawyer who can identify potential antitrust issues in using these new technologies can better counsel their clients and avoid triggering investigations.

3. With court cases around the corner and questions unanswered, the private sector and law enforcement need to act fast and be specific to produce actionable legislation.

It remains to be seen how antitrust cases involving AI will play out. Both the technology and the legislation around AI are evolving, but keynote speaker Senator Richard Blumenthal (D-Conn.) noted that lawmakers and enforcers cannot wait to see how the technology develops before stepping in. Senator Blumenthal drew parallels to the slow legislative response to social media: by the time legislators passed regulations, the negative effects—especially on children—were already established. He urged Congress to draft and pass AI regulatory legislation, optimistically hoping that he will be able to unveil legislation later this year.

This sentiment was shared among several panelists working in public policy and industry. Panelists pointed to big technology companies such as OpenAI, Meta, Microsoft, Google, and Amazon working with Congress and the White House to develop policies for responsible AI. Conference speakers noted a shared interest in ensuring that legislation promotes innovation and reduces harm, enabling companies to be more certain about where the line of non-compliance lies. Well timed for the day’s conference, President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence had been issued just days earlier, on October 30.

However, as the computer scientists pointed out, effective legislation needs to be specific and actionable to change how AI tools are constructed. For example, lawmakers may try to prevent the spread of misinformation by requiring that algorithms not produce language that is similar to “bad facts.” However, such rules do not establish a guideline for what counts as “similar.” As the panel on the “Nuts and Bolts of AI Technology” pointed out, it is difficult to write a rule about similarity in meaning when AI tools rely on language representation: a one-word difference (“a bad lawyer” vs. “a bad litigator”) or even a two-word difference (“a bad lawyer” vs. “an ineffective lawyer”) can preserve the meaning while evading the rule, so the rule does not accomplish its intended goal. Similarly, the “Keeping AI Competitive: Assessing M&A Proposals”5 panel noted that because AI companies cross multiple industries, updates to the merger guidelines will need to account for deals that do not fit into simple horizontal or vertical boxes. Panelists discussed how to specify sizing guidelines for merging companies that may have a “small” market share in one dimension but are still “large” competitors in the AI field. Both the private sector and law enforcement will thus need to collaborate on establishing legislation that is specific enough to be implemented by AI-using companies.

The panel on the topic “Evolving Contours of Public Policy Around AI”6 broadened the discussion of AI legislation to an international context. Panelists noted that other countries are ahead of the US in regulating AI tools and thus may set the stage for how other jurisdictions approach AI regulation. Those interested in how future legislation may look in the US could start with the European Union AI Act, the world’s first comprehensive AI law and a likely precedent for other jurisdictions. As AI technology itself knows no borders, the panelists discussed that international cooperation will be needed to ensure that laws are consistent and respected across jurisdictions. An open question remains as to how to handle international parties that decline to sign onto such agreements. Policymakers may also be cautious about enforcing agreements that could disadvantage US technological innovation.

Until new legislation catches up with the evolving technology, US litigators may find themselves in uncharted territory when answering these questions. However, several law enforcers expressed the sentiment that although AI itself is a new technology, it still falls within the current antitrust framework. As panelists of the “Keeping AI Competitive” panel asked, is AI really different from any other industry in terms of antitrust enforcement? Prior technologies have produced a framework of laws and interpretations that can still be applied today. Time will tell how AI tools’ effects on competition, discoverability, and litigators’ own use of AI will play out in the Courts.

  1. Speakers Professor Laura Edelson of Northeastern University; Deputy Chief Technologist for the Federal Trade Commission Alex Gaynor; and Moderator Alexander Okuliar of Morrison & Foerster LLP.
  2. Speakers Michael Atleson, Attorney of the Federal Trade Commission; Professor Michael Kearns of the University of Pennsylvania; Professor Anja Lambrecht of the London Business School; Tamra Moore, Corporate Counsel for Prudential Financial; and Moderator Maggie Goodlander, Deputy Assistant Attorney General for the US Department of Justice, Antitrust Division.
  3. Speakers Jennifer L. Giordano of Latham & Watkins LLP; Megan E. Jones of Hausfeld LLP; Ryan D. Tansey, Chief of the US Department of Justice Washington Criminal I Section, Antitrust Division; Christopher Young of Joseph Saveri Law Firm LLP; and Moderator Joshua P. Davis of Berger Montague.
  4. Speakers Gwendolyn J. Lindsay Cooley, Assistant Attorney General of the Wisconsin Department of Justice; Tom Matzen of Matzen Consulting Group; Pardeis Heidari, Global Competition Counsel for Dell Technologies; Michael Horoho of FTI Consulting; and Moderator Gabrielle Kohlmeier, Market General Counsel for Verizon.
  5. Speakers Markus Brazill, Counsel to the Assistant Attorney General for the US Department of Justice, Antitrust Division; Krisha Cerilli, Deputy Assistant Director for the Federal Trade Commission, Technology Enforcement Division; Daniel P. Culley of Cleary Gottlieb Steen & Hamilton, LLP; Professor Daniel Francis of the New York University School of Law; and Moderator Diana L. Moss of the Progressive Policy Institute.
  6. Speakers Jane Horvath of Gibson, Dunn & Crutcher LLP; Kate McInnis, Chief Democratic Counsel for the US House Judiciary Subcommittee on the Administrative State, Regulatory Reform, and Antitrust; Haidee Schwartz, Associate General Counsel of Competition for OpenAI; Christopher Lewis of Public Knowledge; and Moderator Adam S. Cella, Senior Special Counsel for the US House of Representatives Committee on the Judiciary.
