Insight
January 16, 2024

What’s Next for AI? Six Areas to Watch in 2024

GenAI emerged as a transformative force for businesses in 2023. Here’s how AI and the legal landscape could evolve this year.

Generative AI (GenAI) surged to the forefront of corporate agendas and public policy debates last year, promising to boost productivity and innovation. What’s in store for AI in 2024?

Companies will increasingly turn to new AI tools, offering enormous potential economic benefits across the world. GenAI could add as much as $4.4 trillion in annual value to the global economy, which — to put that in perspective — would exceed the size of the UK’s economy, according to consulting firm McKinsey. 

Novel AI tools that can identify new materials and compounds could dramatically accelerate scientific discoveries, such as drug development and the creation of materials for use in batteries, solar cells, and other clean technologies. 

Continued innovation will likely help sustain a venture-capital gold rush into AI startups this year. The rapid evolution of AI will also lead to more legal cases, including those concerning intellectual-property rights of AI-generated content. Courts will begin to weigh in on some of these issues in 2024, but legal clarity will take years to establish. In the meantime, businesses should prepare for the expected passage of the EU AI Act, a set of AI rules that is global in scope. 

Without a crystal ball, we cannot predict exactly how AI developments will unfold. But here is what we expect might play out in six key areas of AI in 2024:

1. A surge in new GenAI tools will carry licensing challenges.

Many companies began using GenAI chatbots over the past year to capitalize on their human-like ability to answer questions, write code, and review documents. This year, expect an influx of new GenAI tools, some of which build on preexisting technologies. Imagine all the different ways a GenAI chatbot could be incorporated into search engines, browser extensions, and word processors.

Businesses looking to acquire these new tools will need to navigate some risks that can arise from AI licensing deals. AI tools are often built on third-party data, which can be associated with intellectual-property and privacy risks. 

The bottom line is that companies seeking to integrate these AI products need to understand what data these AI systems use and how they use it.

2. Pressure on patent laws will mount.

New AI tools are making discoveries beneficial to the life sciences industries, from desirable protein-binding target sites to designs for novel chemical compounds and other therapeutic materials. These sorts of technologies could revolutionize drug development, but current judicial precedent might hold back some of the potential gains.
 
Patent law currently does not allow patenting of subject matter for which AI is the sole inventor. In Thaler v. Vidal, a federal appeals court determined that an AI platform could not be listed as the sole inventor because AI was not a “natural person,” or human. 
 
Some drug makers might not use AI to invent new products unless the law changes and clearly allows them to patent their AI-generated work. Patents are particularly crucial for drug companies because drugs are a reverse-engineerable technology. If company A does not have a patent on its drug, company B could reverse engineer — or replicate — company A’s drug once it enters the market. 
 
Companies in other industries, such as software, are less likely to deal with concerns of reverse engineering but have always faced a difficult choice between seeking patent protection and maintaining their innovations as trade secrets. The current patent laws might tip the balance and encourage more software companies to keep their AI-generated inventions as trade secrets rather than patenting them.

3. Copyright cases will proliferate.

Courts will begin to consider cases that could result in new precedents for how copyright law applies to AI-generated work and to AI tools. Legal clarity will develop over several years, with this year’s cases marking only a preliminary step in that process.
 
Courts will start to hash out to what extent individuals and companies can copyright AI-generated work. Similar to patent law, the courts have confirmed that creative subject matter created solely by AI cannot be copyrighted; however, works that include AI-generated material appear to be copyrightable in cases in which at least some of the creativity can be attributed to humans. That could include situations in which an artist modified material initially generated by AI or cases in which a musician used AI as a tool to enhance their own work. 
 
Courts will also analyze the degree to which AI tools such as GenAI chatbots can use third-party intellectual property. Copyrighted material can be used without the copyright owner’s permission in certain circumstances under a legal provision called “fair use.” GenAI chatbots are often trained on third-party information from websites, books, and newspapers, raising questions about the boundaries of fair use.

4. Regulations will begin to take shape.

EU lawmakers are on the verge of passing the EU AI Act, a set of rules governing the creation and deployment of AI systems that will have an impact across the globe. Although the US is not likely to pass its own version of the EU AI Act, many US companies, including those that do business in the EU, will be subject to the EU’s regulation. And in the US, legislative, regulatory, and enforcement efforts will impact the development and deployment of AI.

Over the past year, policymakers debated how to update the draft EU AI Act to address the sudden emergence of GenAI systems, delaying the regulatory process and highlighting the rapid evolution of AI. The regulation of foundation models and GenAI systems remains one of the main stumbling blocks to passing the EU’s regulation.

The EU AI Act is written in broad terms intended to offer flexibility as AI evolves. Nevertheless, regulators might struggle more generally to keep up with the pace of technological change associated with AI. Consider the possibilities of quantum-backed AI, which is on the horizon. 

Emerging technologies often develop faster than the law, with data privacy offering a recent example. The surge in big data generated through the online economy in the 2010s pushed the boundaries of preexisting privacy laws and heralded the emergence of the GDPR (the EU General Data Protection Regulation) as a de facto global standard.

Companies will need to begin preparing for global AI regulatory and enforcement initiatives. As preliminary steps, businesses should identify what AI systems they use and understand the level of risk those systems pose under the EU AI Act’s risk tiers. They should also establish robust AI governance within their operations to institutionalize ethical AI principles for the development and deployment of AI systems.

5. Data privacy and bias challenges will intensify.

As companies build and deploy new AI systems, privacy and bias challenges will become even more prominent. Developers of AI systems, in particular, will focus on how to create unbiased, secure AI tools.

Privacy concerns can arise from a GenAI model accessing personal information through a prompt and using such information without consent. Data bias can stem from cases in which AI systems train on data that reflects existing social biases.

Last year, several federal agencies, including the Consumer Financial Protection Bureau, the Department of Justice, and the Federal Trade Commission, issued a joint statement expressing their concerns about AI bias, discrimination, and privacy risks. The FTC has intensified its regulation and enforcement in the context of AI issues.

Companies can take practical steps to mitigate these risks. They can, for instance, tighten controls on data flows to prevent personal information from leaking into AI models. Where personal information is used to train AI models, the data should be carefully vetted to confirm that it is actually necessary and to ensure the quality and diversity of the datasets.

6. Investment in AI startups will remain hot.

Investors poured money into AI startups last year, bucking a broader slowdown in the tech industry. Global venture-capital investment in GenAI startups totaled $23.2 billion from January through mid-October 2023, up about 250% from 2022’s full-year total, according to PitchBook.

The gold rush into AI startups is poised to continue this year. The reason is simple: investors chase innovation, and AI is set to become an even stronger and more versatile tool.


This informational piece, which may be considered advertising under the ethical rules of certain jurisdictions, is provided on the understanding that it does not constitute the rendering of legal advice or other professional advice by Goodwin or its lawyers. Prior results do not guarantee a similar outcome.