GenAI is proving to be an incredibly powerful tool. As we discover its capabilities, we must also anticipate its risks and challenges.
A recent court ruling illustrates the growing debate around AI-generated artwork and whether it should be eligible for copyright protection.
Governments and companies are increasingly focused on developing data governance systems that enable AI progress while minimizing potential harms.
Expected to pass this year, the EU Artificial Intelligence Act could be more expansive in its extraterritorial reach, and stricter in the penalties it imposes, than even the GDPR.
Courts appear to say yes, but only when ‘authorship’ of the work can be attributed to a human.
Most systems do not protect sensitive information used in prompts, and users bear most of the risk of using generative AI systems and outputs.
Users likely own the outputs from generative AI tools but probably cannot protect them as IP, and they could be held liable for using outputs that infringe the IP of others.
Companies should ensure they do not infringe on the copyrights of underlying works that are used to train generative AI tools.
Companies should develop detailed policies for how their organizations may use generative AI systems.
Use existing approaches to get started, but prepare for coming regulations and complexities in areas such as data on minors, biometrics, and facial recognition.
The ability to protect confidential information and avoid adverse impacts from using AI in HR processes hinges on an understanding of employment law.
To get a sense of how generative AI could affect legal professionals, we asked a well-known GenAI model to do three everyday tasks that are typically done by junior lawyers.