
Konvoy Ventures is a thesis driven venture capital firm focused on the video gaming industry. We invest in infrastructure technology, tools, and platforms.

Copyright Law and GenAI Content

What do copyright law and generative AI content mean for the gaming industry?

Output: can AI-generated content be copyrighted?

Last Friday, the US District Court for the District of Columbia ruled that AI-generated art cannot be protected under copyright law because it is not created by a human, a requirement for copyright protection under current US law (Bloomberg). Previously, in March of this year (2023), the US Copyright Office provided its first guidance on the issue. The Office noted the nuance that although art generated by AI programs may not have human involvement, the development of the programs themselves may very well have “sufficient human authorship”; as such, those works can be protected under copyright law (Federal Register). The crux of the issue, under current law, is human involvement in the creation process of such art.

In 2011, a monkey stumbled across a wildlife photographer's camera and took a selfie, which the photographer subsequently published. PETA then brought a case against the photographer, arguing that the monkey was the rightful owner of the photo, but the judge's opinion noted that copyright law does not extend to animals. The US Copyright Office has since used this case as a core example of work by a non-human author that cannot be copyrighted; other examples include artwork by an elephant and driftwood shaped by the ocean (NPR).

Human involvement is required for work to be copyrighted, but the open question remains: how much involvement constitutes “sufficient human authorship”? We believe it will take more time and additional cases to settle this for the industry.

Another issue with the outputs of generative AI models is the range of potentially harmful ways that content could be used, from deepfakes leveraged for misinformation to impersonation and identity theft. With the speed of creation, open-source tools, and readily available trained models, nefarious actors have robust means to produce and distribute all sorts of fake content masquerading as truth.

As a society, we tend to put faith in images and video as truth because faking them has historically been difficult and labor intensive, yet that is changing with generative AI. In the future, we will need to rely more on reputable sources for information, potentially counterbalanced by AI content moderation solutions that detect generated content. Many tools and methods to detect generated content already exist today (GPTZero, Giant Language Model Test Room, “radioactive data”), but many researchers expect this to evolve into an arms race between generative models and the systems built to detect them (Wired).
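As a rough illustration of how likelihood-based detectors (in the spirit of GLTR-style tools) approach the problem, the sketch below scores a passage by its perplexity under a small language model. The model choice (GPT-2 via Hugging Face transformers) and the threshold are illustrative assumptions, not a production detector.

```python
# Minimal sketch: flag text that is "too predictable" to a language model.
# Real detectors combine many signals; the threshold here is arbitrary.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

sample = "The quick brown fox jumps over the lazy dog."
score = perplexity(sample)
# Lower perplexity = more predictable, which *may* indicate machine-generated text.
print(f"perplexity: {score:.1f} -> {'possibly generated' if score < 20 else 'likely human'}")
```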

Input: can generative AI models train on copyrighted data?

The cases outlined in the previous section cover the outputs of AI models, but an arguably more fiercely debated topic is the inputs used to train them. Most AI companies scrape the internet for text and images to train their models. OpenAI’s ChatGPT was trained on 570GB of clean text data from sources across the internet (OpenAI). Stability AI’s Stable Diffusion v2 was trained on a subset of LAION-5B, a dataset of 5.85 billion image-text pairs (Hugging Face). Training these models on large amounts of data is instrumental to their efficacy.
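For a sense of what “training on a subset of LAION” looks like in practice, here is a minimal sketch of streaming image-text metadata with the Hugging Face datasets library. The dataset id and column names are assumptions for illustration; in a real pipeline the images themselves are downloaded separately (e.g., with a tool like img2dataset) before being fed to the trainer.

```python
# Rough sketch of streaming image-text pairs; LAION releases are distributed
# as URL/caption metadata rather than raw images.
from datasets import load_dataset

# Stream rather than downloading billions of rows up front.
ds = load_dataset("laion/laion2B-en", split="train", streaming=True)  # assumed dataset id

for i, row in enumerate(ds):
    url, caption = row["URL"], row["TEXT"]  # assumed column names
    # A real pipeline would fetch the image at `url`, filter by resolution or
    # aesthetics, then feed (image, caption) pairs to the diffusion trainer.
    print(caption[:80])
    if i >= 4:
        break
```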

The vast majority of popular generative AI models today do not compensate the original producers of the content used as training data, a practice many artists have lamented and protested. In January of this year (2023), three artists filed a lawsuit against Stability AI, Midjourney, and DeviantArt, claiming these companies infringed on artists’ rights by training their models on their work without the artists’ consent (The Verge).

There have also been a number of high-profile cases of traditional media and publishing companies suing generative AI companies for leveraging their content in training; for example, the New York Times is considering legal action against OpenAI (NPR), and Getty Images sued Stability AI earlier this year for copyright infringement (Getty Images). A major concern in these cases is whether the generative models are, in effect, competing with the businesses producing the content they are trained on. For example, ChatGPT could answer questions based on original reporting from the New York Times (which reduces traffic to the original source of the news), and users may find images generated by Stable Diffusion a suitable alternative to licensing them directly from Getty Images.


There is precedent here: most music sampling and potentially derivative movie concepts are allowed under fair use. Copyright law should promote the development of new ideas; protection of ideas (intellectual property) is important to provide a path to reward for creators, but it should not limit the development and commercialization of new ideas. We expect the majority of copyright infringement cases against generative AI companies to fail in the courts, with their activities protected primarily by the “transformative” character of the text and images they produce.

What does this mean for the gaming industry?

Though many 2D games can leverage existing generative AI content, a large share of games run in 3D environments. According to a 2022 survey by SlashData, “47% of developers use 3D game engines; while 36% use 2D game engines,” and that gap continues to widen in favor of 3D development (SlashData). Though much mobile development is done in 2D, the majority of console and PC games are 3D.

3D game environments require far more complex objects than the comparatively simple 2D images that most generative AI models can produce today. 3D game assets (avatars, weapons, tools, map items, trees, etc.) need specified meshes, materials, textures, and rigging for animation. Though 2D images can be leveraged as textures on 3D models, that is an incomplete solution. We know of a number of startups working on 3D asset generation, but the main issue is that the training data is not available.
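To make the gap concrete, here is a minimal sketch of the pieces a single usable game asset bundles together; the field names are illustrative, not any particular engine's schema.

```python
# Illustrative data structure: a game-ready 3D asset is several linked
# components, not a single bitmap.
from dataclasses import dataclass, field

@dataclass
class Mesh:
    vertices: list[tuple[float, float, float]]  # 3D positions
    faces: list[tuple[int, int, int]]           # triangle indices into vertices
    uvs: list[tuple[float, float]]              # texture coordinates per vertex

@dataclass
class GameAsset:
    name: str
    mesh: Mesh                                                 # geometry
    textures: dict[str, str] = field(default_factory=dict)     # e.g. {"albedo": "sword_albedo.png"}
    material: dict[str, float] = field(default_factory=dict)   # e.g. {"roughness": 0.4}
    rig: list[str] = field(default_factory=list)               # bone names for animation

# A 2D generative model covers at most one texture slot; a complete asset
# still needs coherent geometry, UVs, materials, and a rig.
sword = GameAsset(
    name="iron_sword",
    mesh=Mesh(vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)], faces=[(0, 1, 2)], uvs=[(0, 0), (1, 0), (0, 1)]),
    textures={"albedo": "iron_sword_albedo.png"},
    material={"roughness": 0.4, "metallic": 0.9},
    rig=["grip", "blade_tip"],
)
print(sword.name, len(sword.mesh.faces), "faces")
```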

The proliferation of generative image models was made possible because 2D images are readily available on the web and can be easily accessed by crawlers. Conversely, 3D models are not yet widely used on the web, and there are only a few open-source 3D content datasets, most with fewer than 100k objects (3DVF, ShapeNet, and OmniObject3D, for example, though Objaverse-XL was recently released with 10 million objects). The largest 3D content stores (often in the millions or billions of unique 3D assets) are unevenly distributed and owned by the world’s largest gaming companies.
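For anyone who wants to experiment with the open datasets that do exist, the sketch below pulls a small slice of Objaverse using its companion Python package (pip install objaverse); the function names follow the project's published examples and should be treated as assumptions rather than a definitive API.

```python
# Hedged sketch: download a handful of objects from the Objaverse dataset.
import objaverse  # assumed package/API per the project's published examples

uids = objaverse.load_uids()                      # object IDs in Objaverse 1.0
annotations = objaverse.load_annotations(uids[:5])  # per-object metadata

# Fetch a few GLB files locally; expected to return {uid: local_path}.
objects = objaverse.load_objects(uids=uids[:5])
for uid, path in objects.items():
    print(uid, path)
```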

We expect that AAA studios and publishers with large 3D datasets will be highly protective of their asset libraries but will likely leverage that content to train and develop proprietary, in-house models for 3D content generation. This protectionist approach will also help them avoid potential copyright infringement issues.

Takeaway: We expect more, and increasingly intense, legal battles over copyright infringement by generative AI companies in the years to come. These models, and their use by the public as well as within private organizations, will have a significant impact on existing media and publishing incumbents; but we also see a strong “fair use” argument that generated content tends to be “transformatively” different from the underlying work these models are trained on. Within the gaming sector, AAA publishers have a unique advantage as the sole holders of the troves of 3D content that will be necessary to train performant 3D generative models. The winners will be the publishers that can adapt quickly to leverage this internal treasure trove of content.

