Generative AI has emerged as a technology disrupting the very fabric of what it means to create. Systems like the highly publicized ChatGPT have instantly become powerful everyday tools for people from all industries.
Yet Generative AI goes much deeper than text prompts. Take that digital painting you just scrolled past on Instagram: did an artist create it, or was it an AI art generator like Midjourney or Stable Diffusion? Can you even tell anymore?
The ability to distinguish between AI and "human authorship" has become a hotly contested topic, with even the US Copyright Office recently weighing in, ruling that works created solely by AI are not eligible for copyright protection.
The ruling has put the wheels in motion for what will inevitably be a long-drawn-out examination of copyright law across jurisdictions as it pertains to all forms of AI-generated content. What constitutes an original work? Will the analog-era rules of derivative versus transformative use even matter in the end?
A highly public display of these loose interpretations was the recent AI-generated deepfake song "Heart On My Sleeve," featuring cloned voices of Drake and The Weeknd. The song took the world by storm over the course of a few days before their label, Universal Music Group, unleashed its army of DMCA and cease-and-desist lawyers and had the song pulled from all streaming sites.
Whether it is an image, an audio clip, or even a video, it is becoming increasingly difficult to discern whether something was created by a human or by Generative AI – and we have not even gotten to 3D digital content yet, which is on a fast-follow trajectory to further muddy the waters.
Before getting into the potential implications of Generative AI and 3D content creation, it is important to take a side-step to understand the relationship between 3D digital assets and user-generated content (UGC).
The term UGC is not new. However, it has been growing in popularity in recent years as its application has broadened. Historically, UGC has been associated with the gaming industry – fans and players of popular games developing new content, designs, gameplay, and more – hence the "user" in UGC.
While that connection will always be important, the term UGC has been adopted across other industries, so it is important to make the distinction. In fact, most digital media we consume in our daily lives is more often than not some form of UGC. Scrolling through TikTok, watching a YouTube tutorial, making a comment on one of Elon Musk’s Twitter posts – you guessed it, they are all forms of UGC.
Taking that step back, the world of 3D UGC has grown significantly over the past decade, bringing both opportunities and challenges for the digital creator economy. Global spending on virtual goods surpassed $100B in 2021, and that number continues to grow rapidly as more games, platforms, and social experiences break down their walled gardens to not only embrace but actively encourage UGC.
The rise of Generative AI technologies has been like rocket fuel, increasing accessibility and democratizing creativity to the point where anyone with a computer and the ability to enter a text prompt can become a creator.
Platforms like Roblox, Minecraft, and Fortnite have enabled players to create, share, and even monetize their 3D digital art. However, this growth, in particular the selling of 3D digital art for profit, has exposed the limitations of traditional content moderation methods – manual review and basic automated systems that are ill-equipped to manage the complexity of 3D. These methods struggle to efficiently identify and assess the vast array of potential issues, such as copyright infringement, intellectual property theft, and inappropriate or offensive material.
Coming full circle, why does all of this matter? Generative AI is disrupting every traditional creative practice on a trajectory that no one could have anticipated. Whether Generative AI ends up being the hero or the villain of this story remains to be seen, but the facts are these: it will continue to usher in a new age of creators, allow more content to be built than can be humanly moderated or authenticated, and hand bad actors ever more powerful tools to be, well, bad.
So how do we stay ahead? How do we ensure artists are protected and rightfully attributed for their creativity? Believe it or not, the path to fast and scalable 3D moderation and authentication is not more human intervention but the unlocking of AI-powered technologies.
Yes, the same AI that is so often depicted as doom and gloom presents a promising path forward to enhance the speed, accuracy, and adaptability of 3D content moderation, and it has motivated talented teams like Secur3D to pursue better ways of approaching old problems. Identifying the shortfalls in creator protection and asset moderation now will only help foster safer, thriving digital communities in the future.
Otis Perrick is the CEO of Secur3D. Secur3D software protects 3D digital art by moderating and authenticating 3D assets with accuracy and efficiency to stop theft and intellectual property (IP) infringement, while saving customers time and money by reducing human moderation efforts.
Leveraging artificial intelligence (AI) and proprietary technology, Secur3D analyzes and compares the unique 3D mesh structure and surface textures of an asset against asset libraries in real time, bridging the alarming gap in content moderation, which today is largely confined to the 2D realm.
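To make the idea of mesh-level comparison concrete – and to be clear, this is an illustrative sketch, not Secur3D's proprietary method – one classical approach from the shape-retrieval literature is the D2 shape distribution: a histogram of distances between randomly sampled points on a model, which is invariant to rotation and (after normalization) to scale. Descriptors like this can then be matched against a library with a simple similarity score. All function names and parameters below are hypothetical.

```python
# Illustrative sketch: comparing 3D meshes with a D2 shape distribution.
# NOT Secur3D's actual method -- just a classical, simple shape descriptor.
import numpy as np

def d2_descriptor(vertices: np.ndarray, n_pairs: int = 10000,
                  bins: int = 64, seed: int = 0) -> np.ndarray:
    """Normalized histogram of distances between random point pairs."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(vertices), n_pairs)
    j = rng.integers(0, len(vertices), n_pairs)
    d = np.linalg.norm(vertices[i] - vertices[j], axis=1)
    d = d / d.max()  # normalize so the descriptor is scale-invariant
    hist, _ = np.histogram(d, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two descriptors (1.0 = identical)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A scaled copy of the same shape still matches, because the
# descriptor normalizes away scale, rotation, and vertex order.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)
print(similarity(d2_descriptor(cube), d2_descriptor(cube * 3.0)))
```

A production system would of course go much further – sampling points on mesh faces rather than vertices, combining geometry with texture features, and using learned embeddings – but the sketch shows the core moderation pattern: reduce each asset to a compact fingerprint, then search the library for near-duplicates.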
Secur3D was developed by a team of experts with over a decade of experience building 3D assets and tools, games and apps, and enterprise software, with a mission to safeguard and protect creators and all original works.
Image: Generated by Midjourney