The Threats and Opportunities of AI-Generated Content
When fake, AI-generated images of Taylor Swift spread virally across social media, the incident highlighted a concerning reality: AI-generated content has evolved from a novel technological feat into a force threatening public trust, creative authenticity, and social stability. Generative AI's capacity to produce text, images, and audio at scale is changing industries, education, and public discourse. This new capability brings a host of concerns: the potential displacement of human workers, the erosion of intellectual property rights, and the easy generation of misinformation. Educational institutions face rising academic dishonesty, artists face losing business to AI-generated images, and the public faces a flood of misinformation and a breakdown of trust. As AI-generated content proliferates across the internet, understanding its real impact on trust, authenticity, and industry becomes essential.
The modern state of AI descends from the development of many different machine learning architectures. In 2014, the introduction of Generative Adversarial Networks (GANs) marked a pivotal moment, enabling AI systems to create new content modeled on existing datasets [1]. This was followed by the development of diffusion models in 2015, which, combined with large training datasets and advanced encoding, made high-resolution image generation possible by 2022 [2]. The CLIP model further advanced AI capabilities by relating text to images and facilitating three-dimensional "thinking" [1]. Another important milestone was the 2017 introduction of the Transformer architecture, which led to powerful models like GPT-4 [3] and revolutionized natural language processing and generation. These advancements laid the groundwork for artificial intelligence's rapid growth and widespread impact.
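To make the adversarial mechanism concrete, here is a minimal, illustrative sketch of a GAN training loop in PyTorch. It imitates a toy 2-D distribution rather than images, and every network size and hyperparameter is an arbitrary stand-in rather than anything from the cited work [1].

```python
import torch
import torch.nn as nn

latent_dim = 8

# Generator maps random noise to candidate samples; discriminator scores
# how "real" a sample looks. Both are deliberately tiny toy networks.
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    # "Real" samples drawn from the distribution the generator must imitate.
    real = torch.randn(64, 2) * 0.5 + 2.0
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator update: label real data 1 and generated data 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: adjust weights so the discriminator outputs 1 on fakes.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The same push-and-pull drives image-generating GANs; only the networks and the training data change.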
The proliferation of AI has had both positive and negative effects on public creativity. The industrial capacity of AI models lets anyone generate content at a pace that undermines traditional artistic processes. Jiang et al. [4] comment that "It is now possible for anyone to create hundreds of images in minutes…and [start] a project for a successful Kickstarter campaign in a fraction of the time it takes for an actual artist". This speed could have negative economic impacts on creatives. Concerns also exist that the data used to train many common AI models contains copyrighted material. Large datasets like LAION-5B, which relies heavily on user-generated content from online platforms [5], have sparked controversies over copyright infringement and the ethical use of artists' work. However, some artists view AI as a beneficial tool to augment their work rather than replace it. De Cremer et al. [6] suggest that AI-augmented content creation may support creatives and lower barriers to entry into the art world.
In the education sector, large language models (LLMs) have shown potential to stimulate critical thinking, automate routine tasks, and assist in constructing arguments and assessment questions [7]. They can serve as study tools, provide homework assistance, and aid in creative writing [8]. However, these uses coexist with concerns about potential inaccuracies, plagiarism, and the loss of essential skills like information synthesis [8].
Regarding AI misuse, malicious actors now have new tools for generating misinformation. For instance, deepfakes, now augmented by AI, can spread defamatory content across the internet rapidly [9]. Real-world examples, such as the spread of AI-generated misinformation during Indian elections [10], present a threat to public trust. Fortunately, there is growing interest in mitigation strategies to maintain an informed public. Data provenance, which involves embedding information about a work's origin, is increasingly favored by policymakers to combat misinformation [11]. A key initiative in this area is the Coalition for Content Provenance and Authenticity (C2PA), an organization of major brands including Adobe, Intel, and Microsoft that develops standards for embedding creation details, editing history, and usage rights into images as provenance data [12]. However, privacy concerns surround data provenance initiatives because governments could abuse such systems to identify would-be anonymous dissenters [13]. Although C2PA and other data provenance projects are important first steps in combating AI disinformation, more comprehensive frameworks are being developed to address the difficulties presented by this quickly developing technology.
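The core provenance idea is easy to sketch: bind a record of an asset's origin to a cryptographic hash of its bytes, then sign the record so tampering is detectable. The snippet below is a simplified toy, not the C2PA manifest format; the field names and the HMAC demo key are invented, and real C2PA signing uses certificate-based signatures [12].

```python
import hashlib, hmac, json

SIGNING_KEY = b"demo-key"  # hypothetical stand-in for a real signing credential

def make_manifest(asset_bytes: bytes, creator: str, tool: str) -> dict:
    # Bind the provenance record to the exact bytes of the asset.
    record = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "creator": creator,
        "generator_tool": tool,  # e.g., which AI model produced the asset
        "edit_history": [],
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(asset_bytes: bytes, manifest: dict) -> bool:
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        sig, hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest())
    ok_hash = claimed["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()
    return ok_sig and ok_hash

image = b"...image bytes..."
manifest = make_manifest(image, creator="example-studio", tool="example-model-v1")
assert verify(image, manifest)              # the untouched asset verifies
assert not verify(image + b"x", manifest)   # any modification breaks the binding
```

Because the signature covers the asset hash, any edit to the image or its history invalidates the manifest, which is what makes embedded provenance useful for tracing AI-generated media.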
Additional regulatory efforts are beginning to take shape to address the challenges posed by AI-generated content. In the United States, President Biden's executive order on AI aims to increase transparency and mandate disclosure of relevant information [14]. Similarly, the European Union's AI Act seeks to establish a comprehensive regulatory framework for AI technologies [11]. As AI continues to grow, the combined efforts of governments, industry leaders, and legislators will be critical to mitigating its misuse and preserving public trust in the digital age.
AI is developing rapidly, and companies like OpenAI, Anthropic, StabilityAI, and ElevenLabs stand at the forefront of both innovation and its consequences. Concerns about misinformation are especially acute in the image and audio space. ElevenLabs, which makes AI text-to-speech models, began taking steps to suppress abuse in 2023 [15]. Meanwhile, AI is being integrated across industries such as television, where it was used to make the opening sequence for Marvel's Secret Invasion [4]. Public opinion remains divided, with some viewing AI as a tool for innovation and others perceiving it as a threat to jobs and creativity.
The discourse regarding generative AI has established a clear divide between its potential benefits and associated risks. As AI technology becomes more sophisticated, it has increasingly permeated various sectors, from media to education. AI-generated content is a detriment to society because it threatens public trust in online information, harms creators’ livelihoods, undermines academic integrity, and helps perpetuate harmful societal biases.
AI-generated content threatens the public's trust in institutions by facilitating the spread of misinformation. As advances in AI allow synthetic content to closely replicate genuine content, the potential for spreading misinformation escalates. A report by the Electronic Privacy Information Center illustrates this risk, warning that "AI-generated images and videos provide several ways for bad actors to impersonate, harass, humiliate, exploit, and blackmail others." The report also describes how AI misinformation can harm the public in many ways, including psychologically and reputationally [9]. These harms are already evident. In February 2023, a manipulated audio clip falsely suggesting a Nigerian politician was planning to rig an election circulated on social media [16].
Similarly, smear campaigns targeting U.S. politicians have used AI-generated images and videos to erode confidence in public leaders [16]. Beyond governmental institutions, AI content also harms celebrities and creators. Anonymous users generated and distributed sexually explicit photos of the pop singer Taylor Swift using deepfake technology, damaging her reputation [17]. As AI tools proliferate, the public will find it harder to distinguish between truth and fiction, which could undermine trust in media and digital platforms.
AI-generated content also harms creators' livelihoods and infringes upon copyrights. Deep-learning companies have often trained their models on artists' works without consent, allowing users to copy art styles and saturate the market with AI-generated imitations. Artists like Greg Rutkowski have expressed frustration with image models replicating their style without permission [18]. The proliferation of AI content has sparked outrage among other creators, like musician Nick Cave, who believe art cannot be mimicked and that algorithms cannot replace creativity [19]. AI text plagiarism even forced Clarkesworld, a science fiction magazine, to temporarily close submissions due to the influx of AI-generated short stories [20]. The saturation of synthetic art devalues creative works and threatens livelihoods. The erosion of creative integrity harms individual creators and could have far-reaching effects on industries that rely on original content.
The impacts of AI-generated content in the creative sphere parallel its effects in academia. The ability of large language models to generate massive amounts of coherent text undermines academic rigor by assisting students in plagiarism and cheating. Because LLMs can only produce coherent text by training on original works, LLM-generated content may itself infringe intellectual property. Recent studies have found that GPT-4 and similar LLMs can perform well on academic exams while producing text that is harder to detect than outright copying [7]. AI plagiarism is growing especially quickly in higher education and scientific literature. An investigation by Liang et al. [21] found that across roughly 2,000 abstracts of computer science papers, nearly 18% of sentences contained probable AI-generated content. AI-assisted academic misconduct harms educational ethics and undermines the value of genuine knowledge.
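For intuition about how such corpus-level fractions can be estimated without classifying any single sentence, here is a toy mixture-model sketch. The marker words, their frequencies, and the observed counts are all invented for illustration; Liang et al. [21] fit a far richer statistical model over many words, so this is only a loose analogy to their approach.

```python
import math

# Invented per-sentence frequencies of "marker" words in known-human
# versus known-AI text (purely illustrative numbers).
marker_words = {
    "delve":    {"human": 0.001, "ai": 0.020},
    "pivotal":  {"human": 0.002, "ai": 0.015},
    "showcase": {"human": 0.003, "ai": 0.012},
}
# Invented observed counts in a mixed corpus of 10,000 sentences.
observed = {"delve": 45, "pivotal": 52, "showcase": 48}
n_sentences = 10_000

def log_likelihood(alpha: float) -> float:
    # If a fraction alpha of sentences is AI-generated, each word's expected
    # frequency is a blend of its human and AI rates.
    ll = 0.0
    for word, freq in marker_words.items():
        p = alpha * freq["ai"] + (1 - alpha) * freq["human"]
        k = observed[word]
        # Binomial log-likelihood of the observed count (constants dropped).
        ll += k * math.log(p) + (n_sentences - k) * math.log(1 - p)
    return ll

# Grid search for the mixing fraction that best explains the counts.
alpha_hat = max((a / 100 for a in range(101)), key=log_likelihood)
print(f"estimated AI-generated fraction: {alpha_hat:.2f}")
```

The estimate emerges from frequency shifts across the whole corpus, which is why such methods can report an aggregate percentage even when no individual sentence can be labeled with confidence.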
Generative AI also perpetuates the social biases present in its training data. Despite efforts to reduce bias in AI systems, biases in training datasets inevitably make their way into AI-generated content. Even though reducing prejudice and improving safety are primary goals of the GPT series of LLMs [3], bias persists. Analysts at Bloomberg News who studied the image model Stable Diffusion found that it often reflects common racial and gender biases, such as associating certain skin tones with certain professions or social roles. The expansion of AI-generated content can entrench these biases and undermine fair treatment in the cultural zeitgeist. Despite the significant concerns regarding AI-generated content, however, there are still reasons to support generative AI.
While many creators oppose AI art, others believe it can augment the creative process rather than replace artists outright. A common school of thought is that by automating repetitive tasks, artists free themselves to focus on higher-order work. Eapen et al. [22] argue that "generative AI tools can solve an important challenge faced in idea contests: combining or merging a large number of ideas to produce much stronger ones." They also assert that generative AI can develop previously unknown solutions. Yale experts have likewise noted that as generative models advance, they become more capable of creating "fundamentally new" art paradigms and eliminating so-called "menial" tasks [23]. Rather than supplanting artists, generative AI may become another tool for creators and can democratize creativity for aspiring artists.
Further evidence that generative AI works as a tool emerges across different markets. Generative AI offers a solution to repetitive jobs through automation. In a case study of US K-12 teachers, participants reported better productivity when using generative AI to brainstorm ideas and prepare teaching materials [7]. The benefits also extend to communities as a whole. Small businesses, which often must compete against larger enterprises' workloads, have reported empowerment and enhanced decision-making as a result [24]. Lowering costs for tasks like marketing, content creation, and customer service allows small businesses to compete more effectively. Generative AI can level the playing field for current and future workload demands.
In addition to helping teachers increase productivity, generative AI can enhance the educational process. While AI models are not perfect, they can enable a more focused educational model for students. The fact that a language model can complete academic assignments suggests that assignments must test more than information recall to remain relevant [7]. In one scholarly study, several educators proactively used LLMs like ChatGPT to encourage higher-quality responses from their students [7]. Modern education may need to adapt to the changing landscape of teaching tools.
While AI-generated content offers potential benefits for creativity and education, its risks and drawbacks are significant and far outweigh those benefits. The threats to public trust, creators' livelihoods, academic integrity, and social equality are serious concerns. As generative AI advances, the public must implement strategies to mitigate its negative effects. Ultimately, responsibly developing and using AI-generated content must be a top priority to ensure AI benefits society without compromising its institutions.
Integrating generative AI into our daily lives may democratize artistry and improve productivity. However, the enthusiasm for this new technology overlooks AI’s limitations, including the inability to produce original work. Recognizing the constraints of AI is essential to addressing the greater implications of generative AI on society.
While AI can assist creativity by augmenting artwork, it cannot replace human creativity. Generative models can only produce an amalgamation of their inputs rather than creating truly original content. For example, the paper on the Latent Diffusion model that developed into Stable Diffusion describes how high-resolution images are compressed into a format the model can work with; the model combines and transforms this data but cannot create images fully independent of what it was trained on [2]. This undercuts the notion of creative augmentation: a model that cannot create original content cannot make original contributions. In line with this logic, the United States Copyright Office rejected an artist's registration for a graphic novel augmented by a generative model, setting the precedent that AI output is not original enough to copyright [25]. This evidence suggests that AI tools, even when humans guide them, ultimately devalue human creativity.
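The pipeline the paper describes can be summarized in a short schematic: compress images into a latent space, denoise latents step by step, then decode back to pixels. The modules below are untrained placeholders that only show the data flow; the shapes, update rule, and layer sizes are illustrative guesses, not the actual architecture of Stable Diffusion [2].

```python
import torch
import torch.nn as nn

# Placeholder autoencoder: 512x512x3 images <-> 64x64x4 latents.
encoder = nn.Conv2d(3, 4, kernel_size=8, stride=8)
decoder = nn.ConvTranspose2d(4, 3, kernel_size=8, stride=8)
denoiser = nn.Conv2d(4, 4, kernel_size=3, padding=1)  # stand-in for the U-Net

# During training, real images are compressed into this latent space,
# which is why generation recombines learned structure from the data.
latent = encoder(torch.randn(1, 3, 512, 512))

def generate(steps: int = 50) -> torch.Tensor:
    # Sampling starts from pure noise in the *latent* space, not pixel space.
    z = torch.randn(1, 4, 64, 64)
    for t in range(steps):
        predicted_noise = denoiser(z)    # real models also condition on t and a text prompt
        z = z - predicted_noise / steps  # crude stand-in for a DDPM/DDIM update rule
    return decoder(z)                    # map the denoised latent back to pixels

image = generate()
print(image.shape)  # torch.Size([1, 3, 512, 512])
```

Working in the compressed latent space is what makes high-resolution synthesis tractable, but every step operates on representations distilled from the training data, which is the basis of the originality argument above.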
Proponents of generative AI cite practical applications such as automating programming workflows, assisting in creative tasks, and generating ideas. However, these claims overlook the technology's real-world limitations. For example, many programmers find GitHub Copilot, a language model that helps with programming, unreliable and inaccurate, producing code with subtle issues and inefficiencies. One programmer stated that Copilot often "wastes time or flat out breaks [programmers'] code" [26]. Similarly, AI-generated art lacks originality and coherence. Forbes describes how AI art is often littered with small mistakes, especially in renderings of human fingers. Closer inspection of seemingly innocuous AI images reveals "unsettling elements" that show a lack of thought, originality, and creativity [27]. These limitations show that while generative AI may assist in brainstorming or prototyping, it can be more problematic than functional when applied to real-world tasks.
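To illustrate the kind of subtle defect programmers describe, consider this hypothetical Python suggestion. It is not taken from actual Copilot output; it simply shows how code can look plausible, pass a quick test, and still hide a state-leaking bug.

```python
def add_tag(tag: str, tags: list[str] = []) -> list[str]:
    # Subtle bug: the default list is created once at definition time and
    # reused on every call, so tags from earlier calls leak into later ones.
    tags.append(tag)
    return tags

def add_tag_fixed(tag: str, tags: list[str] | None = None) -> list[str]:
    # Fix: create a fresh list per call when none is supplied.
    tags = [] if tags is None else tags
    tags.append(tag)
    return tags

print(add_tag("a"), add_tag("b"))              # ['a', 'b'] ['a', 'b'] -- surprising
print(add_tag_fixed("a"), add_tag_fixed("b"))  # ['a'] ['b'] -- expected
```

Defects like this compile, run, and often pass shallow tests, which is precisely why unreviewed AI-generated code can waste time or quietly break a codebase.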
In addition to its impracticality in the creative space, generative AI's educational benefits are equally inflated. While generative models increase productivity for teachers by helping with grading or lesson plans, they can foster over-dependence and reduce critical thinking skills among students. In addition, the risk of generative models producing false or misleading content can promote confusion and a disconnect from educational goals. A study of Chinese university students' perceptions of AI found that over 60 percent of participants were concerned that overusing language models would hinder critical thinking and creativity and that the generated content would be inaccurate or misleading [28]. Using AI as a supplementary tool offers advantages in controlled environments, but educators must recognize the potential for overuse and its consequences for students' development.
The growth of AI-generated content poses a complex challenge for modern society. The advantages of generative AI for productivity, creativity, and education do not justify its significant risks to public trust, creative integrity, and academic rigor. The ability to mass-produce synthetic content allows malicious users to flood digital spaces with misinformation while devaluing human originality and intellectual property rights. As AI systems continue to develop, society must build safeguards and regulatory frameworks to protect against these dangers. The future of digital content depends on maintaining a balance between innovation and the preservation of authenticity.
Footnotes
1. Jiayang Wu et al. AI-Generated Content (AIGC): A Survey. Mar. 25, 2023. arXiv: 2304.06632 [cs]. URL: http://arxiv.org/abs/2304.06632 (visited on 08/25/2024).
2. Robin Rombach et al. High-Resolution Image Synthesis with Latent Diffusion Models. Apr. 13, 2022. arXiv: 2112.10752 [cs]. URL: http://arxiv.org/abs/2112.10752 (visited on 09/11/2024).
3. OpenAI et al. GPT-4 Technical Report. Mar. 4, 2024. DOI: 10.48550/arXiv.2303.08774. arXiv: 2303.08774 [cs]. URL: http://arxiv.org/abs/2303.08774 (visited on 08/14/2024).
4. Harry H. Jiang et al. "AI Art and its Impact on Artists". In: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (AIES '23). Montréal, QC, Canada: ACM, Aug. 8, 2023, pp. 363–374. ISBN: 9798400702310. DOI: 10.1145/3600211.3604681. URL: https://dl.acm.org/doi/10.1145/3600211.3604681 (visited on 08/26/2024).
5. Andy Baio. Exploring 12 Million of the 2.3 Billion Images Used to Train Stable Diffusion's Image Generator. Waxy. Aug. 30, 2022. URL: https://waxy.org/2022/08/exploring-12-million-of-the-images-used-to-train-stable-diffusions-image-generator/ (visited on 09/15/2024).
6. David De Cremer, Nicola Morini Bianzino, and Ben Falk. "How Generative AI Could Disrupt Creative Work". In: Harvard Business Review (Apr. 13, 2023). URL: http://archive.today/Nbe3u (visited on 09/15/2024).
7. Kyrie Zhixuan Zhou et al. "The teachers are confused as well": A Multiple-Stakeholder Ethics Discussion on Large Language Models in Computing Education. Jan. 22, 2024. arXiv: 2401.12453 [cs]. URL: http://arxiv.org/abs/2401.12453 (visited on 08/25/2024).
8. Shaikh Zikra Riyaz and Shaikh Suvaid Salim. "Google's Bard and Open AI's ChatGPT: Revolutionary AI Technologies and Their Impact on Education". In: IJARSCT (July 19, 2023), pp. 195–202. ISSN: 2581-9429. DOI: 10.48175/IJARSCT-12127. URL: http://ijarsct.co.in/Paper12127.pdf (visited on 09/04/2024).
9. Grant Fergusson et al. Generating Harms: Generative AI's Impact & Paths Forward. Electronic Privacy Information Center, May 2023. URL: https://epic.org/documents/generating-harms-generative-ais-impact-paths-forward/ (visited on 09/02/2024).
10. Kiran Garimella and Simon Chauchard. "How prevalent is AI misinformation? What our studies in India show so far". In: Nature 630.8015 (June 2024), pp. 32–34. DOI: 10.1038/d41586-024-01588-2. URL: https://www.nature.com/articles/d41586-024-01588-2 (visited on 09/03/2024).
11. Shayne Longpre et al. Data Authenticity, Consent, & Provenance for AI are all broken: what will it take to fix them? Aug. 30, 2024. arXiv: 2404.12691 [cs]. URL: http://arxiv.org/abs/2404.12691 (visited on 09/04/2024).
12. Coalition for Content Provenance and Authenticity. C2PA Explainer. C2PA Specifications. URL: https://c2pa.org/specifications/specifications/1.4/explainer/Explainer.html (visited on 09/15/2024).
13. Coalition for Content Provenance and Authenticity. C2PA Harms Modelling. C2PA Specifications. URL: https://c2pa.org/specifications/specifications/1.0/security/Harms_Modelling.html#_harms_misuse_and_abuse_initial_assessment (visited on 11/18/2024).
14. United States, Executive Office of the President [Joe Biden]. "Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence". In: Federal Register 88.210 (Oct. 30, 2023), pp. 75191–75226. URL: https://www.federalregister.gov/d/2023-24283 (visited on 09/14/2024).
15. Arijeta Lajka. New AI voice-cloning tools 'add fuel' to misinformation fire. AP News. Feb. 10, 2023. URL: https://apnews.com/article/technology-science-fires-artificial-intelligence-misinformation-26cabd20dcacbd68c8f38610fec39f5b (visited on 08/25/2024).
16. Funk, Shahbaz, and Vesteinsson. The Repressive Power of Artificial Intelligence. Freedom House, 2023. URL: https://freedomhouse.org/sites/default/files/2023-11/FOTN2023Final.pdf (visited on 09/24/2024).
17. Cat Zhang. The Swiftie Fight to Protect Taylor Swift From AI. The Cut. Jan. 26, 2024. URL: https://www.thecut.com/2024/01/taylor-swift-ai-deepfake-trending-social-media.html (visited on 09/24/2024).
18. Melissa Heikkilä. "This artist is dominating AI-generated art. And he's not happy about it." In: MIT Technology Review 125.6 (Sept. 16, 2022), pp. 9–10. ISSN: 2749649X. URL: https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/ (visited on 09/23/2024).
19. Nick Cave. I asked Chat GPT to write a song in the style of Nick Cave and this is what it produced. The Red Hand Files. Jan. 16, 2023. URL: https://www.theredhandfiles.com/chat-gpt-what-do-you-think/ (visited on 09/25/2024).
20. Neil Clarke. A Concerning Trend. Dec. 15, 2023. URL: https://neil-clarke.com/a-concerning-trend/ (visited on 09/15/2024).
21. Weixin Liang et al. Mapping the Increasing Use of LLMs in Scientific Papers. Apr. 1, 2024. arXiv: 2404.01268 [cs]. URL: http://arxiv.org/abs/2404.01268 (visited on 09/30/2024).
22. Tojin T. Eapen et al. "How Generative AI Can Augment Human Creativity". In: Harvard Business Review (July 1, 2023). ISSN: 0017-8012. URL: https://hbr.org/2023/07/how-generative-ai-can-augment-human-creativity (visited on 09/30/2024).
23. Kayla Yup. What AI art means for society, according to Yale experts. Yale Daily News. Jan. 23, 2023. URL: https://yaledailynews.com/blog/2023/01/23/what-ai-art-means-for-society-according-to-yale-experts/ (visited on 09/11/2024).
24. Elijah Clark. How Small Businesses And Entrepreneurs Can Benefit From AI. Forbes. Nov. 30, 2023. URL: https://www.forbes.com/sites/elijahclark/2023/11/30/how-small-businesses-and-entrepreneurs-can-benefit-from-ai/ (visited on 10/01/2024).
25. Jon Brodkin. US judge: Art created solely by artificial intelligence cannot be copyrighted. Ars Technica. Aug. 21, 2023. URL: http://archive.today/OlmZZ (visited on 10/18/2024).
26. Dylan Huang. I Reviewed 1,000s of Opinions on Github Copilot. konfig. Oct. 25, 2023. URL: https://konfigthis.com/blog/github-copilot/ (visited on 10/18/2024).
27. Dani Di Placido. The Problem With AI-Generated Art, Explained. Forbes. Dec. 30, 2023. URL: https://www.forbes.com/sites/danidiplacido/2023/12/30/ai-generated-art-was-a-mistake-and-heres-why/ (visited on 11/18/2024).
28. Cecilia Ka Yuk Chan and Wenjie Hu. Students' Voices on Generative AI: Perceptions, Benefits, and Challenges in Higher Education. 2023. DOI: 10.48550/arXiv.2305.00290. URL: https://arxiv.org/abs/2305.00290 (visited on 10/22/2024).