Artificial Intelligence (AI) seems like a concept of the future, belonging to a world where science fiction is reality and computer programs are just as smart as, if not smarter than, human beings. For most of human history, it has been just that: science fiction. While there have been advancements in the past, AI remained rather rudimentary for a long time. In the past year, however, this has changed, as several remarkable leaps forward in AI have occurred. Programs such as DALL-E 2 (an AI image generator developed by OpenAI to create art based on user prompts) and ChatGPT (an AI chatbot, also by OpenAI, that similarly generates written samples from prompts) have become incredibly popular.
According to OpenAI, DALL-E 2 “can create original, realistic images and art from a text description. It can combine concepts, attributes, and styles.” DALL-E 2’s main tasks are to expand existing images, make edits to them, create variations of them and generate completely new images, all based on text descriptions. DALL-E 2 is the successor to 2021’s DALL-E program and can generate more accurate images at four times the resolution of the original. Using these revolutionary programs, you could extend a preexisting painting or even create a new piece of art that convincingly passes for human work, all with the power of AI.
Despite their strengths and remarkable ability to generate artwork, DALL-E 2 and similar programs have also faced skepticism and criticism. Many believe these programs could damage creatives and their careers by reducing a company or individual’s need to commission art from working artists, through what possibly amounts to intellectual theft. DALL-E 2 uses a diffusion model, which is trained by breaking down existing images with noise and learning to reconstruct them, extracting statistical patterns from millions of artworks along the way. Every image it generates therefore draws on elements learned from other people’s work. Some artists are understandably wary of the program: their livelihood is being automated by software that takes ample inspiration from their art without crediting their hard work and skill. While these programs are still fairly basic, they are improving so quickly that these concerns may soon become more pressing.
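For readers curious what “breaking down and reconstructing” images means in practice, here is a deliberately simplified Python sketch of the diffusion idea. The tiny arrays, step counts and “nudge toward the average” rule are invented purely for illustration and bear no resemblance to DALL-E 2’s actual model.

```python
# A toy illustration of the diffusion idea (nothing like DALL-E 2's real code):
# training images are progressively destroyed with noise, a model learns to
# reverse that destruction, and generation starts from pure noise.
import numpy as np

rng = np.random.default_rng(0)

def add_noise(image, t, num_steps=100):
    """Blend an image toward random noise; t=0 keeps it intact, t=num_steps destroys it."""
    keep = 1.0 - t / num_steps
    return keep * image + (1.0 - keep) * rng.normal(size=image.shape)

# Stand-in "training set": four tiny 8x8 grayscale images
# (a real model sees hundreds of millions of scraped images).
dataset = [rng.random((8, 8)) for _ in range(4)]
dataset_mean = np.mean(dataset, axis=0)

# Training intuition: the model sees noisy versions paired with the originals,
# so it learns statistical patterns that let it guess the clean image back.
noisy_example = add_noise(dataset[0], t=60)

# Generation intuition: start from pure noise and repeatedly "denoise".
# Here the learned model is faked by nudging toward the training-set average,
# which is why critics argue the output is built from other people's work.
sample = rng.normal(size=(8, 8))
for t in range(100, 0, -1):
    sample = 0.95 * sample + 0.05 * dataset_mean
```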
ChatGPT is another incredibly popular program that has met with similar problems. ChatGPT generates writing based on user prompts, demonstrating seemingly endless capabilities: it can produce recipes, guitar riffs, news articles and academic essays. ChatGPT is what is known as a large language model, trained on articles, websites, books and other sources, though its knowledge is somewhat limited at the moment. Its training data only runs through 2021, so its knowledge ends with the information, historical events and scientific developments available that year. This means you may need to fact-check the text it generates if you are looking for a specific or current answer.
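To make “trained on articles, websites and books” a little more concrete, here is a deliberately oversimplified sketch of next-word prediction, the core mechanic behind large language models. The toy corpus and word-pair counting below are illustrative only and are nothing like ChatGPT’s actual scale or training process.

```python
# A toy sketch of the "large language model" idea: learn which word tends to
# follow which from training text, then generate new text one word at a time.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count word -> next-word pairs (a crude stand-in for model training).
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(prompt_word, length=8):
    """Generate text by repeatedly sampling a plausible next word."""
    words = [prompt_word]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat and the cat"
```

Because everything the toy model can say comes straight from its training text, the sketch also hints at why credit and originality are contentious: the output is always a recombination of what went in.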
While ChatGPT may be groundbreaking, it raises concerns strikingly similar to those surrounding DALL-E. By design, ChatGPT is trained on information available on the internet, so much of what the program produces is a modified version of somebody else’s writing. This has raised concerns about whether writing from the program is ready for commercial or academic use. Whether detectable or not, ChatGPT is built on the work of thousands of writers who don’t get credit. Unfortunately, this issue does not stop at the worlds of writing and art. AI has also endangered the livelihoods of musicians through a nifty audio technique called “modulation,” which reshapes the sound of a recorded voice.
A new trend on TikTok uses AI programs and modulation to generate convincingly realistic voices and entirely new music modeled on already famous artists. The most popular of these was a track created by the TikTok user Ghostwriter977 titled “Heart on my Sleeve.” The AI-generated song imitates a Drake track with a feature by The Weeknd. While some parts give it away if you have heard what AI-generated and modulated voices sound like, the majority of the track is virtually indistinguishable from the artists’ typical work. Soon after the AI-synthesized track was released, it was removed from TikTok, YouTube and Spotify amid the latest round of criticism and concern targeting AI models and programs. While some of these programs just use modulation to make your voice sound like a celebrity’s, they are improving so quickly that at times it is incredibly hard to tell the difference between the original and the fake.
According to a report by the Financial Times, Universal Music Group, the company that represents Drake and The Weeknd, sent a letter to major streaming services asking that they block AI platforms from using their services to train on copyrighted songs. While that may seem like the music industry trying to protect its exclusivity out of fear of AI, its concerns are somewhat valid. “Heart on my Sleeve” is a very high-quality song; with better writing and a few more improvements to these AI programs, the world could soon see AI-generated, platinum-worthy songs in the style of any artist.
As AI models advance, so do the criticisms levied against their ethics. The discussion must continue as these programs develop and become more powerful over the years. While I do not believe they will steal jobs from artists, writers or musicians anytime soon, the possibility is there.