The intersection of generative technology and creative expression has reached a fever pitch. As large language models (LLMs) become increasingly sophisticated, the barrier to entry for content creation—be it prose, code, or poetry—has effectively vanished. However, when we apply this technological leap to the world of songwriting, we encounter a profound philosophical and technical dilemma. The title “What Is and Should Never Be” serves as a perfect framework for analyzing the current state of AI-assisted songwriting: acknowledging the undeniable reality of what the technology currently “is” while establishing a firm boundary for what it “should never be” if we are to preserve the sanctity of human expression.

The Current State of the Tech: What AI Lyrics Are Today
In the contemporary tech landscape, AI-generated lyrics are no longer a novelty; they are a standard output of sophisticated neural networks. To understand the “what is” of this industry, we must look at the underlying architecture that allows a machine to mimic the cadence of a human heart.
LLMs and the Architecture of Songwriting
At the core of modern lyrical AI are Transformer models. Unlike early Markov chain generators that produced nonsensical strings of text, modern LLMs utilize “attention mechanisms” to understand context, metaphor, and thematic consistency. When a user prompts a tool like ChatGPT, Claude, or specialized music software like Suno or Udio to write a song, the model isn’t “thinking.” Instead, it is performing a high-level statistical prediction. It analyzes patterns in vast datasets of existing lyrics to determine which word is most likely to follow another based on the requested genre, mood, or artist style.
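The prediction process described above can be illustrated with a deliberately tiny sketch: a bigram model built from a three-line toy corpus. Real LLMs use transformer attention over billions of parameters, but the core mechanic—choose the statistically most likely next word given what came before—is the same.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the vast lyric datasets real models train on.
corpus = [
    "my heart is blue tonight",
    "my heart is heavy tonight",
    "my heart is blue again",
]

# Count which word follows which (a bigram table).
counts = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word` in the corpus."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

print(predict_next("is"))  # "blue" follows "is" twice, "heavy" only once
```

The toy model "writes" by statistical echo, not by understanding—which is precisely the point the paragraph above makes about scale models doing the same thing with far richer context.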
This technology is currently capable of producing structurally perfect songs. It can follow AABA structures, identify internal rhyme schemes, and even suggest melodic contours that fit the lyrical meter. For developers and tech innovators, this represents a triumph of Natural Language Processing (NLP).
Data Scraping and the “Lyrical Database”
The efficacy of these tools relies entirely on the data they were trained on. The current tech landscape is built upon “The Lyrical Database”—millions of lines of copyrighted text scraped from the internet. This technical reality allows AI to mimic specific “voices” with haunting accuracy. If you ask an AI for a “grunge-style” lyric, the software identifies specific linguistic markers—themes of angst, specific rhythmic pauses, and a vocabulary favored by 90s songwriters—and synthesizes a new output. This is the “is”: a highly efficient, algorithmic mirror of human history.
The Boundary of Authenticity: What AI Should Never Be
While the technology is impressive, there is a growing consensus among developers, ethicists, and creators that certain lines must be drawn. If we treat technology as a replacement for human experience rather than an extension of it, we risk a cultural version of the "dead internet" theory, in which machine-generated content endlessly recirculates among machines.
The Problem of Synthetic Emotion
Technology should never be a substitute for lived experience. This is the primary “should never be.” An AI can describe heartbreak by analyzing ten thousand songs about breakups, but it does not “know” the weight of loss. When we rely on algorithms to generate the core emotional message of a piece of art, we are essentially distributing “synthetic emotion.”
From a technical standpoint, this results in a “regression to the mean.” Because AI predicts the most likely next word, it inherently moves toward the average. It avoids the radical, the experimental, and the truly avant-garde unless specifically prompted to be “weird.” If the industry moves toward total automation, we risk a feedback loop where AI learns from AI, leading to a flattening of creative output that lacks the “glitch” or “error” that often defines human genius.
Cultural Appropriation and Algorithmic Bias
Another critical “should never be” involves the preservation of cultural nuances. Algorithms are mirrors of their training data, and that data is often biased. If a lyrical AI is trained predominantly on Western pop, its attempts to generate lyrics for genres rooted in specific marginalized experiences—such as Delta Blues or Reggae—can result in digital caricatures. Tech leaders have a responsibility to ensure that generative models do not become tools for algorithmic cultural appropriation, where the machine strips the historical weight from a genre to create a sanitized, “market-ready” version of its lyrics.
Intellectual Property and the Legal Frontier

The rapid advancement of lyrical AI has outpaced the legal frameworks designed to protect creators. This segment of the tech world is currently a "Wild West," where the definition of "originality" is being rewritten in real time.
Copyright in the Age of Generative AI
The technical process of “training” an AI involves making copies of protected works. This has led to massive legal battles regarding fair use. The central question for the tech industry is: Is an AI-generated lyric a “derivative work” or a “transformative work”?
Current US Copyright Office rulings suggest that work produced solely by an AI cannot be copyrighted. However, the tech is shifting toward “hybridity.” If a human uses an AI to generate a rhyming couplet but writes the rest of the stanza, who owns the IP? This ambiguity creates a risk for tech companies and platforms. We are seeing the emergence of “Content ID” style tools for lyrics—software designed to scan AI output to ensure it hasn’t accidentally plagiarized a human songwriter’s existing work word-for-word.
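A minimal sketch of what such a lyric-scanning tool might do is shown below: score a candidate lyric by what fraction of its word n-grams appear verbatim in a reference text. Production systems would use fuzzy matching, fingerprinting, and large indexed catalogs; this function and its threshold are illustrative assumptions, not a real product's API.

```python
def ngrams(text, n=5):
    """Set of all n-word sequences in the text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(candidate, reference, n=5):
    """Fraction of the candidate's n-grams found verbatim in the reference.

    1.0 means every 5-word run also appears in the reference; 0.0 means none do.
    """
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    return len(cand & ngrams(reference, n)) / len(cand)
```

A platform might flag any output whose score against a cataloged song exceeds some threshold for human review—the automated equivalent of checking that a couplet was not lifted word-for-word.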
The Fight for Attribution and Human Rights
The tech community is currently debating the implementation of “opt-out” protocols for artists. The goal is to create a digital ecosystem where a songwriter can flag their lyrics as “do not train.” This involves complex metadata tagging and blockchain-based verification. Ensuring that technology respects human consent is a fundamental “should” that the industry must address to maintain public trust.
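In its simplest form, honoring an opt-out protocol is a filter applied before any text reaches a training pipeline. The sketch below assumes a per-song metadata flag; the field names are hypothetical, since no industry-standard schema for "do not train" tagging exists yet.

```python
# Hypothetical catalog entries with a "do_not_train" opt-out flag.
catalog = [
    {"title": "Song A", "lyrics": "...", "do_not_train": True},
    {"title": "Song B", "lyrics": "...", "do_not_train": False},
]

def training_corpus(songs):
    """Exclude any song whose writer has opted out of model training."""
    return [s for s in songs if not s.get("do_not_train", False)]

for song in training_corpus(catalog):
    print(song["title"])  # only Song B survives the filter
```

The hard part, of course, is not the filter but the ecosystem around it: verifiable tagging, propagation across scraped copies, and auditing—which is where the metadata and blockchain-verification work mentioned above comes in.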
The Future Collaborative Model: Tech as a Tool, Not a Replacement
The most optimistic view of this technology sees AI not as a “songwriter,” but as a “co-writer.” The future of music tech lies in the development of tools that enhance human agency rather than erasing it.
AI-Assisted Ideation vs. Full Automation
The next generation of Digital Audio Workstations (DAWs) will likely feature integrated AI lyric assistants. However, the focus is shifting toward “ideation.” Instead of clicking a button to “Generate Song,” a writer might use a tech tool to “Suggest five synonyms for ‘blue’ that fit a trochaic meter.”
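An ideation assistant of this kind might be sketched as a filter over candidate words constrained by the lyric's meter. The snippet below is a hypothetical illustration: a real tool would draw on a pronunciation dictionary and stress patterns rather than the hardcoded syllable counts used here.

```python
# Illustrative syllable counts; a real assistant would use a pronunciation
# dictionary (e.g. CMUdict) and check stress, not just syllable totals.
SYLLABLES = {
    "blue": 1, "sad": 1, "azure": 2, "cobalt": 2,
    "sorrowful": 3, "melancholy": 4,
}

def suggest(candidates, syllable_count, limit=5):
    """Return up to `limit` candidate words that fit the required syllable slot."""
    return [w for w in candidates if SYLLABLES.get(w) == syllable_count][:limit]

# The writer, not the machine, picked the candidates and the meter;
# the tool only narrows the list to what fits the slot.
print(suggest(["azure", "cobalt", "sad", "melancholy"], syllable_count=2))
```

The division of labor matters: the human supplies intent and meaning, the software does the mechanical lookup—assistance, not automation.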
This level of tech serves as a digital "Oblique Strategies" deck—a way to break writer's block. By focusing on assistive technology, developers can provide creators with a "thesaurus on steroids" that respects the human at the center of the process. This is the balance we should strive for: a symbiotic relationship where the machine handles the labor of pattern recognition while the human handles the labor of meaning-making.
Building “Guardrail” Software for Creative Integrity
To prevent the “should never be” scenarios, the tech industry is developing “Creative Guardrails.” These are software layers that prevent AI from outputting lyrics that are too close to existing copyrighted material or that violate ethical standards (such as hate speech or deep-fake vocal/lyrical mimicry).
Innovative startups are now working on “Attribution Tech”—watermarking AI-generated text so that listeners and platforms know exactly what percentage of a song was assisted by an algorithm. This transparency is crucial for the digital economy. If we know what is machine-made, we can place a higher value on what is purely human-made.
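At its simplest, the attribution idea amounts to recording, per line, whether an algorithm assisted, and surfacing the percentage. The record schema below is purely illustrative—no standard for lyric attribution metadata exists yet.

```python
# Hypothetical per-line attribution records for a four-line verse.
lines = [
    {"text": "I walked the wire alone", "ai_assisted": False},
    {"text": "under a borrowed sky", "ai_assisted": True},
    {"text": "counting the miles to home", "ai_assisted": False},
    {"text": "while the signal runs dry", "ai_assisted": True},
]

def attribution_summary(song_lines):
    """Summarize how much of the lyric was AI-assisted, for disclosure."""
    assisted = sum(1 for line in song_lines if line["ai_assisted"])
    return {
        "total_lines": len(song_lines),
        "ai_assisted_lines": assisted,
        "ai_assisted_pct": round(100 * assisted / len(song_lines), 1),
    }

print(attribution_summary(lines))  # 2 of 4 lines assisted -> 50.0%
```

A platform could embed such a summary in a track's metadata, giving listeners the transparency the paragraph above argues for.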

Conclusion: The Ethics of Digital Resonance
The title “What Is and Should Never Be” is a reminder that in tech, “can” does not always mean “should.” We currently have the technology to fill the world with an infinite stream of perfectly rhymed, emotionally resonant, and completely hollow lyrics. This “is” our reality.
However, we must be vigilant about what this technology “should never be.” It should never be a replacement for the vulnerability of a human being trying to explain their world. It should never be a tool for the theft of intellectual labor. And it should never be a black box that obscures its own origins.
As we continue to develop AI tools for the creative arts, our focus must remain on the “Augmentation” of the human spirit. By building ethical frameworks, transparent attribution models, and assistive (rather than replacive) tools, the tech industry can ensure that the future of lyrics remains as soulful and surprising as its past. The machine can provide the rhyme, but only the human can provide the reason.