Weekly Download #3
An AI band took the internet by storm – until fans found out it didn’t exist. While machines play at being human, Parliament rewrites the rules of the digital economy, and the SNP cuts WhatsApp from official phones in a move to lock down sensitive chats. This week’s headlines ask: when tech shapes everything, who’s really in control?
In this edition:
🎶 Fake It Till You Stream It: The Velvet Sundown
📜 Data (Use and Access) Act Receives Royal Assent: UK Signals New Data Era
🔒 Message Received: SNP Puts WhatsApp on Mute
Fake It Till You Stream It: The Velvet Sundown
A retro-rock band called The Velvet Sundown has drawn widespread attention on Spotify, reaching close to a million monthly listeners. However, the band’s origins are not traditional; The Velvet Sundown was created entirely using artificial intelligence.
The music, characterised by 1970s-style rock and moody visuals, raised suspicions after social media users noted inconsistencies in promotional images, including AI-generated artefacts. Eventually, a man using the name Andrew Frelon identified himself as the project’s spokesperson, describing it as an AI-generated art experiment. Speaking to CBC News, he said the music and visuals were created using tools like Suno, with the aim of examining how convincingly an artificial act could enter the mainstream.
Frelon referred to the project as a “media hoax” and a statement on modern music consumption. Spotify has since updated the band’s profile, which now frames the Frelon persona as part of a “fictional narrative” and describes the band as an “artistic provocation”. The case has prompted renewed debate within the music industry: artists, labels, and advocacy groups have called for greater transparency around AI-generated content, including clearer labelling on streaming platforms and stronger protections for human creators.
The Velvet Sundown case is now central to ongoing discussions about AI’s role in the future of music.
🔴 Possible Breach of Misleading Advertising Laws
The creators of The Velvet Sundown didn’t tell listeners the music was made with AI until after people noticed odd details. In the US and Canada, laws like the FTC Act and Competition Act require businesses to be honest with the public. If people were misled into streaming the music, buying merch, or promoting the band thinking it was human-made, this could count as false advertising. Regulators are starting to pay attention to AI “hoaxes” like this – especially when they involve money or public trust.
🟡 AI Training On Old Rock Songs Could Lead to Copyright Trouble
If the AI tools used to create the music were trained on real 1970s rock songs without permission, that could break copyright laws in both the US and Canada. Even if the final songs aren’t direct copies, they might still be considered too close to the original music. Rightsholders could sue, arguing that the AI-created songs are derivative works. These types of cases are already being argued in court and could shape how AI music is made in the future.
🟢 Spotify’s Labelling Suggests New Rules Are Coming for AI Music
After the truth came out, Spotify updated the band’s page to say it was an “artistic provocation” and part of a fictional project. That move lines up with growing pressure on platforms to label AI content more clearly. New laws like the EU’s Digital Services Act and proposals in the US could soon require labels on AI-generated content so listeners know what they’re hearing. Even if it’s not a legal rule yet, platforms are starting to act now to avoid backlash and future lawsuits.
Data (Use and Access) Act Receives Royal Assent: UK Signals New Data Era
The UK has taken a decisive step into its post-GDPR future. Just over a week ago, the Data (Use and Access) Act officially became law, marking the biggest shake-up to the country’s privacy framework since Brexit. Designed to streamline compliance while still upholding fundamental rights, the legislation reworks key parts of the UK GDPR and the Data Protection Act 2018.
Among the most notable shifts: companies can now rely on a predefined list of “recognised legitimate interests”, such as fraud prevention or safeguarding vulnerable people, without conducting the usual balancing test. Automated decision-making, once heavily restricted, is now permitted with meaningful human oversight, opening the door to broader AI integration so long as sensitive data isn’t misused.
Research rules are also being modernised. Commercial projects can now fall under the umbrella of “scientific or statistical research,” unlocking new potential for data-driven innovation. Even cookie banners may start to fade, as low-risk analytics tracking will no longer always require consent.
To enforce these changes, the UK’s data watchdog is being rebranded as the Information Commission, with sharper powers and a broader scope. The government says this overhaul will make UK data law “fit for the future.” Businesses should start preparing – ready or not, the rules are changing.
🔴 New UK Rules May Put EU Data Transfers at Risk
The UK’s new data law makes it easier to use personal data without asking for consent, such as for fraud detection or basic analytics. These changes move the UK further away from the EU’s stricter GDPR standards. If the EU decides the UK no longer provides enough protection, the current data adequacy agreement could be revoked. That would create serious legal and operational issues for UK companies that handle EU customer data.
🟡 More Use of AI in Decisions Could Lead to Fairness Complaints
The law now allows more automated decision-making, as long as there is human oversight. People affected by these decisions – such as in hiring, credit scoring, or healthcare – may still challenge them if they believe the outcome was unfair or biased. Companies using AI will need to ensure their systems are explainable, non-discriminatory, and clearly supervised to reduce legal risks.
🟢 Looser Research Rules Can Help Innovation
Businesses now have more freedom to use personal data for research, including in commercial projects. This change supports innovation in fields like health tech, AI, and financial services. To avoid legal issues, companies must be transparent about how data is used, limit it to the original purpose, and apply strong safeguards like anonymisation and data minimisation.
Message Received: SNP Puts WhatsApp on Mute
The Scottish Government has announced a sweeping ban on WhatsApp and similar messaging apps from all official mobile devices, aiming to rebuild public trust after a damaging scandal involving deleted pandemic-era communications.
The new policy, set to take effect by spring 2025, follows heavy criticism over the erasure of key WhatsApp messages by senior SNP figures, including Nicola Sturgeon and John Swinney. The UK Covid Inquiry previously condemned the widespread deletion of correspondence as obstructive, calling it “industrial-scale” and detrimental to public accountability.
Deputy First Minister Kate Forbes said the decision follows an independent review and reflects a broader push for transparency. Government staff will be required to conduct official business through secure, auditable platforms such as Microsoft Teams and email. Guidance and training will accompany the rollout to ensure compliance.
However, critics remain wary, warning that banning apps from work phones might merely shift sensitive discussions to personal devices – still outside the reach of Freedom of Information laws and official scrutiny.
The move follows renewed scrutiny of ministers, after previously undisclosed messages were released under mounting public and legal pressure. It marks a significant step in the SNP’s effort to restore confidence after months of controversy.
🔴 Deleted Messages May Breach Public Records and Inquiry Obligations
The erasure of WhatsApp messages by Scottish Government officials during the pandemic may violate the Freedom of Information (Scotland) Act 2002 and Public Records (Scotland) Act 2011, which require retention of records related to official duties. Deleting communications relevant to public health decisions, especially during a national crisis, could be viewed as obstruction of a public inquiry or non-compliance with statutory record-keeping duties. This creates exposure to legal scrutiny, reputational harm, and potential misconduct allegations.
🟡 Work Phone Restrictions Leave Gaps Around Personal Device Use
The new ban covers only official mobile devices. If ministers or staff move sensitive discussions to personal phones or private apps, those communications may still fall outside the scope of FOI laws and formal audits. This presents a legal inconsistency, weakening transparency and exposing the government to future challenges over incomplete disclosures, especially in litigation or public inquiries.
🟢 Secure Communication Platforms Improve Legal Compliance and Auditability
Mandating the use of tools like Microsoft Teams and official email strengthens compliance with information governance and public accountability standards. These platforms offer audit trails, retention policies, and access controls, which support legal duties around transparency and help preserve records for investigations or legal review. To be effective, this shift will require consistent policy enforcement, staff training, and oversight to ensure records are properly captured across all channels.