Weekly Download #5

As the battle for your browser heats up, Perplexity’s new AI-powered challenger promises to outsmart Google Chrome at its own game. In the UK, a relentless cyber attack cripples legal aid services, threatening access to justice for the most vulnerable. Meanwhile, Denmark draws a bold line with new laws banning deepfake doppelgängers, setting a precedent for digital identity rights. This week, the stories remind us: in a world remade by code, where do we draw the line between innovation and harm?

 

In this edition:

☄️ Comet Blazes a Trail: Perplexity’s AI Browser Takes on Google Chrome

🛡️ Ongoing Cyber Attack Leaves Legal Aid in Crisis

🧑‍⚖️ Licence to Ill-ustrate? Denmark’s Bold Move to Outlaw Deepfake Doppelgängers

 

Comet Blazes a Trail: Perplexity’s AI Browser Takes on Google Chrome

 

Perplexity AI, a start-up backed by tech heavyweights including Nvidia, Jeff Bezos and SoftBank, has launched an ambitious new AI-powered web browser called Comet, taking direct aim at Google Chrome’s dominance.

Built on Chromium, Comet transforms how people navigate the web. Instead of jumping between countless tabs and links, users can interact through natural language: asking questions, comparing products, automating tasks or summarising articles with help from the built-in Comet Assistant. Everyday chores like drafting emails, booking meetings or planning trips can all be handled through conversational prompts.

For now, the browser is exclusive to Perplexity Max subscribers, who pay around £160 a month. A broader roll-out is planned via invite this summer. In a move likely to reassure privacy-conscious users, Perplexity says Comet stores data locally and does not feed personal browsing histories into its AI training systems.

The company has not escaped criticism, however. Major publishers including News Corp and Dow Jones have raised concerns that Perplexity’s AI uses their content without permission. In response, the start-up has launched a new publisher partnership scheme in hopes of easing tensions.

As rivals like OpenAI and Microsoft race to release their own AI browsers, Comet signals the start of an intense battle for the future of web search and browsing.

 

🔴 Copyright Risk Due to Unauthorised Use of Publisher Content

Perplexity’s Comet browser reportedly uses content from major news publishers such as News Corp and Dow Jones without obtaining proper licences. This practice may infringe copyright laws in the UK, the EU, and the US, which protect the exclusive rights of content creators and distributors. If challenged, Perplexity could face legal action, including financial penalties and injunctions to cease use of the material. This issue reflects broader regulatory concerns over the use of copyrighted material in training AI systems, which is currently under increased scrutiny globally. Failure to address this risk could damage Comet’s business and reputation.

 

🟡 Privacy Issues Relating to Data Handling and Transparency

Although Comet states that user data is stored locally and personal browsing histories are not used for AI training, the processing of user information still falls within the scope of data protection laws such as the UK GDPR and the EU GDPR. Lack of clear communication regarding data use or insufficient safeguards could lead to enforcement actions by regulators like the Information Commissioner’s Office (ICO). Additionally, concerns around algorithmic transparency and fairness are gaining regulatory attention, meaning that Comet’s AI systems may require careful governance to ensure compliance.

 

🟢 Positive Innovation with Potential to Set Ethical Standards

Comet’s use of conversational AI to enhance browsing and automate routine tasks represents a notable innovation aligned with current technological trends. If Perplexity adopts transparent practices and provides users with control over their data and AI interactions, it may establish itself as a leader in responsible AI deployment. Incorporating features that promote explainability and user feedback could further strengthen its position in a market increasingly focused on ethical technology.

 

Ongoing Cyber Attack Leaves Legal Aid in Crisis

The Legal Aid Agency (LAA) remains severely disrupted months after suffering a significant cyber attack that compromised sensitive data stretching back to 2010. Despite emergency measures, key digital services, including online application and invoicing systems, are still offline, leaving barristers, solicitors and vulnerable clients facing delays and uncertainty.

The breach, which emerged in April, saw hackers steal the personal details of hundreds of thousands of people who have applied for legal aid over the last 15 years. The Ministry of Justice confirmed the data theft in May and has since secured legal injunctions to prevent the release of stolen information. However, there is still no clear timeline for full system restoration, prompting concern within the legal sector that the situation may drag on until at least the end of the year.

In the meantime, the LAA has introduced contingency arrangements to keep payments flowing to legal aid providers. Many now rely on average-based payments and manual processes, placing extra strain on already overstretched lawyers and courts.

While the government insists it is doing everything possible to resolve the crisis, critics warn that prolonged disruption risks undermining access to justice for those who need it most. The investigation into the attack remains ongoing.

 

🔴 Severe Data Breach and Access to Justice Disruption

The Legal Aid Agency suffered a major cyber attack that exposed sensitive personal data of hundreds of thousands of legal aid applicants, some of it dating back more than 15 years. The exposure engages key UK laws such as the UK GDPR and the Data Protection Act 2018, which require strong protection of personal information. The Information Commissioner’s Office (ICO) and the National Cyber Security Centre (NCSC) are closely monitoring the situation. Beyond data privacy, the attack has paralysed essential online systems used by solicitors and barristers, delaying payments and applications. This disrupts access to justice, a right protected under the Human Rights Act 1998, and risks legal challenges against the government. Ongoing media and political scrutiny adds further reputational pressure on the Ministry of Justice to act swiftly and transparently.

 

🟡 Potential Legal Claims and Regulatory Fallout

There is a medium risk of legal claims from solicitors, barristers, or claimants affected by delays or data exposure, especially if investigations find inadequate security measures. Civil claims for negligence or breach of contract could arise in UK courts. Meanwhile, government and public bodies face mounting pressure to upgrade cyber defences and improve crisis responses. The incident serves as a warning to all organisations handling sensitive data about the growing threat of cyber attacks and the importance of robust protection.

 

🟢 Positive Emergency Response and Good Practices

In response, the Legal Aid Agency quickly introduced emergency payment schemes and contingency plans to keep the legal aid system functioning. It has notified affected individuals and regulators, complying with its legal duties under UK data protection laws. Many legal professionals have adapted to these temporary arrangements to support clients despite the disruption. While these steps demonstrate some good practice, a full system review and upgrade will be necessary to prevent future incidents.

 

Licence to Ill-ustrate? Denmark’s Bold Move to Outlaw Deepfake Doppelgängers

Denmark is poised to become the first country in Europe to grant its citizens automatic copyright over their own face, voice and likeness – a bold move aimed squarely at the deepfake threat. Culture Minister Jakob Engel‑Schmidt unveiled plans last week to amend copyright law, giving individuals legal control over digital imitations created without consent, including realistic copies of artists’ performances.

The proposed law, backed by a broad parliamentary majority, would make it illegal to share AI‑generated images, videos or audio depicting a person without their express permission. It also mandates that platforms remove unauthorised deepfakes swiftly, with “severe fines” under the EU Digital Services Act for non‑compliance.

Yet, the reform includes carefully drawn exemptions for satire and parody, acknowledging their role in democratic discourse. According to Engel‑Schmidt, the legislation sends a powerful message: “Everybody has the right to their own body, their own voice and their own facial features.”

The bill is due for consultation over the summer and could be formally introduced in autumn, with a view to becoming law by late 2025 or early 2026. Denmark intends to push the proposal across Europe during its upcoming EU presidency: a clear signal that identity rights in the age of generative AI are becoming front‑and‑centre on the regulatory agenda.

 

🔴 High Risk of Infringement for Unauthorised Deepfake Use

Denmark’s new proposal would make it illegal to share AI-generated images, videos or audio that imitate someone’s face or voice without their clear permission. It would, in effect, give people automatic copyright-style protection over their own likeness under Danish law, with platform takedown obligations enforced through the EU Digital Services Act. Businesses, content creators and online platforms that host or share deepfakes could face fines or takedown demands if they fail to act. Regulators across Europe are watching the deepfake trend closely, so ignoring this could lead to severe legal and reputational damage.

 

🟡 Uncertain Boundaries for Satire and Parody

The law aims to protect free speech by allowing satire and parody, but what counts as ‘parody’ is open to debate. Journalists, comedians and artists may risk disputes if their work is challenged as an unlawful imitation rather than legitimate satire. This sits alongside wider EU copyright principles and the freedom of expression protected under the European Convention on Human Rights. Left unclarified, this grey area could chill creative work or spark costly legal battles.

 

🟢 Emerging Trend Towards Stronger Identity Protection

Denmark’s plan signals a growing European trend towards giving people more control over how their image, voice and data are used – especially as generative AI becomes more advanced. This aligns with wider EU goals under the Digital Services Act and the AI Act, which demand greater transparency and accountability from tech firms. Companies that respect these new rights, seek consent up front, and remove unlawful content quickly could build trust and show leadership in responsible AI use.
