Weekly Download #7

From online safety to oversharing, this week’s headlines reveal the fine print behind tech’s boldest promises. The UK’s Online Safety Act is now in force, triggering fierce debate over free speech, state surveillance, and a sudden boom in VPN interest. Meanwhile, OpenAI CEO Sam Altman warns users that ChatGPT isn’t your therapist – prompting urgent calls for legal reform as AI blurs into emotional support. And in the dating world, the Tea’s been spilled, but not the kind you think: a major breach exposes more than 72,000 images and over a million private messages, once again proving that where there’s data, there’s danger.

 

In this edition:

🇬🇧 Click, Sign, Encrypt: Online Safety Act Sparks Backlash and VPN Boom

🔓 No Privilege, No Privacy: The Hidden Risk of Using ChatGPT for Therapy

☕️ Spilled Tea: Dating App Breach Exposes 72,000 Images & Private Messages

 

Click, Sign, Encrypt: Online Safety Act Sparks Backlash and VPN Boom

The UK’s Online Safety Act has now come into force, introducing new legal obligations for tech companies to monitor and remove illegal and harmful online content. The legislation aims to protect users – especially children – from material such as child sexual abuse content, terrorism-related material, content promoting self-harm, and online harassment.

Under the new rules, major platforms must carry out regular risk assessments and put in place measures to prevent users from encountering harmful material. Ofcom, the UK’s communications regulator, has been given enforcement powers, including the ability to issue fines of up to £18 million or 10% of global annual turnover, whichever is greater.

However, the law has prompted concerns among digital rights groups and members of the public. Critics argue that it may lead to excessive censorship, surveillance, and curbs on free expression. A petition opposing the Act, launched on Parliament’s website, has gathered over 300,000 signatures, calling on the government to amend or repeal the legislation on the grounds that it could give authorities too much control over lawful online content.

Following the Act’s implementation, there has also been a notable increase in online searches for virtual private networks (VPNs), suggesting users may be seeking ways to maintain online privacy or access restricted content. VPNs allow individuals to mask their IP addresses and encrypt their internet traffic, which may allow them to circumvent some of the law’s intended controls.

Ofcom’s first enforcement actions are expected in the coming months, as it begins reviewing platform compliance and consulting on codes of practice.

 

🔴 Freedom of Expression & Overblocking Risk under UK Human Rights Law

The Online Safety Act imposes broad duties on platforms to remove both illegal and “legal but harmful” content, particularly for children. This creates a significant risk of over-removal, where platforms err on the side of caution and take down lawful content. Such actions may infringe Article 10 of the European Convention on Human Rights, which guarantees freedom of expression. If challenged, the Act’s scope could be tested in UK courts or the European Court of Human Rights. There is also a risk of litigation if users or publishers allege unjustified takedowns or content suppression.

 

🟡 Privacy & Encryption Concerns under UK GDPR/Investigatory Powers Act

The increase in VPN use suggests that users may be trying to circumvent content monitoring, raising questions about state surveillance and platform data processing. If the Act’s enforcement mechanisms require excessive data retention or monitoring, they may conflict with the UK GDPR and the Investigatory Powers Act 2016. The lack of clarity around how encrypted platforms (e.g. WhatsApp) are expected to comply could lead to privacy challenges. Ofcom’s future codes of practice will be critical in determining compliance obligations and may face legal scrutiny.

 

🟢 Strengthened Regulatory Framework with Global Alignment Potential

The Act gives Ofcom clear statutory powers, aligning the UK with international efforts to regulate Big Tech, such as the EU’s Digital Services Act. By mandating risk assessments, transparency reporting, and age verification, the law enhances platform accountability. If implemented proportionately, it could serve as a model for online harms regulation in other jurisdictions, particularly on child protection and content moderation.

 

No Privilege, No Privacy: The Hidden Risk of Using ChatGPT for Therapy

OpenAI CEO Sam Altman has warned users against relying on ChatGPT for therapeutic conversations, highlighting the absence of legal confidentiality protections. The caution comes amid growing concern over the platform’s use as an informal mental health tool, particularly among younger users who increasingly turn to AI for emotional support.

Unlike conversations with qualified therapists, doctors, or lawyers – where professional privilege applies – interactions with ChatGPT are not protected by confidentiality laws. As a result, personal disclosures made to the chatbot could be accessed or disclosed in legal proceedings.

This issue is gaining prominence as OpenAI faces mounting legal scrutiny, including the copyright lawsuit filed by The New York Times, in which the company has been ordered to preserve user data, including chat histories, beyond its standard retention policy. Currently, OpenAI deletes most user conversations within 30 days unless flagged for safety or legal compliance.

The lack of regulatory frameworks governing AI-user interactions leaves a grey area where sensitive information may not be adequately safeguarded. Altman has called for the creation of new legal standards to reflect the evolving role of AI in personal and emotional contexts, noting that existing laws do not account for the rise of generative AI as a de facto support tool.

Until such frameworks are in place, users are advised to exercise caution when using ChatGPT or similar platforms for mental health or emotionally sensitive matters, as those interactions may not enjoy the same legal protections as conversations in traditional therapeutic settings.

🔴 Lack of Legal Confidentiality and Human Rights Implications

Users may disclose highly sensitive personal information to ChatGPT under the false impression of privacy. However, these interactions are not protected by legal privilege (e.g. doctor–patient or lawyer–client confidentiality). This poses a significant legal risk under Article 8 of the European Convention on Human Rights (right to respect for private life). If user data is accessed, retained, or disclosed during litigation, OpenAI or similar providers could face legal challenges over disproportionate interference with users’ privacy. There is also risk of reputational and regulatory fallout if platforms are perceived as misleading users.

 

🟡 Data Protection and UK GDPR Compliance Concerns

Processing and storing sensitive personal data – especially health-related or emotional disclosures – triggers heightened obligations under the UK GDPR, including transparency, purpose limitation, and data minimisation. If AI platforms retain this data without clear justification or informed consent, they could be in breach of the law. The situation is compounded by OpenAI’s litigation with the New York Times, which may require preserving user conversations beyond standard retention periods. Questions may also arise about the lawful basis for processing such data under Articles 6 and 9 UK GDPR.

 

🟢 Potential for Future Legal Frameworks and Regulatory Innovation

The case highlights the need for tailored legal frameworks recognising AI as a unique mode of communication, possibly warranting its own confidentiality protections. While current law is ill-equipped, this regulatory gap presents an opportunity for UK lawmakers to innovate, potentially setting international standards on AI–user privacy. OpenAI’s public acknowledgment of these issues may drive constructive legal reform in the sector.

 

Spilled Tea: Dating App Breach Exposes 72,000 Images & Private Messages

Tea, a women‑focused dating safety app that allows users to anonymously review men, has confirmed that roughly 72,000 images were stolen in a cyber‑attack on 25–26 July 2025. The compromised material includes approximately 13,000 selfie and photo ID images submitted during account verification, as well as around 59,000 images from user posts, comments and direct messages.

The company said only users who joined before February 2024 were affected and that no email addresses or phone numbers were compromised. Tea has engaged independent cybersecurity firms to investigate the breach, which was first uncovered by 404 Media. The leaked images reportedly appeared on the online forum 4chan before being removed.

Security researchers subsequently revealed that the breach extended beyond images: access was confirmed to more than 1.1 million direct messages covering intimate topics such as abortion, cheating, divorce and rape, as well as phone numbers and meeting locations. The messages date from February 2023 to July 2025. Tea has since taken its direct messaging system offline and notified affected users, and is working with law enforcement, including the FBI, while offering identity protection services.

Having recently surged to the top of the US Apple App Store with more than 4.6 million users, Tea has come under intense scrutiny over data privacy and storage practices. Critics have questioned why verification photos remained stored despite the company’s policy of deleting them post‑verification. Tea’s founder has expressed regret over the breach and reaffirmed the company’s commitment to enhancing security and preventing further exposure.

🔴 Severe Data Protection Risk under UK GDPR & Potential Class Action

Tea’s retention and subsequent leak of over 72,000 user images and 1.1 million private messages raise serious compliance issues under the UK GDPR, particularly around data minimisation (Article 5(1)(c)), security (Article 32), and lawful basis for retention (Article 6). The failure to delete verification images, despite claiming to do so, could constitute a misrepresentation to users. Given the sensitive nature of the data – including discussions of abortion, rape, and personal identifiers – there is a high risk of regulatory enforcement by the ICO and potential group litigation by affected users, particularly as damages for distress without pecuniary loss are recoverable following Vidal-Hall v Google. Cross-border impact may also trigger cooperation under the UK and EU GDPR frameworks.

 

🟡 Reputational and Consumer Protection Liability under UK Law

The breach could trigger scrutiny under the Consumer Protection from Unfair Trading Regulations 2008, particularly if users were misled about data handling and security. If the app promised anonymity and automatic deletion of sensitive material, regulators could investigate misleading omissions or practices. Even where no offence is made out, the reputational fallout could lead to civil claims, user attrition, and investor withdrawal. Tea’s status as a platform aimed at vulnerable users (e.g. women disclosing abuse) heightens the legal and ethical expectations around user safety and data integrity.

 

🟢 Opportunity to Rebuild with Stronger Security and Compliance Standards

While damaging, the breach presents an opportunity for Tea to adopt robust cyber resilience measures, pursue external audits, and implement Privacy by Design under Article 25 UK GDPR. If the platform swiftly cooperates with authorities, notifies affected users, and enhances transparency, it could rebuild trust and serve as a case study for best practice in post-breach recovery. Voluntary ICO engagement and alignment with standards like ISO/IEC 27001 could also mitigate regulatory risk and restore user confidence.

Categories: Weekly Download