We read daily of cyberattacks and ransomware. Is this eroding trust in online services? How is it affecting individuals’ privacy? Can data really be protected online?
These are questions society as a whole should answer in the age of mass social media and the rapid uptake of artificial intelligence.
Efforts to regulate the use of private information are not new. There are reams of legal texts on data governance and the right to privacy. The Privacy Act of 1974, the Health Insurance Portability and Accountability Act and the Children’s Online Privacy Protection Act are the main federal laws, and 20 states currently have some form of state-level data or privacy protection law.
Online service providers in the U.S. are generally protected from intermediary liability under Section 230 of the Communications Act of 1934, as amended, as long as they take “reasonable steps” to delete or prevent access to illegal or unauthorized content. No such obligations are imposed on non-U.S. companies.
Such laws matter, but it is crucial to consider what they mean in practical terms now that over half the world’s Internet users have chosen to discard most of their remaining privacy online by embracing social media.
Most online services today ask ad nauseam for consent. But in a time-stressed society, we want immediate access to the information we are seeking, so we click “yes” or “agree” without reading reams of legal notices. Recently these notices have come to include consent for all submitted data to potentially be used for AI training by large language models. Most data privacy laws lean heavily on consent; the problem, however, is that there is very little recourse for withdrawing consent and ensuring that all personal data has been erased. In the case of data scraped for training large language models, erasure is probably technically impossible.
Basically, this means that if you interact online, your privacy has been outsourced to corporations. But is it secure?
For example: In September 2024, Facebook’s parent, Meta, was fined $101 million by the Irish privacy regulator for storing user passwords as unencrypted plain text.
Fines may be unpleasant, but do they always carry reputational consequences? In this case, there has been no reported mass cancellation of user accounts.
Over a number of years, leaked data sets have included email addresses, names, phone numbers, credit card and bank information, medical records and additional personal information. Clearly, fines and threats alone are not preventing similar incidents of personal data exposure.
Yet technical solutions to mitigate the damage from such incidents, such as encryption, do exist and are widely available to social media operators and other firms that collect personal data online.
So if existing regulation doesn’t ensure data safety, what can be done? This is a precarious situation in the age of AI, in which disinformation becomes indistinguishable from genuine information.
As AI permeates online services, misuse will further erode trust in the integrity of data protection. There are already cases of AI cloning entertainers’ voices so convincingly that the layperson can hardly distinguish them from the real person, as well as cases of AI-enabled bank fraud.
Perhaps we can learn from other regulated sectors.
From this perspective, the financial sector may provide some insight. Banks are tightly supervised through continuous oversight and regular audits, built on years of experience learning from past events. Regulators can suspend a company, ban executives from holding senior positions and have the ultimate “nuclear” option: shutting down repeat transgressors by withdrawing their licenses to operate.
The same concept of “whole of society” protection can logically be applied to online services. This means mandating that service suppliers comply with robust standards and continual monitoring, rather than simply fining them when things go wrong and accepting apologies.
This may be achieved by putting in place a set of standards and regulations to ensure that data is protected using the most appropriate security options available. These could be based on the U.S. NIST technical standards, which also address risk in the use of AI. More importantly, for suppliers of online services that are publicly listed, compliance could be made part of the audit requirements for stock market oversight.
Recent moves by the New York Stock Exchange and the U.S. SEC have drawn attention to cyber risks and the duty of board members of listed companies to take action. Penalties for non-compliance could include fines, the barring of directors, temporary suspension of trading or, ultimately, delisting. For companies that are not listed, a similar licensing regime could be implemented, with enforcement through license suspension or revocation.
There will of course be pushback; no one welcomes additional regulatory oversight. In my experience as regional director of public policy in the Asia-Pacific region for a U.S. information and communication technology business association, I advocated an overall light regulatory touch. But the advent of AI and its ability to erode online trust means that a light touch is no longer appropriate when it comes to data governance.
A start may be to revise Section 230 so that all online service providers have the same legal obligations and liabilities as other media companies, ensuring that the activities of the largest online service firms are monitored and held to account in the same way banks are.
Michael Mudd is a digital trade economist and an appointed IT standards expert to JTC-1 of the ISO. His career included senior technology management positions for an international bank and a major physical commodity trading company, before heading up the public policy office in Asia-Pacific for CompTIA, a U.S.-based IT industry business association.