
This November, your data privacy will be on the ballot, too

This fall, voters will be contemplating a risky decision — and I’m not referring to the presidential candidates. I am talking about the real issue this election season: protecting voter data privacy.

With the Cambridge Analytica scandal still fresh in memory, voters are concerned about the security of their personal information online. Ahead of this year’s election, which coincides with the rapid rise of artificial intelligence, voters should be aware that the risks to their personal data are heightened. There are also steps that businesses can and should take to protect customer data from bad actors.

During past election seasons, inboxes got fuller, campaign cold calls ticked up and mass texts exploded. We all know this to be true. It’s a familiar challenge that long predates the AI boom.

Just think back to the run-up to the November 2022 midterm elections. The Federal Election Commission approved a request from Google that allowed political senders to bypass spam filters on their way to Gmail inboxes. Essentially, a candidate, party or political action committee could apply for the pilot program and, as long as its emails didn’t contain content prohibited by Gmail’s terms of service (phishing, malware, illegal content), it was accepted into the program.

Once a political sender was accepted, its emails were no longer subject to the spam detection they would otherwise face. As a result, all of our inboxes started getting pretty darned full.

Now layer AI onto that.

Data from Invoca found that nearly 90 percent of marketers surveyed will have a dedicated budget for AI tools this year and plan to increase their investment in the technology. That means marketers will tap into increased productivity, streamlined processes and broader outreach. In a perfect world, these tools would mean marketers invest in deeper personalization, so that voters receive only tailored content that aligns with their political preferences.

But this isn’t a perfect world, and there is ample room for human error. Our inboxes are about to be stuffed with unwanted political ads, probably more than ever before.

Alongside the storm of political ads from well-meaning marketers, hackers and other bad actors are leveraging AI to ramp up their schemes this year. New data from McAfee found that 53 percent of consumers say AI has made it harder to spot online scams, particularly AI-generated deepfakes. Nearly three-quarters (72 percent) of American social media users also find it difficult to spot AI-generated content such as fake news.

Case in point: Ahead of the New Hampshire primary in January, a fake robocall impersonating President Joe Biden made the rounds among voters. Using AI voice-cloning technology, the falsified message urged Democratic voters to stay home, saying, “Your vote makes a difference in November, not this Tuesday.”

But for businesses, AI also carries some seemingly positive implications. Marketers now have new channels for reaching customers, including virtual and augmented reality, facial recognition and precise location identification. With these new means comes a far greater volume of data that marketers can leverage to personalize customer outreach. In theory, that is a positive impact of AI, but the data privacy implications grow in parallel. The long-term success of AI-powered marketing is therefore contingent on a company’s ability to maintain a technological infrastructure that protects consumer data and brand reputation.

To address these fears, legislators recently introduced the bipartisan American Privacy Rights Act of 2024, which aims to establish the first-ever federal standard for comprehensive data privacy and security regulation. The bill would give consumers individual controls over their data and impose related obligations on a wide range of corporations. These controls include the right to opt out of targeted advertising and certain algorithms, which, during an election year, offers a chance to significantly cut down on unwanted political advertising.

Regulation at the federal level could raise the stakes in the scale of fines and penalties, as well as the likelihood of enforcement. In this case, the proposed enforcement is threefold: the Federal Trade Commission, the states and individuals through a private right of action. Still, the fact that 93 percent of respondents to a recent study worry about the security of their online data points to a widespread lack of confidence in how personal information is protected.

Such a high level of concern can be attributed to several factors: frequent data breaches, a lack of transparency in data-handling practices and growing awareness of privacy issues. Addressing it is paramount to maintaining user trust and confidence, as well as to complying with data protection regulations.

The unprecedented growth of AI is fueling an onslaught of political ads; coupled with the move toward the first-ever U.S. federal data privacy regulations, it will create a perfect storm ahead of the 2024 presidential election. To protect voters — in other words, their customers — businesses are morally and legally obligated to safeguard their data, during the elections and beyond.

Nicky Watson is founder and chief architect of Cassie, a consent and preference management platform.
