
#AI assigned growing responsibility for #PrivacyCompliance


Though perhaps few realized it at the time, when new European rules regarding data handling took effect in 2018, the floodgates had opened. At that point in time, the General Data Protection Regulation (GDPR) was a novel attempt at forcing those who collect personal consumer data to store it securely and delete it promptly upon request of the data’s owner, writes Samuel Bocetta. 

That last bit is important. The GDPR was the first and still most credible public step towards defining ownership of consumer data. To the chagrin of online operations everywhere, GDPR put legal teeth into the idea that data was not a business asset but property of an individual. This was a seismic shift in thought at the time.

Fast forward a couple of years and data regulations are multiplying like bunnies at a Roman orgy. First, there was the California Consumer Privacy Act (CCPA), an Americanized version of the GDPR, though with a less vigorous fine schedule. And now a Gartner survey lists at least 60 jurisdictions from various corners of the world that are hard at work legislating their own data privacy rules.

For website owners trying to make progress with their marketing, it’s hard enough to meet the requirements of one regulation, much less a herd of them. Every hour spent fiddling with data handling obligations is one less hour to devote to doing the things that actually contribute to business operations. If ever there was a catch-22, this is it.

A funny little thing called AI...

Unless, like that dude in the Geico commercial, you’ve been living under a rock, you’re probably aware that artificial intelligence (AI) has become a bit of a thing in technology circles. In school, at work, and online, individuals are learning about AI and its uses in droves. Sometimes used interchangeably with the term machine learning (ML), the idea is simple: feed an algorithm immense amounts of data, and before long it “learns” to make intelligent decisions.

The reality is that AI/ML is already being put to work fighting fraud in various forms. Maybe you use cardless transactions to make payments. AI is involved and has been incorporated into many leading merchant service POS systems already. The heightened security of this mode of paying is possible because algorithms can sift through an immense amount of data quickly, looking for anything that appears inauthentic and flagging it.
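As a toy illustration of that flagging logic, and emphatically not any vendor's actual system, one can score a new transaction against a customer's purchase history and flag statistical outliers. A minimal sketch:

```python
from statistics import mean, stdev

def flag_suspicious(amounts, new_amount, threshold=3.0):
    """Flag a transaction whose amount deviates sharply from history.

    A toy stand-in for the anomaly detection described above; real
    merchant systems weigh many more signals than the amount alone.
    """
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return new_amount != mu
    # Flag anything more than `threshold` standard deviations from the mean
    return abs(new_amount - mu) / sigma > threshold

history = [12.50, 8.99, 15.00, 9.75, 11.20, 13.40]
print(flag_suspicious(history, 10.00))   # typical purchase: False
print(flag_suspicious(history, 950.00))  # extreme outlier: True
```

Production systems use far richer models (location, timing, merchant category, device fingerprints), but the core pattern — learn what "normal" looks like, flag deviations — is the same.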


Though the power behind advancements like this makes some people queasy, especially those who have seen the movie Terminator 2, the idea of computers becoming self-aware is not as lunatic as it used to be. The good news is that the growing use of AI promises relief from tasks that were heretofore defined as drudgery or too complex for humans to efficiently handle. 

When it comes to trying to stay abreast of all the new privacy rules taking effect, we might have found something that AI is perfect for. In fact, Gartner predicts that 40% of privacy compliance technology will rely on AI by 2023.

Subject rights requests are a big deal

While there are approximately one million and one things to pay attention to when it comes to data compliance, perhaps the biggest headache of all relates to Subject Rights Requests (SRR). This refers to the fact that those who collect individual personal data are required to respond within a time limit when an individual makes a data-related request, typically to see a copy of their information or to have it removed completely from a database.

If a site owner doesn’t reply to the request within a reasonable time frame, they can be fined. The problem is that responding to these requests is a massive undertaking for some companies. According to the survey, it can take 2-3 weeks for the data protection officer to deal with a single request. The average cost to do this is $1,400. 

This is time-consuming grunt work. Too bad there isn’t a data privacy tool that could do it. But wait, there is!

Next-generation AI-powered tools are allowing companies to finally catch up on their backlogs by cranking through volumes of SRRs in a fraction of the time.
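The mechanical core of that work — tracking each request against its deadline — is exactly the part that automates well. A minimal sketch, assuming a simplified 30-day response window (real compliance tooling must handle extensions, identity verification, and per-regulation deadlines):

```python
from datetime import date, timedelta

# Simplified: GDPR allows roughly one month to respond to a request
RESPONSE_WINDOW = timedelta(days=30)

def srr_status(received, today):
    """Classify a subject rights request by how close its deadline is."""
    deadline = received + RESPONSE_WINDOW
    remaining = (deadline - today).days
    if remaining < 0:
        return "overdue"
    if remaining <= 7:
        return "urgent"
    return "on track"

print(srr_status(date(2020, 3, 1), date(2020, 3, 5)))   # on track
print(srr_status(date(2020, 3, 1), date(2020, 3, 27)))  # urgent
print(srr_status(date(2020, 3, 1), date(2020, 4, 15)))  # overdue
```

The AI part of commercial tools goes further — locating a person's data across systems and drafting the response — but even simple triage like this beats a spreadsheet and a prayer.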

So are data breaches

Where a company can really get into trouble with privacy regulators is in the realm of data breaches. The philosophy of GDPR and other legislation is that the responsibility to prevent breaches lies with the company that collects and stores the data. Fines for dereliction of duty can be massive. The GDPR allows for a penalty of up to €20 million or 4% of a company’s annual global revenue, whichever is greater.
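The "whichever is greater" rule means the penalty ceiling scales with company size; a quick sketch of the arithmetic:

```python
def gdpr_max_fine(annual_revenue_eur):
    """Upper bound of a GDPR fine: €20M or 4% of annual revenue,
    whichever is greater (a simplification of Article 83)."""
    return max(20_000_000, 0.04 * annual_revenue_eur)

print(gdpr_max_fine(100_000_000))    # 4% is €4M, so the €20M floor applies
print(gdpr_max_fine(2_000_000_000))  # 4% is €80M, which exceeds the floor
```

For a €100M-revenue firm the €20M floor dominates; for a €2B firm, the 4% figure (€80M) does — which is why large platforms took GDPR so seriously.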

And in case you thought that hackers had decided to have mercy on the internet, you’d be wrong. Statistics show that 60% of companies have had some sort of data breach since 2017. And a breach kicks off another personnel-intensive task: under GDPR, the breached company must notify its supervisory authority within 72 hours of discovering the breach, and affected individuals without undue delay.

One common method of breaching a database is hacking or phishing the password. For a while, about two seconds, it seemed that password manager software might be the answer to all our problems until a major security flaw came to light. Luckily it can be avoided. Meanwhile, hackers are working furiously on sophisticated AI programs; one such system was shown to guess roughly a quarter of the passwords in a leaked set of 43 million LinkedIn profiles. Of course, the good guys are using the same advanced algorithm technology to try and stop them. This is a battle that isn’t close to being won by either side yet.

Privacy tech AI - the next generation

Since GDPR, there has emerged a common theme among those charged with securing a website or database against hacker intrusion. They need help, especially in the area of privacy tools. Perhaps that help is arriving with the next generation of AI, though it’s not quite here yet. 

Those who have fallen madly, deeply in love with the idea of embedding AI into security tools like to ignore a recent test in which researchers tricked an AI-powered antivirus program into classifying malware as goodware simply by attaching a few strings of code cadged from an online game. That’s no reason to throw the baby out with the bathwater: online security should still include a malware scanner as well as an antivirus suite. Just don’t fall for the idea that they are bulletproof because the promotional material brags about AI. Not quite there yet.

For another AI use, let’s pay a visit to privacy policies.

We all love to carefully read the privacy policies associated with any new app we install, right? Sure you do. The mind-numbing verbiage is only exceeded in ridiculousness by the excessive length of the document. The typical response amongst humans is to speed scroll to the bottom and check the “accept” box as fast as possible. 

Currently in beta testing, there’s an algorithm-powered website called Guard that lets you suggest privacy policies you’d like it to inspect. There’s no guarantee when it will get around to your suggestion, if at all, but be patient. It’s learning.

Though created for the consumer market, it could eventually be helpful to businesses as well. Guard works by reading through a privacy policy sentence by sentence and alerting the user to the threats it could pose to their privacy.
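Guard's actual model is learned from data, but the sentence-by-sentence scanning it performs can be illustrated with a deliberately naive keyword version (the risk phrases below are hypothetical, not Guard's):

```python
import re

# Hypothetical risk phrases for illustration; Guard's real model is
# learned from user input, not a hand-written keyword list.
RISK_PHRASES = ["third parties", "sell your data", "retain indefinitely",
                "targeted advertising", "share your information"]

def scan_policy(policy_text):
    """Return the sentences containing a phrase associated with privacy risk."""
    # Split into sentences at whitespace that follows end punctuation
    sentences = re.split(r"(?<=[.!?])\s+", policy_text)
    return [s.strip() for s in sentences
            if any(phrase in s.lower() for phrase in RISK_PHRASES)]

policy = ("We collect your email address. "
          "We may share your information with third parties. "
          "You can delete your account at any time.")
print(scan_policy(policy))  # flags only the second sentence
```

The hard part, of course, is replacing that keyword list with a model that understands context — which is precisely what Guard is trying to learn from its users.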

For now, you can visit the website and learn some very interesting information about some companies’ privacy policies. Twitter logged a terrible score of 15%, which earned it a D grade. In other words, CEO Jack Dorsey’s social media product is crap-awful at respecting user privacy and protecting data.

Use with caution!

Final thoughts 

The interesting thing about Guard is not that it’s especially helpful right now. Most of us already realize that Twitter is a bad public citizen when it comes to data protection. The interesting thing is that Guard is like a baby taking its first steps. We’re watching the process of teaching a machine about privacy concepts as it happens.

While on the site, you’ll be asked to take a few minutes to respond to a survey that presents questions based on a side-by-side comparison of privacy policy snippets and asks you to choose which best represents your concept of privacy. Each completed survey is a data point. Eventually, there will be millions and Guard will have a much better grasp on what privacy means to a human.

This is machine learning in action. Expect it, and other projects like it, to yield great rewards in the field of privacy compliance. For now, hang on until AI catches up with us.   

EU Reporter publishes articles from a variety of outside sources which express a wide range of viewpoints. The positions taken in these articles are not necessarily those of EU Reporter.
