
Artificial intelligence

#AI assigned growing responsibility for #PrivacyCompliance

Guest contributor


Though perhaps few realized it at the time, when new European rules regarding data handling took effect in 2018, the floodgates had opened. At that point in time, the General Data Protection Regulation (GDPR) was a novel attempt at forcing those who collect personal consumer data to store it securely and delete it promptly upon request of the data’s owner, writes Samuel Bocetta. 

That last bit is important. The GDPR was the first and still most credible public step towards defining ownership of consumer data. To the chagrin of online operations everywhere, GDPR put legal teeth into the idea that data was not a business asset but property of an individual. This was a seismic shift in thought at the time.

Fast forward a couple of years and data regulations are multiplying like bunnies at a Roman orgy. First came the California Consumer Privacy Act (CCPA), an Americanized version of the GDPR, though with a less vigorous fine schedule. And now a Gartner survey lists at least 60 jurisdictions from various corners of the world that are hard at work legislating their own data privacy rules.

For website owners trying to make progress with their marketing, it’s hard enough to meet the requirements of one regulation, much less a herd of them. Every hour spent fiddling with data handling obligations is one less hour to devote to doing the things that actually contribute to business operations. If ever there was a catch-22, this is it.

A funny little thing called AI...

Unless, like that dude in the Geico commercial, you’ve been living under a rock, you’re probably aware that artificial intelligence (AI) has become a bit of a thing in technology circles. In school, at work, and online, individuals are learning about AI and its uses in droves. Sometimes used interchangeably with the term machine learning (ML), the idea is simple: feed an algorithm immense amounts of data. Before long, it “learns” to make intelligent decisions.

The reality is that AI/ML is already being put to work fighting fraud in various forms. Maybe you use cardless transactions to make payments. AI is involved, and has already been incorporated into many leading merchant service POS systems. The heightened security of this mode of payment is possible because algorithms can sift through an immense amount of data quickly, looking for anything that appears inauthentic and flagging it.
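In grossly simplified form, that kind of screening amounts to flagging statistical outliers in a stream of transactions. Real payment systems use far richer models and features, so the threshold and data below are illustrative only:

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Flag transaction amounts more than `threshold` standard
    deviations from the mean -- a crude stand-in for the far more
    sophisticated models real payment systems use."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# Mostly small purchases, plus one suspicious outlier.
history = [12.50, 8.99, 15.00, 9.75, 11.20] * 20 + [4999.00]
print(flag_anomalies(history))  # the 4999.00 charge gets flagged
```

The point isn't the formula; it's that a machine can apply such a test to millions of transactions per second, something no fraud analyst could match.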

Though the power behind advancements like this makes some people queasy, especially those who have seen the movie Terminator 2, the idea of computers becoming self-aware is not as lunatic as it used to be. The good news is that the growing use of AI promises relief from tasks that were heretofore defined as drudgery or too complex for humans to efficiently handle. 

When it comes to trying to stay abreast of all the new privacy rules taking effect, we might have found something that AI is perfect for. In fact, the Gartner survey estimates that a full 40% of privacy compliance might be turned over to machines by 2023.

Subject rights requests are a big deal

While there are approximately one million and one things to pay attention to when it comes to data compliance, perhaps the biggest headache of all relates to Subject Rights Requests (SRR). This refers to the fact that those who collect individual personal data are required to respond within a time limit when an individual makes a data-related request, typically to see a copy of their information or to have it removed completely from a database.

If a site owner doesn’t reply to the request within a reasonable time frame, they can be fined. The problem is that responding to these requests is a massive undertaking for some companies. According to the survey, it can take 2-3 weeks for the data protection officer to deal with a single request. The average cost to do this is $1,400. 

This is time-consuming grunt work. Too bad there isn’t a data privacy tool that could do it. But wait, there is!

Next-generation AI-powered tools are allowing companies to finally catch up on their backlogs by cranking through volumes of SRRs in a fraction of the time.
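The first step such tools automate is triage: deciding whether an incoming request is for access, erasure, or correction, and routing it accordingly. A hypothetical rule-based sketch is below; commercial tools pair this sort of routing with ML models trained on labelled requests, and the categories and keywords here are assumptions, not any vendor's actual taxonomy:

```python
# Hypothetical keyword rules for routing Subject Rights Requests.
SRR_RULES = {
    "access": ["copy of my data", "what data do you hold", "access"],
    "erasure": ["delete", "remove", "forget", "erase"],
    "rectification": ["correct", "update", "wrong"],
}

def triage(request_text):
    """Return the first matching request category, or fall back
    to human review when no rule fires."""
    text = request_text.lower()
    for category, keywords in SRR_RULES.items():
        if any(k in text for k in keywords):
            return category
    return "manual-review"

print(triage("Please delete everything you have on me."))  # erasure
```

Even this toy version shows why automation pays off: the classification that costs a data protection officer time and money happens in microseconds, leaving humans only the genuinely ambiguous cases.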

So are data breaches

Where a company can really get into trouble with privacy regulators is in the realm of data breaches. The philosophy of GDPR and other legislation is that the responsibility to prevent breaches lies with the company that collects and stores the data. Fines for dereliction of duty can be massive: the GDPR allows for a penalty of up to €20 million or 4% of a company’s annual global turnover, whichever is greater.

And in case you thought that hackers had decided to have mercy on the internet, you’d be wrong. Statistics show that 60% of companies have had some sort of data breach since 2017. And a breach kicks in another personnel-intensive task: the breached company must notify its supervisory authority within 72 hours of becoming aware of the breach, and inform every person whose data might have been compromised without undue delay.

One common method of breaching a database is hacking or phishing the password. For a while, about two seconds, it seemed that password manager software might be the answer to all our problems, until a major security flaw came to light. Luckily, that flaw can be mitigated. Meanwhile, hackers are working furiously on sophisticated AI programs, some of which have been shown to guess roughly a quarter of the passwords in a set of 43 million leaked LinkedIn credentials. Of course, the good guys are using the same advanced algorithms to try to stop them. This is a battle that isn’t close to being won by either side yet.

Privacy tech AI - the next generation


Since GDPR, there has emerged a common theme among those charged with securing a website or database against hacker intrusion. They need help, especially in the area of privacy tools. Perhaps that help is arriving with the next generation of AI, though it’s not quite here yet. 

Those who have fallen madly, deeply in love with the idea of embedding AI into security tools like to ignore a recent test in which researchers managed to trick an AI-powered antivirus program into classifying malware as goodware simply by appending a few strings of code cadged from an online game. That’s no reason to throw the baby out with the bathwater: online security should still include a malware scanner as well as an antivirus suite. Just don’t fall for the idea that they are bulletproof because the promotional material brags about AI. They’re not quite there yet.

For another AI use, let’s pay a visit to privacy policies.

We all love to carefully read the privacy policies associated with any new app we install, right? Sure you do. The mind-numbing verbiage is only exceeded in ridiculousness by the excessive length of the document. The typical response amongst humans is to speed scroll to the bottom and check the “accept” box as fast as possible. 

Currently in beta testing, there’s an algorithm-powered website called Guard that lets you suggest privacy policies you’d like it to inspect. There’s no guarantee when it will get around to your suggestion, if at all, but be patient. It’s learning.

Though created for the consumer market, it could eventually be helpful to businesses also. Guard works by reading through a privacy policy presented to it, sentence by sentence, and alerting the user as to threats that could be posed to the user’s privacy. 
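Guard's actual model and training data aren't public, but the sentence-by-sentence approach it describes can be sketched with a toy flagger: split the policy into sentences, then surface any that contain phrases a privacy-conscious reader would worry about. The red-flag list here is purely illustrative:

```python
import re

# Illustrative red-flag phrases -- not Guard's real vocabulary.
RED_FLAGS = [
    "third parties", "sell your", "share your data",
    "retain indefinitely",
]

def scan_policy(policy_text):
    """Split a policy into sentences and return those containing
    a red-flag phrase."""
    sentences = re.split(r"(?<=[.!?])\s+", policy_text)
    return [s for s in sentences
            if any(flag in s.lower() for flag in RED_FLAGS)]

policy = ("We collect your email address. "
          "We may share your data with third parties for marketing. "
          "You can delete your account at any time.")
for sentence in scan_policy(policy):
    print("FLAGGED:", sentence)
```

A real system replaces the hand-written phrase list with a trained classifier, which is exactly what the survey data described below is meant to feed.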

For now, you can visit the website and learn some very interesting information about some companies’ privacy policies. Twitter logged a terrible score of 15%, which earned it a D grade. In other words, CEO Jack Dorsey’s social media product is crap-awful at respecting user privacy and protecting data.

Use with caution!

Final thoughts 

The interesting thing about Guard is not that it’s especially helpful right now. Most of us already realize that Twitter is a bad public citizen when it comes to data protection. The interesting thing is that Guard is like a baby taking its first steps. We’re watching the process of teaching a machine about privacy concepts as it happens.

While on the site, you’ll be asked to take a few minutes to respond to a survey that presents questions based on a side-by-side comparison of privacy policy snippets and asks you to choose which best represents your concept of privacy. Each completed survey is a data point. Eventually, there will be millions and Guard will have a much better grasp on what privacy means to a human.
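Pairwise "which snippet do you prefer?" answers like these can be turned into a ranking in several ways; the crudest is simply tallying wins, sketched below with made-up snippet names. A production system would more likely fit a statistical model such as Bradley-Terry to the same data:

```python
from collections import Counter

# Each survey answer is a (winner, loser) pair of policy snippets.
# Snippet names are invented for illustration.
answers = [
    ("snippet_a", "snippet_b"),
    ("snippet_a", "snippet_c"),
    ("snippet_b", "snippet_c"),
    ("snippet_a", "snippet_b"),
]

# Tally how often each snippet was preferred, then rank by wins.
wins = Counter(winner for winner, _ in answers)
ranking = [snippet for snippet, _ in wins.most_common()]
print(ranking)  # snippet_a ranks first: it won the most comparisons
```

Each completed survey adds a few more pairs to the pile; with millions of them, the aggregate preferences become a usable training signal for what "respects privacy" means to humans.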

This is machine learning in action. Expect it, and other projects like it, to yield great rewards in the field of privacy compliance. For now, hang on until AI catches up with us.   

Artificial intelligence

EIT Health says AI vital to protect EU health systems

Catherine Feore


On Wednesday (21 April) the European Commission presented new rules and actions aiming to turn Europe into the global hub for trustworthy Artificial Intelligence (AI). The first-ever legal framework on AI aims to guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU. 

A Europe fit for the Digital Age Executive Vice President Margrethe Vestager said: “On artificial intelligence, trust is a must, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”

Internal Market Commissioner Thierry Breton said: “AI is a means, not an end. It has been around for decades but has reached new capacities fueled by computing power. Today's proposals aim to strengthen Europe's position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.” 

We spoke to Jan-Philipp Beck, CEO of EIT Health, a ‘knowledge and innovation community’ (KIC) of the European Institute of Innovation and Technology (EIT). EIT Health has urged European healthcare providers to embrace AI and technology after the pandemic highlighted the fragility of healthcare systems.

The COVID-19 pandemic has accelerated adoption of AI in some areas, but broad impact remains sparse. EIT Health argues that advances in AI and technology can be of immense benefit to current healthcare systems and allow front-line workers to spend more time on patient care. A joint EIT Health and McKinsey report argues that AI automation could help alleviate workforce shortages, accelerate the research and development of life-saving treatments, and help reduce the time spent on administrative tasks. Activities that currently occupy between 20% and 80% of doctor and nurse time could be streamlined or even eliminated by using AI.

EIT Health has launched a new AI report, outlining the urgent need for a post-pandemic technological revolution to prevent EU health systems from struggling over the next decade.

Jan-Philipp Beck said: “The outcomes of the AI think tank report have given us clear and consistent messages on how to drive AI and technology forward within European healthcare systems. We already know that AI has the potential to transform healthcare, but we need to work quickly and collaboratively to build it into current European healthcare structures.

“The challenge of the pandemic has undoubtedly helped accelerate growth, adoption and scaling of AI, as stakeholders have fought to deliver care both rapidly and remotely. However, this momentum needs to be maintained to ensure that benefits to healthcare systems are embedded long-term and help them to prepare for the future – something which will benefit all of us.”


Artificial intelligence

Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence

EU Reporter Correspondent


The Commission proposes new rules and actions aiming to turn Europe into the global hub for trustworthy Artificial Intelligence (AI). The combination of the first-ever legal framework on AI and a new Coordinated Plan with Member States will guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU. New rules on Machinery will complement this approach by adapting safety rules to increase users' trust in the new, versatile generation of products.

A Europe fit for the Digital Age Executive Vice President Margrethe Vestager said: “On Artificial Intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”

Internal Market Commissioner Thierry Breton said: “AI is a means, not an end. It has been around for decades but has reached new capacities fueled by computing power. This offers immense potential in areas as diverse as health, transport, energy, agriculture, tourism or cyber security. It also presents a number of risks. Today's proposals aim to strengthen Europe's position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”

For years, the Commission has been facilitating and enhancing cooperation on AI across the EU to boost its competitiveness and ensure trust based on EU values. The new AI regulation will make sure that Europeans can trust what AI has to offer. Proportionate and flexible rules will address the specific risks posed by AI systems and set the highest standard worldwide. The Coordinated Plan outlines the necessary policy changes and investment at member states level to strengthen Europe's leading position in the development of human-centric, sustainable, secure, inclusive and trustworthy AI. You will find more information in the press release, Q&A document and factsheet, or by asking the chatbot.


Artificial intelligence

European strategy for data: What Parliament wants

EU Reporter Correspondent


Find out how MEPs want to shape the EU's rules for non-personal data sharing to boost innovation and the economy while protecting privacy. Data is at the heart of the EU's digital transformation that is influencing all aspects of society and the economy. It is necessary for the development of artificial intelligence, which is one of the EU's priorities, and presents significant opportunities for innovation, recovery after the Covid-19 crisis and growth, for example in health and green technologies.

Read more about big data opportunities and challenges.

Responding to the European Commission's European Strategy for Data, the Parliament called for people-focused legislation, based on European values of privacy and transparency, that will enable Europeans and EU-based companies to benefit from the potential of industrial and public data, in a report adopted on 25 March.

The benefits of an EU data economy

MEPs said that the crisis has shown the need for efficient data legislation that will support research and innovation. Large quantities of quality data, notably non-personal - industrial, public, and commercial - already exist in the EU and their full potential is yet to be explored. In the coming years, much more data will be generated. MEPs expect data legislation to help tap into this potential and make data available to European companies, including small and medium-sized enterprises, and researchers.

Enabling data flow between sectors and countries will help European businesses of all sizes to innovate and thrive in Europe and beyond and help establish the EU as a leader in the data economy.

The Commission projects that the data economy in the EU could grow from €301 billion in 2018 to €829 billion in 2025, with the number of data professionals rising from 5.7 to 10.9 million.

Europe's global competitors, such as the US and China, are innovating quickly and applying their ways of data access and use. To become a leader in the data economy, the EU should find a European way to unleash potential and set standards.

Rules to protect privacy, transparency and fundamental rights

MEPs said rules should be based on privacy, transparency and respect for fundamental rights. The free sharing of data must be limited to non-personal data or irreversibly anonymised data. Individuals must be in full control of their data and be protected by EU data protection rules, notably the General Data Protection Regulation (GDPR).

The Parliament called on the Commission and EU countries to work with other countries on global standards to promote EU values and principles and ensure the Union’s market remains competitive.

European data spaces and big data infrastructure

Calling for the free flow of data to be the guiding principle, MEPs urged the Commission and EU countries to create sectoral data spaces that will enable the sharing of data while following common guidelines, legal requirements and protocols. In light of the pandemic, MEPs said that special attention should be given to the Common European Health Data Space.

As the success of the data strategy depends largely on information and communication technology infrastructure, MEPs called for accelerating technological developments in the EU, such as cybersecurity technology, optical fibres, 5G and 6G, and welcomed proposals to advance Europe's role in supercomputing and quantum computing. They warned that the digital divide between regions should be tackled to ensure equal possibilities, especially in light of the post-COVID recovery.

Environmental footprint of big data

While data has the potential to support green technologies and the EU's goal to become climate neutral by 2050, the digital sector is responsible for more than 2% of global greenhouse gas emissions. As it grows, it must focus on lowering its carbon footprint and reducing e-waste, MEPs said.

EU data sharing legislation

The Commission presented a European strategy for data in February 2020. The strategy and the White paper on artificial intelligence are the first pillars of the Commission's digital strategy.

Read more about artificial intelligence opportunities and what the Parliament wants.

Parliament expects the report to be taken into account in the new Data Act that the Commission will present in the second half of 2021.

Parliament is also working on a report on the Data Governance Act that the Commission presented in December 2020 as part of the strategy for data. It aims to increase data availability and strengthen trust in data sharing and in intermediaries.

A European strategy for data 

Data Governance Act: European data governance 

