Introduction
Every now and then I like to take a step back from the direct examples of automation typically showcased with the podcast guests and look at connected issues relevant to the topic of automation. This blog post will do just that and look at the larger topic of data privacy and protection. Data privacy especially has been getting a lot of attention in recent years, with events like Edward Snowden’s whistleblowing, Facebook’s Cambridge Analytica scandal, and the numerous data breaches that impact millions of people every year. But more importantly, as I hope this podcast and blog have been showing, as our world becomes both more digitised and more automated, data is becoming more and more central to the way the world operates, and so vulnerabilities in both our businesses and personal lives need to be taken more seriously. I’m sure we have all been faced with a new app, software update, or website we want to use and instantly skipped past the lengthy, legally phrased user and data agreements. But I think that as we continue to adopt more digital and automation tools, it will become more and more important to reconsider that instant acceptance.
As I see it, there are really three levels of concern: basic data theft, threats to individual freedom or wellbeing, and more unnerving future implications. I’ll explore all three, but first I think it’s important to set a bit of context.

Data As The New Oil Of The 21st Century
The phrase was first coined by the mathematician Clive Humby in 2006, but it was in 2017 that The Economist popularised the concept with a story titled “The world’s most valuable resource is no longer oil, but data”.
As the titans of industry have shifted over the last 100+ years from oil producers and other raw-material-based organisations to digital companies like Google, Facebook, and Microsoft, data is now seen as the more valuable asset. And there is no sign that this will stop. We live in an increasingly digital world, where the digital and the real are constantly blending. The most obvious example is the smartphone revolution, which has given nearly the entire planet’s population access to information and communication abilities that would be the envy of past governments and kings. Smart cities and smart homes are increasing our interaction with digital tools in our day-to-day lives, encouraging or even necessitating digital capabilities to get around, order and pay for goods, and take part in social events. And now we are seeing more immersive digital technologies like virtual reality and synthetic media entering both the home and the office. Needless to say, our society and lives are becoming more defined by our digital footprints every year, and the data generated is growing alongside them in both quantity and value. But as data use continues to grow, what are the risks? And why are data privacy and protection so important?

Basic Data Theft
Over the last decade, billions of people have had their personal data stolen in breaches of various companies and platforms. As systems and platforms grow larger, a single breach can impact not a few million but hundreds of millions of users at once. The most obvious problem is the exposure of credit card and identity information, but depending on the compromised system, lives can be negatively impacted in other ways. Take, for example, the breach of the extramarital-affairs platform Ashley Madison: the subsequent data release led to extortion and public humiliation, and even a few suicides were linked to it. The well-known data breaches typically impact individuals using a specific service, but as companies become more reliant on digital and automation tools, having their operational data stolen can also erode their competitive edge.
More socially impactful, however, was the watershed moment that truly brought privacy issues into public awareness.

National and Citizen Issues
Cambridge Analytica, a data analysis firm, purchased Facebook data on tens of millions of Americans without their knowledge. The data in question was gathered through an app called thisisyourdigitallife, which offered Facebook users personality quizzes. Those who downloaded the app voluntarily turned over personal data about what they liked, where they lived, and in some cases who their friends were. From the 87 million people whose data was collected, 30 million psychological profiles were created to build a tool that was used on US voters to help elect Donald Trump as president. With these profiles, targeted advertisements could be used to suppress voting intentions.
A paper published in 2013 warned of the dire consequences of such actions, essentially stating that they “pose a threat to an individual’s well-being, freedom, or even life.” In retrospect, many have argued that using Facebook’s data like this undermined the democratic system by stripping individuals of the ability to make unbiased decisions. In other words, through data, profiles, and targeted advertisements, people’s decisions to vote could be manipulated one way or another, benefiting, and possibly leading to the election of, the candidate and party with access to the most data. This gets even more problematic if we zoom out beyond the US context.
National Governments Can Use Data To Know Your Political Preferences
During the turmoil of the early days of the pandemic, when all focus was on the virus, Hungary provided one example of the importance of data privacy: authoritarian powers were granted to the sitting Prime Minister, silencing criticism of the government by jailing those deemed to be spreading misinformation. As was voiced back in episode 26, the main fear is that an authoritarian government with no respect for privacy can use data about individuals to ascertain political leanings or other ideological beliefs opposed to the sitting power, and then use that information to harm those individuals’ lives. And this is relevant to everyone, given the larger trend of democratic decline across the world, which was underway even before the pandemic hit.
“An annual country-by-country assessment of political rights and civil liberties released by Freedom House in March found that the share of countries designated Not Free has reached its highest level since the deterioration of democracy began in 2006, and that countries with declines in political rights and civil liberties outnumbered those with gains by the largest margin recorded during the 15-year period. The report downgraded the freedom scores of 73 countries, representing 75 percent of the global population.”
So this is going to be problematic for most people on the planet, but something with the potential to impact everyone who uses technology in the future might be of even greater concern.

Blending Man and Machine
The data discussed so far has been fairly peripheral to an individual’s identity compared to what is coming next. Part of today’s fourth industrial revolution holds that the lines between the physical, digital, and biological are being blurred, and will blur further into the future. We are beginning to see this with the increased use of personal bio-trackers like Fitbits, smart rings, and smartwatches, and with ever greater sensor capabilities in our smartphones. Furthermore, the coming wave of biosensors that will track minute bodily functions, blood pressure, weight and calorie levels, hormones, vitamins, and more, all in the name of preventative health, will generate an entirely new field of data containing our most personal information. The benefits are of course astounding: early detection of cancer, monitoring of glucose and insulin levels, long-term heart rate monitoring, all of it able to detect, diagnose, prevent, and treat diseases like never before. But the risks are often less apparent. The same issues hold true as for the more peripheral data collection discussed above, but now, with biological information, it is possible that people will reduce their life and health choices to a quantified list of checkmarks needed for optimal performance, essentially losing touch with how to live life.
Secondly, health insurers will have much more power to know which clients will be costly and which won’t. Based on accumulated data, it is easy to imagine premiums being charged to people with a higher probability of health issues in later years, even though they currently appear perfectly fine. This would exacerbate social and economic inequalities, as those with less personal wealth typically also deal with more health concerns.
This power could also be leveraged by other corporations, some of which already use hiring Artificial Intelligence that analyses minute facial expressions to determine whether a candidate should get a job. Access to personal health data, whether through leaks like Cambridge Analytica’s or in a more open and legal context, would put potential employees at a further disadvantage. As organisations try to mitigate risk, employers who could learn of potential health complications beforehand might be unwilling to take on candidates who have them.

Final Thoughts
Lastly, looking at the possible final frontier of this trend, we arrive at a possibility that lies further in the future. Recently, Wired published a story about a robotic arm that could twist, grasp, and feel as its operator controlled it. The catch was that the operator had been completely paralysed since he was 19, and was controlling the robot simply by thinking about the actions, which were relayed to the robot through electrodes surgically implanted in his brain.
This isn’t new; companies like Neuralink have been popularising brain-computer interface (BCI) technology for a few years already. The problem is as simple as it is profound: data can now be generated about what and how we are thinking. Though BCI is still at a very early stage, the examples of data breaches, Cambridge Analytica, declining national freedoms, bio-tracking issues, and hiring AI should leave us uneasy about generating ever more personal data without greater degrees of privacy and protection.
Thankfully, there are initiatives growing today in response to events like Cambridge Analytica: Europe’s GDPR, Apple’s stronger stance on privacy features, California’s Consumer Privacy Act, China’s new data security initiative, and many more. In a 2020 report, Gartner projected that 65% of the world’s population will have its personal information covered under a privacy regulation by 2023, up from just 10% in 2020. But even with this positive news, I think it is still up to individuals and organisations to keep thinking about data privacy as technologies and automated digital tools enter our lives more and more.