
PriCyai Magazine

Interview with Professor Daniel J. Solove, by PriCyai Magazine

In a thought-provoking interview with PriCyai Magazine on December 23, 2024, Professor Daniel J. Solove, a prominent expert in privacy law and data protection, shared his insights on the evolving privacy landscape, the challenges presented by artificial intelligence (AI), and the necessity for robust privacy regulations in the digital era. As a faculty member at George Washington University Law School, Professor Solove is widely recognized for his influential books Understanding Privacy and The Digital Person, which have significantly shaped contemporary discussions surrounding privacy and data security. His forthcoming book, On Privacy and Technology, to be published in February 2025, offers a concise overview of his thinking on privacy and AI.

Professor Solove began by articulating that privacy encompasses more than just secrecy or the confidentiality of information; it involves individuals retaining control over their data in an increasingly information-driven world. He pointed out that traditional privacy laws, originally designed to protect personal data through confidentiality, have become insufficient. Professor Solove emphasized that privacy should empower individuals to determine how their personal information is utilized and shared while safeguarding their autonomy in an era characterized by surveillance and data collection.

A key portion of the interview centered on the intersection of privacy and AI. Professor Solove discussed how AI and big data introduce new privacy risks, particularly as algorithms can analyze extensive amounts of personal data without explicit consent. He highlighted that AI systems, ranging from automated hiring tools to predictive policing, have the potential to make biased decisions that compromise individuals' privacy and fairness. While these technologies are innovative, they raise critical questions regarding consent, transparency, and accountability. Solove argued that legal frameworks must evolve to address these emerging challenges and ensure that privacy protections keep pace with technological advancements.

Concerning data protection laws, Professor Solove underscored the significance of regulations such as the General Data Protection Regulation (GDPR) in Europe, which seeks to grant individuals greater control over their data. However, he cautioned that while the GDPR represents a notable advancement, it is not a comprehensive solution. Privacy laws must adapt to account for emerging technologies such as AI, the Internet of Things (IoT), and biometric surveillance. Solove stressed the necessity for a dynamic and global approach to privacy, acknowledging that data flows transcend borders and that privacy risks cannot be contained solely by national laws.

Looking to the future, Professor Solove expressed cautious optimism regarding the increasing public awareness of privacy issues. While individuals are becoming more conscious of how their data is utilized, he emphasized the importance of privacy advocates remaining vigilant. New technologies will continue to present complex privacy risks, necessitating proactive legal reforms to ensure privacy protections evolve with technological progress. Professor Solove advocated for ongoing dialogue among lawmakers, technologists, and the public to establish legal frameworks that balance innovation with individual rights.

Q1 -

You’ve been a leading voice in privacy law for years. How do you see the privacy landscape evolving with the rapid advancements in artificial intelligence and machine learning?

Professor Daniel Solove -

Privacy has been on this trajectory for quite a while, long before AI captured the attention of the news media. The technologies behind AI are old technologies that are now supercharged because we have so much data, and computing power is much stronger. However, the privacy problems with AI and the privacy problems we are encountering today are an outgrowth of similar privacy problems we have experienced. Today, given the fact that there is so much data and such an extraordinary ability to collect, store, use, and transfer data, these privacy problems have become a lot more acute.
With each new technology, as we go about our lives, much of our data is being collected. Now, with modern computing power and the sophisticated algorithms used to analyze our data, much more about us can be inferred from that data. So, it becomes challenging for individuals to navigate this arena and decide what information they want to share with others or the world. It becomes almost impossible for an individual to control or determine that, so the law must significantly improve how it addresses these problems.

Q2 -

Can you share an example of a specific privacy case or legal development that has particularly shaped your thinking or research?

Professor Daniel Solove -

Instead of a case, I have a story. It involved a well-known incident at Target, a department store. Target decided to identify pregnant women early in their pregnancy, so they created an algorithm to predict whether consumers were pregnant based on their purchases. The store then sent them ads for baby products. At one point, a father complained to the store because baby product ads kept being sent to his house, and he didn’t know why. It turned out that his teenage daughter was pregnant, but she hadn't told him yet. This incident happened in 2011, but it is still relevant today. It shows how algorithms can infer sensitive data from fairly innocuous purchases. The information that led the algorithm to determine that people were pregnant involved purchases such as buying cotton balls or unscented products. People would likely never expect that when they purchase products like these, they might reveal much more about themselves, such as information about health and pregnancy. In today's world, keeping any information to yourself is hard. Even if you don’t reveal it, companies use AI and algorithms to figure out much more about you than you tell them.

Q3 -

As you've mentioned, privacy concerns have taken on a new level of urgency. And as AI systems become more integrated into our daily lives, what are some of the most significant privacy risks you believe these technologies pose?

Professor Daniel Solove -

Well, they pose several risks. In my paper, Artificial Intelligence and Privacy, I discuss the many issues that AI affects. Gathering data for AI by scraping it online has many challenging and problematic privacy implications. These companies are gathering every piece of data that people have available online and using it. The companies think it's free, like the air, but I don't think it is, and it creates privacy problems. I wrote a paper about this issue with Professor Woodrow Hartzog, The Great Scrape: The Clash Between Scraping and Privacy.
Also, we have issues where AI can make inferences about people that reveal things they don't expect to be revealed about themselves, as I mentioned in the previous question. AI is also used to predict how people might behave in the future, and these predictions are used to make critical decisions in people's lives. It could be determining whether someone should get a job or deciding what someone's sentence should be. These predictions are very problematic. I discuss these issues in a paper with Hideyuki Matsumi called The Prediction Society: AI and the Problems of Forecasting the Future.
AI also supercharges surveillance. We have a lot of surveillance cameras in the United States and everywhere else, but you need people to look at the footage to figure out what's going on, so surveillance capabilities are limited. With AI, the AI can analyze what's going on in the footage and single out certain instances based on what people want it to look for. That dramatically magnifies the impact of surveillance. All the problems of surveillance are enhanced and magnified to an exponential level.
These are just a few examples. In general, AI enhances, magnifies, and increases the volume of these problems. It makes things much more difficult.

Q4 -

Understandably, the government is not resting on this issue; there have been regulatory reforms. How well are current data protection laws like the GDPR and the CCPA equipped to address the challenges brought by AI and big data analytics? Are there gaps that need urgent attention?

Professor Daniel Solove -

The GDPR is the best and strongest privacy law, and even it is not entirely up to the task. The law in the United States is a patchwork of many laws that vary in strength and coverage. There are a lot of gaps in that system; it is very complicated and relatively incoherent. So, it's much worse than the GDPR.
Generally, when you look at privacy laws, they put way too much emphasis on individual control. Laws try to give people rights to access, correct, or delete their data or opt in or out of specific data uses. However, privacy laws rely far too heavily on these rights, which alone are far from enough to protect privacy. Most individuals don't read privacy notices, and even if they did, the notices would not be very informative about what is happening.
The big thing that people need to be able to do is to make a risk decision. When a person is asked to share data, the key question is: Are the benefits greater than the costs? What is the potential harm? I don't think people can make this decision. AI and data analysis are too complicated. Remember the Target story we just discussed? You might make innocuous purchases, and suddenly, an algorithm can determine something about your health that you didn't expect to be determined. How can people know what will be inferred about them without becoming computer scientists? There’s too much for individuals to learn to truly understand the risks of sharing their data with a company. What does their privacy program look like? How good is their privacy officer? How good are their privacy impact assessments? How good is their data security? What do their vendor agreements look like? How are they sharing data among all the different vendors? And then what is the privacy protection at those vendors? There’s just too much to know about how a company handles privacy for a person to meaningfully assess the risks of sharing their data. The laws put way too much onus on individuals to determine their privacy.
The problem with privacy laws these days is that they rely so heavily on the idea that we can give people these rights and that they can control their privacy. That's not the right approach. Protecting personal data is the right goal, but that doesn't mean individuals can manage it themselves.

Q5 -

Moving on to surveillance technologies. From facial recognition to predictive analytics, they are increasingly used by governments and private companies. What ethical considerations do you think should guide their development and use?

Professor Daniel Solove -

Many surveillance technologies are designed without privacy considerations and then unleashed onto the world, where anyone can use them for any purpose. The government can use them for whatever it wants; the law barely says anything about it. Greater oversight, accountability, and a fundamental framework for using these technologies must be put in place. They carry a lot of dangers and risks that should be addressed. The technologies should be designed to prevent abuse and misuse, and to facilitate oversight and accountability, but unfortunately those safeguards are not being built in.

Q6 -

Given the growing concern over data breaches and cyber-attacks, what role should privacy laws play in strengthening cybersecurity frameworks? How can we balance the need for security with the right to privacy?

Professor Daniel Solove -

Cybersecurity incidents and data breaches occur constantly. Every year seems to be dubbed the year of the data breach because the situation keeps worsening. The law takes the wrong approach. It focuses mainly on data breaches and punishes the companies that have them. Yes, the breached companies almost always could have done better. But many other players contributed to the breach. It takes a village to create a data breach.
Many actors contribute to a breach, and the law ignores most of them. It does not hold them accountable for their role, so the punishment is solely on the companies or organizations that have the breach. The marginal benefit of kicking them more is not that great. Many players throughout the system are making this problem much worse, and they get a pass. That's why we have so many data breaches: we're not holding everyone accountable.

Q7 -

You mentioned the concept of privacy by design earlier. This idea is often discussed in the context of technological innovation. What does it mean in practice, and how can companies integrate privacy protection into their AI and data collection systems from the outset?

Professor Daniel Solove -

Privacy by design means that technology is designed with privacy in mind. So, as it's being built, it is designed in ways that will better protect privacy. A lot of laws now require privacy by design, and a lot of companies say, hey, we are baking privacy into our designs. But their conceptions of privacy are often impoverished. They are very incomplete. Companies view privacy as just a few small things, often reducing it to data security, and they don't even know the difference between privacy and security. They'll throw in a few things about privacy and then say, look, we've designed for privacy. But the recipe is incomplete. It’s like making a pizza and forgetting the sauce and cheese. Privacy laws often don't provide enough guidance about what the recipe should be.
The law needs to be much more rigorous about designing for privacy. What should be considered? What is the recipe? We should ensure that companies address all the dimensions of privacy and all the potential problems rather than just a few. 

Q8 –

And turning to your work now, Professor Solove, you've written extensively about “The Digital Person.” How do you define this term, and what implications does it have for individual rights in the age of pervasive digital surveillance and data tracking?

Professor Daniel Solove -

The Digital Person is a book I wrote about 20 years ago. I used the term to describe the information collected and assembled about us into dossiers, what I call digital dossiers about people. This information is essentially your digital twin, an attempt to capture who you are based on the information you give off throughout your daily activities. This digital person is partly factual, partly fictional, and problematic in both dimensions.
It's problematic to the extent that it is factual, because many things about an individual become known to the organizations collecting all this information. Decisions that profoundly affect people's lives are being made based on this digital person, and people have very little power to do anything about it.
There's also the problem of it being partly fictional. The digital person is not 100% true. It attempts to understand a person from their data, but it is incomplete because it simplifies people. People are unique and different, and information about us doesn't always capture who we are. However, decisions are made based on that profile, which can have harmful effects if the information is wrong or if people are judged based on incomplete information that fails to capture who they are. So, there are problems both when the information is correct and accurate and when it is not.

Q9 -

Moving on to AI systems. As AI systems become more autonomous, do you foresee a future where the legal responsibility for privacy violations could shift from human actors to machines or algorithms? How do you think the law will need to adapt?

Professor Daniel Solove -

I'm skeptical that AI can solve all privacy problems or that we can achieve excellent privacy automation. Many privacy issues involve difficult contextual judgment calls. Sometimes, AI can do a good job and see things we don't see, but it often cannot make judgments the way humans can. Although AI attempts to simulate how humans think, it works quite differently. While AI can assist in protecting privacy, I am cautious about delegating too much power to it or placing too much hope that it will solve all the problems at the push of a button. It's not that easy.

Q10 -

Given your background in privacy, law, and technology, what is your perspective on the intersection of ethics, law, and AI? How can we ensure that AI technologies are developed and deployed in a way that respects privacy and human dignity?

Professor Daniel Solove -

The best way to do this is to think about these things comprehensively and thoughtfully, addressing the problems in full complexity and nuance. This involves a much more robust understanding of these issues, and the law should strive to embrace the existing knowledge and expertise.
Unfortunately, I often see privacy laws passed that use approaches or specific measures known not to work. Many great articles and books have made compelling arguments about why these measures don’t work. It's frustrating to see a law passed and legislators pat themselves on the back, saying, “Look at this great thing we did,” when they didn't even bother to read the works that show their law won’t work. It frustrates me when courts dismiss cases, saying there is no privacy harm, when there is compelling literature showing why there is harm. There are outstanding thinkers on privacy, great scholars and practitioners, who have thought deeply about these issues. I’m not asking policymakers to agree with all this work, but from what I see, it is clear that they are not bothering to read it.

Q11 -

And still on policy, what role do you think public policy, regulation, and self-regulation, particularly self-regulation, should play in shaping the future of privacy in the digital age? And how can we strike the right balance between innovation and protection in this regard?

Professor Daniel Solove -

Self-regulation does not work. It's like asking the fox to guard the henhouse or telling a shark to please become a vegetarian. Companies respond to incentives. In today's information economy, money is made by monetizing people's data, so companies are incentivized to gather and use a lot of personal data. So, relying on self-regulation is like asking a shark to go on a diet, which doesn't work. The incentives need to be changed. Companies can't regulate themselves.
And even if a few companies are good apples, there are always the bad apples. Throughout history, industries have always claimed to be able to regulate themselves. However, history has shown that self-regulation has never worked. Yet policymakers are continually fooled by this canard when companies propose self-regulation for privacy today.

Q12 -

All right, to the counterargument now. If, from your standpoint, self-regulation is not the way forward now, how can we ensure that AI technologies are developed and deployed in a way that respects privacy and human dignity by striking the right balance between innovation and protection?

Professor Daniel Solove –

I think we need real rigorous regulation and accountability.
We need both ex-ante and ex-post regulation. Ex-ante, we need regulators to look out for problems in advance. Ex-post, if you create AI or technologies that harm people, you need to pay for the harm you created. This creates an incentive to be careful. If you set the incentives right, companies will make safer products. We demand that companies make safe cars. You can't put out an unsafe vehicle; we have regulators making sure of that. And ex-post, if a defective, dangerous car causes harm, the manufacturer can be sued and held responsible. These days, though, we have this weird exceptionalism with technology: if it's technology, we say it's too complicated for us to tell companies what they should do, so we just let them do whatever they want, and then when they cause harm, the law often tries to find every way to give them a free pass because it is important they keep innovating. Imagine if this were done with cars. It would be absurd to let the car company do whatever it wants and then, when it causes harm, do nothing because holding the company accountable would inhibit its ability to innovate.

Q13 -

As an academic and practitioner, what advice would you give to companies striving to navigate the complexities of data privacy laws while innovating with AI?

Professor Daniel Solove -

I think having an excellent privacy team and giving them the resources they need to do their jobs would be great. However, there's no easy button on this issue. Companies need to hire competent privacy professionals who will spend their time examining the issues and reading the literature. There is a lot of great, practical, thoughtful information about potential harms, solutions, and other topics. We need people to analyze this material; it just can't readily be ingested into some automated system that spits out an easy answer. These are very tricky ethical questions, very tricky questions of balancing and values. We need humans who are thoughtful and steeped in the humanities to think through these issues.
Unfortunately, the privacy teams of many companies are understaffed and under-resourced. They're not getting the budget they need. They don't have the staff they need. They don't have the power and clout they need in the organization. They're seen as a cost center and are barely given much of a budget. But their jobs are incredibly complex. Look at all the new laws being passed. Now, a privacy officer needs to consider not just the GDPR but all the different state laws, subject matter laws, health privacy laws, and biometric information privacy laws. They need to keep track of all the AI laws out there. Yet their budgets haven't gone up that much, while their responsibilities and the amount of information they need to master have gone up dramatically. So, the time has come for a reckoning. Privacy officers need resources, power, and clout to do this job well.

Q14 –

When we started this discussion, we looked back over the past ten years.
What will be the most critical legal and ethical challenges in privacy law and cybersecurity over the next decade?

Professor Daniel Solove -

I cannot point to one thing. It's everything, as I believe it's all related. Privacy and security are interrelated, and privacy has many dimensions. One of the biggest problems is the struggle to deal with all these dimensions; that is what makes it so hard. If there were one silver bullet, one easy thing, it would be easy to solve; we would have solved it by now. What makes it so hard is that everything is interrelated. Weak privacy means weak security. Weak security means weak privacy. You have to master AI and all the different problems that it causes, as well as all the other privacy problems out there, and design for things when companies lack a conception of privacy or have an incomplete one. The biggest challenge is dealing with the complexity of this bundle of different issues thoughtfully.

Q15 -

Earlier, you indicated that leaving privacy in consumers' hands tends to complicate rather than resolve the issues we are discussing. What steps can be taken to improve public understanding of data privacy issues in our increasingly digital world?

Professor Daniel Solove -

I hope that my work and that of other experts in the field help people understand what is happening. My short book, On Privacy and Technology, will be released in February 2025. The book explains what's happening and how to think about the problems. I hope people will learn about what's going on if they read the book and the many other great things written by others. The best thing an individual can do is demand that policymakers enact meaningful protections. Right now, what we're getting in the law is not that. Instead, we’re getting old ideas that don’t work. People cry out for meaningful privacy protection but are given warmed-over broth. People must return it to the kitchen and say, "We need you to pass something real.”

Q16 –

Congratulations on the forthcoming book in February 2025. You've authored numerous books on privacy and data security. What do you hope readers take away from your work, particularly regarding the balance between privacy, security, and innovation?

Professor Daniel Solove -

It's hard to boil it down to one point. It's taken me many books to explore this issue. But at least one takeaway is that individuals can't protect privacy alone. Many laws and companies say that if you care about your privacy and do all these things, you'll be okay. My answer is no, you're not going to be okay. It's an unsettling answer, but it's the truth. They want you to think that if you opt out of all these privacy notices, access the data, write to thousands of companies, and spend a ridiculous amount of time, you'll somehow protect your privacy. And if you don't do it, you're to blame. But you’re not to blame. The onus shouldn't be on individuals to protect their privacy. The onus should be on the companies collecting and using this data, reaping tremendous profits and benefits.
Companies have chosen to use your data and make a profit, but they don't care about the harm they're creating and don't want to be held accountable. That's really what it's about. 
We need to demand accountability for the harm companies cause when they collect and use personal data. And if we hold them accountable, guess what? They will make their products and services less risky and harmful. When the law finally demanded that car companies make safer cars, guess what happened? The vehicles became safer. With technology and privacy, it's possible to innovate not just to make money and a profit, but to protect privacy and make safe products that don't harm people. But the law currently isn’t asking companies to do it.

Thank you.

Visit www.danielsolove.com for more about Professor Daniel Solove.