Mr. Ronald Sarian is the Global Chief Privacy Officer at Ingram Micro and an Adjunct Professor of Information Privacy at USC Gould School of Law. With over two decades of experience as a top-rated business litigator, he transitioned to a corporate role, serving as General Counsel at eHarmony, where he ensured global data privacy compliance and implemented best-practice cybersecurity measures. After eHarmony’s acquisition, he became General Counsel at blockchain company OilR, advising on intellectual property, SEC issues, and capital raising. In 2019, he joined Constangy, Brooks, Smith & Prophete, where he led their Digital Workplace and Data Privacy Practice Group. In 2022, Mr. Sarian joined Ingram Micro, a public multinational IT company, where he oversees global data privacy compliance. As a recognized expert, he frequently speaks at industry events, including RSA Conference 2024, and has lectured on privacy, cybersecurity, and information governance.
“The views and opinions expressed by Mr. Ronald Sarian during this interview are his own and do not reflect the views or positions of his employer or the company he represents. Mr. Sarian speaks solely in his personal capacity, and any statements made should not be interpreted as official commentary on behalf of his company.”
In our interview with Mr. Ronald Sarian on February 9, 2025, he emphasizes the critical balance between robust cybersecurity and privacy. He notes that cybersecurity should take precedence due to the greater exposure from breaches, but that neither should be sacrificed for the other. He stresses the importance of proactive measures, such as comprehensive privacy and cybersecurity programs, employee training, and tools like data privacy impact assessments (DPIAs), which must be turbocharged when AI is involved. Mr. Sarian also highlights the importance of staying current with evolving regulations, such as the GDPR and CCPA, through continuous reading and by monitoring legislative developments and case law. He advises businesses to ensure their AI systems comply with the EU’s AI Act, focusing on risk assessment, transparency, and human oversight.
On cybersecurity for remote work, Mr. Sarian recommends using company devices, VPNs, and employee training to prevent phishing and malware. He advocates for clear data protection agreements and indemnity clauses for third-party cloud providers to mitigate risks. With the rise of deepfakes, Mr. Sarian suggests businesses implement multi-factor authentication and verify suspicious communications. Looking ahead, he sees increased enforcement of privacy laws and the challenges posed by quantum computing, urging companies to stay informed and adopt strong cybersecurity practices to address emerging risks.
Q1 -
Given high-profile breaches such as the AT&T, MOVEit, and Ticketmaster incidents, how would you recommend organizations balance the need for robust cybersecurity with strict data privacy practices?
Ronald Sarian, Esq
I recall that AT&T and Ticketmaster were cases of stolen credentials from subcontractors. MOVEit was a big deal; it was a zero-day attack. Cybersecurity and privacy are both vital, and both should be addressed up front and given maximum resources. They're two sides of the same coin. You can't have privacy without cybersecurity, and you need cybersecurity as the starting point of any analysis. So, I recommend having robust cybersecurity along with the appropriate privacy disclosures and protocols in place. I don't know how to balance them, as I think both are top priorities, with cybersecurity taking precedence in my opinion due to the greater exposure from data breaches. Privacy exposure, in contrast, typically depends on the laws in your location. For instance, privacy exposure is most significant in Europe, where real fines and penalties exist. That’s an issue. Cybersecurity is also essential given the class actions that follow major data breaches. We shouldn't sacrifice one for the other.
Q2 -
Many companies respond reactively to privacy and security incidents. What proactive measures can organizations take now to prevent future breaches and data misuse, particularly in the context of AI?
Ronald Sarian, Esq
Yes, you need comprehensive privacy and cybersecurity programs with all the correct protocols. For a privacy program, it’s essential to start with training employees. They first need to understand what constitutes personal information, which should be part of the training. Once they grasp this, then from a privacy perspective, whenever a new design is being developed or you are considering a third-party solution for a specific problem, it’s crucial to have a thorough process to assess whether you have adequate privacy protections.
Everything should begin with a Data Privacy Impact Assessment questionnaire. While it's somewhat standardized, the contents of that questionnaire are vital to determine the privacy impact of the solution. This questionnaire should be distributed to the relevant stakeholders within the company, first asking whether any personal information will be collected. If so, it should address what type of personal information is being collected, in what area, and who the data subjects are. It should also inquire whether the information is sensitive, such as social security numbers or health data. These questions are included in the DPIA. Additionally, you need to determine if the information collected pertains to European residents and whether there will be any data transfer outside of Europe to the United States or to countries lacking an adequacy decision. Such questions should be incorporated into the DPIA. Moreover, if the solution involves artificial intelligence, it necessitates additional questions aligned with the AI Act in Europe, which is considered the gold standard, similar to how the GDPR is regarded for privacy.
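To make the shape of such an intake questionnaire concrete, here is a minimal sketch in Python modelling the questions Mr. Sarian describes; the field names and follow-up flags are hypothetical illustrations, not an actual DPIA template.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DPIAQuestionnaire:
    """Illustrative DPIA intake form; fields mirror the questions described in the interview."""
    collects_personal_info: bool = False
    data_categories: List[str] = field(default_factory=list)   # e.g. ["contact details", "health data"]
    data_subjects: List[str] = field(default_factory=list)     # e.g. ["employees", "customers"]
    contains_sensitive_data: bool = False                      # e.g. social security numbers, health data
    involves_eu_residents: bool = False
    transfers_outside_eu: bool = False
    destination_has_adequacy_decision: bool = True
    uses_ai: bool = False                                      # triggers the additional AI Act questions

    def follow_ups(self) -> List[str]:
        """Return the escalations a privacy team might flag from the answers (illustrative only)."""
        flags = []
        if self.contains_sensitive_data:
            flags.append("Apply stricter safeguards for sensitive personal information")
        if self.transfers_outside_eu and not self.destination_has_adequacy_decision:
            flags.append("Standard Contractual Clauses and a Transfer Impact Assessment needed")
        if self.uses_ai:
            flags.append("Run the additional AI Act questions (risk tier, profiling, human oversight)")
        return flags
```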
The EU AI Act should be well understood. You should ask additional questions regarding AI, such as whether it involves profiling, automated decision-making, or other activities like sentiment analysis or social scoring, both of which may be prohibited. There are also high-risk activities, such as using AI for recruitment or processing resumes. High-risk activities are acceptable if you have the proper protocols in place, and the AI Act outlines specific protocols for high-risk processing.
Therefore, all these procedures must be established. If this is your case, your privacy department should assess the AI. They need to address all the high-risk questions outlined by the AI Act. This applies whether you’re designing in-house or relying on a third party; the requirements under the AI Act will differ based on your role as a provider, deployer, distributor, or importer. Should you be the provider, for example, you must ensure a risk management system, conduct an appropriate privacy review, guarantee data quality, and maintain the correct technical documentation. Human oversight might also be necessary. You must ensure robust cybersecurity measures, accuracy in data handling, and maintenance protocols. With all these measures in place, you may need to register and monitor the application.
Therefore, during your initial review of the privacy impacts of the solution, you need to raise all those questions and act accordingly, using the AI Act in the EU to the fullest extent possible. I apologize for being a bit long-winded. These are all crucial considerations when AI is involved, in addition to the usual factors regarding personal information.
Q3 -
With data privacy laws like GDPR and CCPA constantly evolving, how can organizations ensure global compliance, and what steps can they take to stay ahead of these regulatory changes?
Ronald Sarian, Esq
Read, read, read—be a dedicated reader.
You must subscribe to as many publications as possible to stay updated on what's developing. If you're not keeping up, you're missing out on the constantly evolving areas of privacy and cybersecurity law. These fields are among the most rapidly changing and fascinating areas of law. Every day, I read various periodicals and IAPP resources. There are countless materials available about current trends in privacy.
Everyone involved with privacy issues should take it upon themselves to read extensively because that's the only way to keep abreast of ongoing developments. As such, I review daily what's happening, what cases are emerging, and which rulings on specific privacy laws may be relevant. For instance, in California, we are witnessing an influx of new cases where aggressive class-action lawyers are attempting to navigate around privacy law, particularly focusing on wiretapping statutes. In California, privacy-related wiretapping cases often allege that chatbots did not disclose they were recording or transcribing conversations, and California is a two-party consent state. In other words, even though under the CCPA consent can be implied, they argue that explicit consent must be obtained before engaging with the chatbot. That is the argument, though it should not prevail for a number of reasons. We are still in the early stages of this issue, but some intriguing arguments are being presented, with many ongoing cases being litigated. It’s interesting to see this outside-the-box thinking. Awareness of which way the wind is blowing is therefore crucial, and it is essential to read all relevant materials. Lexology is another source I frequent; in a recent lawsuit the defendant's motion for summary judgment was denied, allowing the case to proceed. So you have to ask yourself, what can we learn from this? What steps should we take to protect ourselves from potential liabilities?
Q4 -
Privacy by Design is essential in today’s data-driven world. How can businesses incorporate this principle from the beginning when creating AI systems that manage sensitive personal information?
Ronald Sarian, Esq
Well, as I referenced earlier, whenever personal information is involved you must start with a data privacy impact assessment questionnaire to identify issues regarding privacy and AI, and to fully understand the situation. It’s crucial to clearly grasp what type of personal information will be involved. If it's sensitive information, you may have to adhere to stricter cybersecurity requirements. It's up to the privacy team within the organization to maintain appropriate documentation, starting with the DPIA and then moving on to a Data Protection Agreement.
Again, that's a GDPR requirement. California’s CCPA has similar requirements, but instead of referring to the parties as a “Controller” and a “Processor,” it is a “Business” and a “Service Provider.” However, it's usually better for a global organization to establish a consistent standard, even if some regions do not require it, as it serves a larger purpose. Therefore, having a formal Data Protection Agreement is beneficial whenever reasonably appropriate, where you'll define who the controller and processor are. It could also involve a situation with two independent controllers, joint controllers, or a processor to a controller. Several issues arise here that the organization's privacy professionals must analyze. You need to engage stakeholders in the company and ask for more information. Request the actual contract or Master Services Agreement so you can understand every detail regarding the use of personal information and ensure the appropriate documentation is in place.
Once you have sorted out the Data Protection Agreement, another consideration arises: whether data is being transferred out of Europe to a country that lacks an adequacy decision, such as the United States. If your company isn’t certified under the EU-US Data Privacy Framework, you must implement Standard Contractual Clauses, and the EU publishes four modules of these, depending on the actual roles of the parties. The SCCs are something you really can't change, though certain elements must be included, such as the jurisdictional law you want to apply and the competent supervisory authority in case of a dispute or issue. A Transfer Impact Assessment is required if data will be processed outside Europe or at another location. These procedures should be applied consistently across all operations. Maintain standardization based upon the GDPR to the greatest extent possible to reduce worry and simplify compliance. It’s a lot of work, but it's the most effective approach.
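For reference, the EU's 2021 Standard Contractual Clauses are organized into four modules keyed to the roles of the data exporter and importer. The small sketch below simply encodes that lookup with hypothetical role labels; choosing the correct module for a real transfer is a legal judgment, not a table lookup.

```python
from enum import Enum

class SCCModule(Enum):
    """The four modules of the EU Standard Contractual Clauses."""
    CONTROLLER_TO_CONTROLLER = 1
    CONTROLLER_TO_PROCESSOR = 2
    PROCESSOR_TO_PROCESSOR = 3
    PROCESSOR_TO_CONTROLLER = 4

# Illustrative lookup from (exporter role, importer role) to the applicable module.
MODULE_FOR_ROLES = {
    ("controller", "controller"): SCCModule.CONTROLLER_TO_CONTROLLER,
    ("controller", "processor"): SCCModule.CONTROLLER_TO_PROCESSOR,
    ("processor", "processor"): SCCModule.PROCESSOR_TO_PROCESSOR,
    ("processor", "controller"): SCCModule.PROCESSOR_TO_CONTROLLER,
}

print(MODULE_FOR_ROLES[("controller", "processor")])  # SCCModule.CONTROLLER_TO_PROCESSOR
```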
Q5 -
The EU’s AI Act promotes greater transparency as AI becomes more integrated into business operations. How can businesses ensure their AI systems comply with upcoming regulations and ethical standards?
Ronald Sarian, Esq
The EU AI Act adopts a risk-based approach with different risk categories. The unacceptable risk category indicates that specific AI systems cannot be used because Europe considers them a direct threat to fundamental rights, including social scoring or manipulating behavior through biometric data and analysis. The AI Act clearly outlines these concerns under Chapter 2, Article 5, detailing specific systems like certain types of facial recognition that fall under this unacceptable risk category. If an AI system appears there, it is illegal.
Next, there is the high-risk category. For example, using AI in recruitment to analyze resumes must avoid discrimination, which can be challenging. Similarly, systems for employee monitoring also fall within this high-risk dimension. To determine if an AI system is high risk, you begin with a DPIA. If something qualifies as high risk, you must consult the AI Act to decide the necessary actions. This will vary based on your role as a provider, deployer, or distributor, but if you're a provider or deployer, you must adhere to all the requirements I mentioned earlier. Review all those aspects that are outlined. Can you document your risk analysis? Are you ensuring quality assessment? Are you adequately disclosing what the AI does? Is human oversight provided where necessary? Do you have strong cybersecurity measures in place? Are you monitoring post-market registration requirements? These elements are all detailed in the AI Act.
The third category is limited risk, such as with chatbots. This involves transparency risk, meaning you should usually inform users if they’re engaging with AI-driven technology. Finally, there's low risk, such as text generation tools like ChatGPT, as well as spam filters and similar applications, which require minimal oversight but should be recognizable to users as AI-driven.
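A simplified way to picture this risk-based structure is as a lookup from use case to tier. The sketch below is purely illustrative: the use-case labels and the conservative default are assumptions for demonstration, and a real classification under the AI Act requires case-by-case legal analysis.

```python
from enum import Enum

class AIRiskTier(Enum):
    UNACCEPTABLE = "prohibited (Chapter 2, Article 5)"
    HIGH = "allowed only with the Act's protocols: risk management, documentation, human oversight"
    LIMITED = "transparency obligations, e.g. disclose that users are interacting with AI"
    LOW = "minimal oversight, but the AI-driven nature should still be recognizable"

# Hypothetical mapping of the examples mentioned above to their tiers.
EXAMPLE_TIERS = {
    "social scoring": AIRiskTier.UNACCEPTABLE,
    "behavior manipulation via biometric analysis": AIRiskTier.UNACCEPTABLE,
    "resume screening for recruitment": AIRiskTier.HIGH,
    "employee monitoring": AIRiskTier.HIGH,
    "customer service chatbot": AIRiskTier.LIMITED,
    "spam filtering": AIRiskTier.LOW,
}

def tier_for(use_case: str) -> AIRiskTier:
    """Look up the illustrative tier; default conservatively to HIGH so unknown cases get reviewed."""
    return EXAMPLE_TIERS.get(use_case, AIRiskTier.HIGH)
```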

Q6 -
The transition to remote work has introduced new cybersecurity and privacy risks, particularly when utilizing cloud platforms. What are the essential privacy and security strategies that businesses should embrace for hybrid work environments?
Ronald Sarian, Esq
After COVID, when everyone worked from home, it became a heyday for hackers. It's essential to have numerous safeguards in place when working from home because there are many vulnerabilities. One crucial safeguard is to always use company equipment, as the company typically installs its security software on its devices. Don't conduct business through your personal laptop or phone, and use a VPN whenever possible.
Phishing is a significant area where cybercriminals steal credentials or plant malware in a system. The best way to combat phishing is to provide adequate and ongoing training to your employees. Identifying a phishing attack is crucial, whether it occurs at home or at work. Many other issues can arise when working remotely. Avoid using someone else's Wi-Fi; stick to your own. Even when using a VPN, I advise against connecting to open Wi-Fi networks. Turn off Wi-Fi when you're not using it to avoid automatically connecting to an unknown network. Using open Wi-Fi is just a big risk. Use a privacy screen so that someone sitting next to you cannot view your screen. Failure to take these precautions when working remotely gives hackers a substantial advantage. Businesses lost a bit of control during COVID, but now that employees are coming back to the office, there is more oversight. You're using office systems and Wi-Fi, which are private lines with appropriate security measures.
Q7 -
With organizations increasingly relying on third-party cloud providers, how can they ensure these partners meet their privacy and security obligations? What should be included in contracts to protect against breaches?
Ronald Sarian, Esq
Returning to my earlier point, after you conduct a DPIA using the questionnaire, you should consult with stakeholders, and it is the responsibility of the privacy department to establish the necessary agreements. In addition to the contract, you should have a distinct Data Protection Agreement in place outlining all details, including the proper indemnity. Often, there will be indemnity agreements when utilizing an external provider. Indemnity agreements may be included in the Master Services Agreement. However, when addressing privacy issues, it's common for the Controller to require a higher cap if you are dealing with numerous data subjects or sensitive information; this is an important consideration.
Thus, you want to ensure that if an incident occurs, such as a hack of the cloud provider, they have the appropriate indemnity in place. As a controller, you could face repercussions from data authorities in Europe, which may result in significant fines. As you may have seen, penalties in Europe can reach billions of dollars, though these fines have primarily affected B2C companies. B2B companies are somewhat less vulnerable, but they should still implement all appropriate privacy protections.
In the case of privacy violations, a Controller might want a super cap on indemnity because the indemnifying party (the one at fault) usually seeks to impose a cap. From a privacy perspective, a Controller should aim for the highest possible cap, if there is a cap at all. As I mentioned earlier, when transferring data outside the jurisdiction of the data subjects to areas lacking adequate protection, you must ensure that Standard Contractual Clauses are in place. Additionally, a concurrent Transfer Impact Assessment is necessary before the transfer; while this process is somewhat different from a DPIA, it is similar in many respects. You need all these measures so that if issues arise and a regulator scrutinizes the transaction, you can promptly present the documentation, demonstrating that you fulfilled your obligations and acted in good faith while employing best practices. If, God forbid, things do go awry, any fine is likely to be smaller since you have done your due diligence and have the appropriate processes in place. This documentation falls under the privacy umbrella and is not strictly about cybersecurity.
It is advisable to have a separate Cybersecurity Agreement in addition to the Data Protection Agreement, alongside the main contract or the Master Services Agreement. If you have a distinct Infosec department, it should review all the technical cybersecurity aspects of the cloud provider or processor, including all necessary cybersecurity protections they employ as well as certifications they hold.
Q8 -
With the rise of AI-generated deepfakes, how can businesses protect themselves and their customers from identity fraud and misinformation, particularly in industries like finance and healthcare?
Ronald Sarian, Esq
I'm not in the finance or healthcare industries, but businesses should generally ensure that anyone logging into their systems for transactions goes through multi-factor authentication. This is essential. They must also have the appropriate protocols, such as routinely patching software and implementing relevant programs to ensure proper cybersecurity measures. As I’ve said, it's also essential to train employees to spot phishing attacks. You can do many other things, like using password management, creating long passwords, or even adopting passphrases, which are much safer. Moreover, training should include not clicking on anything that seems off or fake, especially being aware of deepfakes. For example, suppose a supervisor appears to be asking for something unusual. In that case, it's critical to have appropriate training in place so that employees feel encouraged to check in person with their supervisor or send an email to confirm if they sent a specific message. Deepfakes are serious threats, and their sophistication is increasing with AI.
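On the passphrase point, here is a short sketch of the idea: several randomly chosen dictionary words are longer, higher-entropy, and easier to remember than a short complex password. The word list and function below are hypothetical stand-ins; a real generator would draw from a much larger dictionary.

```python
import secrets

# Stand-in word list for illustration; a production generator would use thousands of words
# so that a four-word passphrase has meaningful entropy.
WORDS = ["granite", "harbor", "velvet", "compass", "lantern", "orchid", "thunder", "maple"]

def generate_passphrase(word_count: int = 4, separator: str = "-") -> str:
    """Pick `word_count` words using a cryptographically secure random source."""
    return separator.join(secrets.choice(WORDS) for _ in range(word_count))

if __name__ == "__main__":
    print(generate_passphrase())  # e.g. "lantern-granite-orchid-compass"
```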
As an individual, you must understand phishing and take steps to minimize risks. Limit your posting on social media; people share all sorts of information and images that can be exploited. Restrict access to your social media accounts whenever possible. Additionally, ensure your computers are patched, be aware of potential attacks, and avoid clicking on suspicious links or texts. A common tactic is receiving a message like, “You've overdrawn your checking account,” prompting you to click a link to resolve the issue. Clicking that link could lead to credential theft or malware installation. There’s a lot you can do to prevent these attacks.
Q9 -
With emerging technologies like quantum computing on the horizon, what privacy and security challenges do you foresee, and how can companies prepare for these risks today?
Ronald Sarian, Esq
Quantum computing could potentially crack encryption and disrupt blockchains, so the possibilities are limitless. We don't fully understand everything. We don't know what we don't know, but today, things we believed impossible five years ago are becoming realities. I can't provide any real advice on quantum computing. You should read voraciously about it and try to gauge its direction. You need to make informed decisions about how it might exploit vulnerabilities. There are significant concerns regarding privacy and cybersecurity, as quantum computing could potentially compromise encryption. I can't tell you how to solve that, and I wish I could offer meaningful guidance, but the possibilities are so vast that all we can do is keep up with current developments. The security measures you implement today must be based on what's available now, considering the potential for zero-day vulnerabilities linked to quantum computing or AI. You must have robust cybersecurity procedures, clear schematics, and effective logging, so if an incident occurs, you can trace it back and hopefully mitigate it immediately. You'll also need to determine whether there's a need to report incidents if someone's personal information has been compromised. You must constantly assess the significant privacy and cybersecurity challenges businesses face.
Q10 -
What are the most significant privacy and cybersecurity challenges businesses will face over the next 3-5 years, especially with the rapid advancement of AI and other technologies?
Ronald Sarian, Esq
So, the key is to read daily and stay updated on all changes. However, we should all recognize that enforcement in Europe and globally will increase as people become more aware of their privacy rights. Subject access requests are growing; people are becoming informed, leading to more lawsuits, class actions, and similar occurrences. Thus, strict adherence to data privacy laws as they emerge is required, as well as keeping a close watch on developments in privacy and cybersecurity. That's all I can forecast for the next three to five years: we know that privacy, along with the acknowledgment and awareness of it, will increase dramatically. Authorities are becoming more aggressive in this area. While our sanctions regimes differ from Europe's, many states in the United States are developing their own privacy laws, resulting in heightened enforcement.
California has the California Privacy Protection Agency, which enforces the CCPA. With enforcement increasing, the best way to adequately protect your business is to ensure you're aware of everything being enacted. I’m unsure how else to emphasize the importance of protecting yourself other than to say you must stay informed about ongoing developments and anticipate what you’ll need to address in six months based on current laws and legislative discussions. This involves a lot of reading.