Is privacy being traded away in the name of innovation and security?

Just a couple of weeks ago, International Privacy Day passed with the usual fanfare as companies, organizations, and governments seized the opportunity to push their sound bites highlighting the importance of making privacy paramount. But I see an irony in all the noise — much of it lip service to the need to boost vigilance in the face of the artificial intelligence boom — as many of these same entities are also asking people to trade privacy and sometimes security for convenience.

We’ve flogged to death the idea that generative AI is a game-changer (which it absolutely is), and we’ve discussed the need to harness its implementation within the rubric of protecting intellectual property. What we don’t talk about so much is how to protect individual users who may not realize that every question asked, every scenario posited to the large language models (LLMs) that drive generative AI, is feeding the beast. And will the new OpenAI GPT Store, which allows users to create their own iteration of ChatGPT, be exciting and dangerous at the same time?

“As exciting as this is for developers and the general public, this introduces new third-party risk since these distinct ‘GPTs’ don’t have the same levels of security and data privacy that ChatGPT does,” Nick Edwards, vice president of product management at Menlo Security, tells CSO. “As generative AI capabilities expand into third-party territory, users are facing muddy waters on where their data is going.”

Shadow AI — generative AI being used by members of an organization without the knowledge of IT professionals — is already a reality. Are CISOs prepared for their users to jump on this bandwagon in increasing numbers?

Employee monitoring and surveillance is a potential privacy violation

My favorite topic (probably because I’ve been engaged in counterintelligence and counterespionage for more than 30 years) is insider risk management (IRM), which is a continuation of the spy craft skillset. Recent developments find me going back to the core principles of that training.

Fifteen to 20 years ago, working from home was a rare novelty, but in 2024 it is perfectly commonplace. Now, when putting together a “monitoring package” to protect an entity’s assets, does a CISO consult with the legal team? Is there a meeting of the minds and a clear delineation of where monitoring becomes surveillance? Are the individual’s privacy rights being trumped by the need to see what the employee is doing in the name of security?

Michael Brown, vice president of technology at Auvik, has it right in my opinion: “On one end of the spectrum, monitoring an employee’s every action provides deep visibility and potentially useful insights, but may violate an employee’s privacy. On the other hand, while a lack of monitoring protects the privacy of employee data, this choice could pose significant security and productivity risks for an organization. In most cases, neither extreme is the appropriate solution, and companies must identify an effective compromise that takes both visibility and privacy into account, allowing organizations to monitor their environments while ensuring that the privacy of certain personal employee data is respected.”

The key word in Brown’s observation is “compromise” and I am going to add “transparency.” Employees who understand why and how their engagement is being monitored, and how that monitoring may indeed turn into surveillance when probable cause exists, will have a greater understanding of the need to protect the entity as a whole by monitoring all who engage.

Collecting data comes with an obligation to protect data

The adage is that if you collect it, you must protect it. Every CISO knows this, and every instance where information is collected should have in place a means to protect that information. With this thought in mind, John A. Smith, founder and CSO of Conversant, proffered some thoughts that are easy to embrace:

  • Adhere to regulations and compliance requirements.
  • Understand that compliance isn’t enough.
  • Measure your security controls against current threat actor behaviors.
  • Change your paradigms.
  • Remember that most breaches follow the same high-level pattern.

Smith’s comment about changing paradigms piqued my interest and his expansion is worthy of taking on board, as a different way of thinking. “Systems are generally open by default and closed by exception,” he tells CSO. “You should consider hardening systems by default and only opening access by exception. This paradigm change is particularly true in the context of data stores, such as practice management, electronic medical records, e-discovery, HRMS, and document management systems.”

“How data is protected, access controls are managed, and identity is orchestrated are critically important to the security of these systems. Cloud and SaaS are not inherently safe, because these systems are largely, by default, exposed to the public internet, and these applications are commonly not vetted with stringent security rigor.”
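Smith’s “closed by default, open by exception” paradigm can be reduced to a simple rule: deny every request unless an explicit, documented exception exists. The sketch below illustrates the idea only — the roles, resources, and exception registry are hypothetical, and a real deployment would enforce this through an IAM or policy engine rather than application code:

```python
# Hypothetical sketch of "closed by default, open by exception" access control.
# Every (role, resource) pair is denied unless an explicit exception was granted,
# ideally with a documented reason that can be audited later.

ALLOW_EXCEPTIONS = {
    # (role, resource): reason the exception was opened
    ("records_clerk", "medical_records"): "ticket-1042: clerk requires read access",
}

def is_allowed(role: str, resource: str) -> bool:
    """Deny everything; grant access only through an explicit exception."""
    return (role, resource) in ALLOW_EXCEPTIONS

# Default posture: access is closed unless someone deliberately opened it.
print(is_allowed("records_clerk", "medical_records"))  # explicit exception -> allowed
print(is_allowed("intern", "medical_records"))         # no exception -> denied
```

The design point is that the safe state requires no action: forgetting to configure access leaves a system closed, whereas in an open-by-default model the same oversight leaves it exposed.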

Limiting access to information can also feed security issues

Perhaps I am an anomaly, but when I go to a website and want to read an organization’s whitepapers or research and am asked to provide identifying information to do so, I tend to close the browser and move along. If I really am interested, and there is no other way to obtain it, I will begrudgingly fill out the form to get the download. If I have a generic web-based email account, I am often rejected with an admonishment that this information is only for those with proper “business” accounts. Marketing seems to stand between spreading knowledge and feeding a sales funnel.

When researchers and vendors put research behind a registration wall, they limit the spread of that information to those who might be able to put it to best use. I might be a freelance writer who isn’t going to buy your product, but I might write about the research if I could read it without compromising my privacy.

If I have one overarching message here, it’s that it is important to think of privacy beyond the individual user, to consider it within the context of your ecosystem and total population, from users to customers to partners and beyond. Compliance is important, yet as we know within the security world, being compliant does not equate to being secure. By the same token, being compliant in the privacy world doesn’t always ensure privacy.
