Disruptive Technologies

 

After a long week of assisting customers with PCI DSS compliance, I found myself at a good old-fashioned Wisconsin supper club on a Friday night for a fish fry with my friend, Greg Duckert, the founder of the Virtual Governance Institute. Greg had just finished teaching a risk school class, and one of his students had told him his material was out of touch when it comes to outsourcing and third parties. The student said disruptive outsourcing is the new direction.

I am familiar with Greg’s course curriculum on the risks associated with third parties, and I assured him his material is still spot on and not out of date. All of us who have done any type of information security or compliance work for an organization know that outsourcing, third parties and business associates can pose significant risk to the business and need to be managed properly. The major regulations and standards have recently been rewritten to include more rigor around vendor management because of recent breaches.

Because I was not familiar with the term, I had to ask: what is disruptive outsourcing? As best as we could tell, disruptive outsourcing is a cloud-based, automated AI solution that delivers outsourced services for various businesses. It is being used today in recruiting, search engines and other industries. We were not aware of it having entered the retail or healthcare markets to service specific functions, but did not think that was far off. Greg and I started to discuss what would happen if a disruptive outsourced solution were compromised and information was leaked. How do standards and regulations such as GDPR, HIPAA or PCI DSS come into play? There are very defined rules from a compliance perspective, but there is an expectation that traditional outsourcing is in play, not an AI solution making the decisions.

We started thinking about the risks and issues if there were a breach, and the questions that came to mind were:

  • Who is responsible or liable?
  • What is the jurisdiction? And does the jurisdiction reside where the IP address resides in the cloud? And what happens if that IP address is spoofed?
  • What is the recoverable property or base in that jurisdiction?

Like many of our Friday evenings, Greg and I left the conversation with more questions than answers. In the following days I gave the subject more thought and did a little more research. I was probably overthinking this, but I wanted to try to grasp the risks associated with implementing or contracting into this type of technology. I also believe Greg’s student was really referring to disruptive technology, not just the outsourcing of that technology.

I reached out to a co-worker, colleague and friend of mine, Michael Gerdes, Director of the Information Security Center of Expertise for Experis and asked him to share some of his thoughts on the subject of disruptive technology.

Mike stated that once you get past the obvious marketing hype, the core of what most people consider truly disruptive technology today is just what Greg Duckert and I were thinking: “It is an emerging use of AI in place of people doing jobs that are either programmatic or have deterministic or predictable responses from whatever inputs are received.” Truly disruptive solutions are those that incorporate some form of autonomous decision-making and go beyond the mere combination of Robotic Process Automation (RPA) and cloud computing.

We agreed that I was definitely not over-thinking the topic, as it has huge implications for organizations that don’t properly deal with the new forms of risk and potential threat vectors that accompany this platform. The variations on how AI can get into processes and disrupt how normal protections and security controls operate without human intervention are endless, and some of those variations may not even be predictable once the underlying system involves fully functional AI.

Mike was willing to offer his insights to the questions posed above but only after providing this tongue-in-cheek legal disclaimer: The following responses represent my opinions, and not that of my employer, and are not based on any bona fide legal training, so these responses should not be considered legal advice.

  • Who is (legally) responsible or liable?

This could vary depending on the type of information and the legal, regulatory, contractual and/or industry requirements the organization is dealing with, but I’d suggest the organization that solicited or accepted the information will remain responsible even if they (directly or indirectly) utilize subcontractors or outsourcing agreements to augment their physical role in the collection, processing, storage and management of the data. This is similar to companies that use cloud service providers to process ePHI under HIPAA – they retain the overall responsibility to protect the data, but they can implicate any of the processors if those providers or partners are not fully compliant with HIPAA rules. As for who is liable – in most instances, discussions about liability come down to who is legally responsible for the protection. That person (or entity) may involve other parties that were performing services or actions on their behalf, but the entity that instigated the collection, or any subsequent action performed on the data, retains the ultimate responsibility, and therefore the legal liability, for ensuring the prescribed controls and protections were used.

This will be an important subject as organizations move forward with the use of AI technology and disruptive outsourcing. What is acceptable in one country may not be so in another. We may already be seeing this with the recent GDPR fine of Google by France’s data protection regulator, CNIL. This enforcement action may not be directly related to disruptive outsourcing, but compiled data aggregation and correlation without controls could be a source of litigation. We will need to watch this case closely to see how things unfold.

  • What is the jurisdiction? And does the jurisdiction reside where the IP address resides in the cloud? And what happens if that IP address is spoofed?

For most regulatory controls, the primary jurisdiction will be that of the initial entry point of collection for the processes involved, but it can get complicated if the overriding regulation holds the instigator of the process accountable, such as what happens under GDPR. For example, if data for EU data subjects is collected by a process that is present in multiple geographies and is outsourced to a service in a non-EU nation (including the AI engine) by an EU-based company, it is more than likely the Data Commissioners in the EU could assert they have jurisdiction because the data subjects and instigating company are both within the EU, and at least some of the data was collected in the EU.

As for the assertion that jurisdiction is determined by where the IP address resides in the cloud – I believe this is still a subject of debate in legal, privacy and security circles. While an IP address might start off as a source of information that helps determine the initial jurisdiction, ultimately I believe most legal opinions would hold that the IP address is an indicator of source with very poor integrity, and so it is not something that can be trusted. It is possible (or even likely in cases involving fraud or deceptive practices) that the jurisdiction might shift if additional evidence is submitted or gleaned from the parties involved that discredits the validity of the address being associated with the actual processing of protected data.

Jurisdiction by IP address can throw the determination off track, because the initial entry point with most AI related solutions starts with some data entry that is discretely associated with a location, but then the AI continuously seeks out and collects additional data from any available data repositories in many geographies to make decisions, and in some cases that data is discovered and input into an aggregated repository.  Search engines for marketing to specific demographics already work that way.

  • What is the recoverable property or base in that jurisdiction?

While the intellectual property or financial assets that the information might represent can (in most cases) be calculated, and the distribution of the value within each of the jurisdictions involved can be determined, in many cases the degree of recoverability may be far lower than the value determined, because of local laws involving data protection and recoverability. While I believe you may be able to demonstrate that a certain value of information was lost within a jurisdiction, there may be no way to recover that value due to weak or non-existent legal remedies. This is one of the reasons the GDPR has strong requirements to control where protected data is being collected, stored and processed.

 

This is a question the lawyers and courts will need to sort out, at least until there are consensus definitions of jurisdiction and liability for autonomous data collection and processing that the major players endorse.

Mike and I agreed that the companies that choose to use AI will need to proactively determine a new set of rules for how and when open source information can be aggregated with data solicited directly from data subjects, as aggregation can put an entity over the threshold of what is allowable and what is not. Failing to create these rules could create very significant legal and regulatory exposure for companies employing AI, with potential fines that could cripple their businesses.

There are also significant risks associated with allowing a machine intelligence to seek out and compile additional records from public sources and then autonomously create repositories that link that data to individual records about a data subject; depending on the rules used, there is a distinct probability the composite records created by the AI could exceed the levels of private/protected data collection and use that are allowed, where the collection and use of the individual elements did not. Neither Mike nor I was aware of any statute or regulation that exempts aggregated information from the controls that apply to initial data collection, so legal and regulatory compliance of autonomous data aggregation might be one of the key risks/legal liabilities that come with this technology.
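The aggregation risk can be sketched in a few lines. The field names, sources and the three-field "protected" threshold below are illustrative assumptions, not drawn from any statute, regulation or real policy:

```python
# Hypothetical sketch: each individually collected field may be permissible,
# but autonomous aggregation can push a composite record past a policy line.
# All field names and the threshold are invented for illustration.

COLLECTED = {"email": "j.doe@example.com"}          # solicited directly from the data subject
PUBLIC_SOURCES = [
    {"home_address": "123 Main St"},                 # scraped from a public registry
    {"employer": "Acme Corp", "dob": "1980-04-01"},  # scraped from a public profile
]

# Toy policy: a record holding 3 or more identifying fields is treated as "protected".
PROTECTED_FIELD_LIMIT = 3

def aggregate(collected, public_sources):
    """Merge public records into the collected record, as an autonomous AI might."""
    composite = dict(collected)
    for record in public_sources:
        composite.update(record)
    return composite

def exceeds_policy(record, limit=PROTECTED_FIELD_LIMIT):
    """True when the record crosses the toy 'protected data' threshold."""
    return len(record) >= limit

composite = aggregate(COLLECTED, PUBLIC_SOURCES)
print(exceeds_policy(COLLECTED))   # False: the solicited data alone is within policy
print(exceeds_policy(composite))   # True: aggregation crossed the threshold
```

The point of the sketch is that no single collection step violates the toy policy; only the autonomously assembled composite does.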

This brings up the question: who is going to audit the companies that use this technology, and what will they be auditing it against?

I also asked my recently retired former boss, John Hainaut, VP of the Information Security Center of Expertise, who has nearly 40 years of experience in IT, what his thoughts on disruptive outsourcing were. John kept it short: “If not managed correctly and cautiously, I Predict Disaster.”

Of course, I envisioned the type of disaster that makes one want to buy a cabin off the grid in a remote location, pick up an ’80s Ford Bronco 4x4, and let things run their course, leaving the next generation to figure it out.

While reflecting on the past week, it was amazing to me to see how this student’s simple challenge to a teacher had initiated a much broader and deeper conversation about how to deal with disruptive technologies and processes, and the many challenges associated with autonomous computing.

Let’s fast forward a bit…

Greg and I found ourselves together again on a Friday night for a fish fry, and the previous conversation picked back up. We agreed that there clearly needs to be definition around disruptive technologies, and concluded that “outcomes” need to be an anchor in determining and managing the associated risks. There also needs to be an agreed-upon set of rules governing when AI technologies collect data, to limit how that data is gathered and presented for the solution they are contracted to serve, and how to determine the assurance level these services provide.

We believe Process – Outcome based risk models using data analytics could provide that assurance and definition around AI-based outsourced solutions, and that Process – Outcome models would be a viable basis upon which contracts and agreements are legally drafted.

This brings up the next question: who out there is a leader in defining and building Process – Outcome risk models and educating professionals in them? There are well-defined and mature risk assessment models that could frame analysis around disruptive technologies, but Greg and I believe the Virtual Governance Institute (VGI), which we founded, is perhaps the leader. The courses and certifications VGI provides are unique in nature because they give professionals and students the ability to assess organizational risks using data analytics (real data).

The CCARDA (Certified in Continual/Continuous Audit & Risk Data Analytics) certification and training is supported by a curriculum that is designed to guide individuals in building risk models that are tailored to the organization they are working with and that leverage the data available within the environment.

The focus of this curriculum is the determination of the key data (key outcomes and key risk indicators) to be used in risk evaluation. Once the appropriate data has been identified, the correct analytic technique is selected to determine the point of risk that needs to be addressed. Inherent in this certification is teaching practitioners the technical methods to extract and manipulate data using means that are repeatable for determining and addressing risk.
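As a rough illustration of the kind of data-driven risk evaluation described above, here is a minimal sketch of a weighted key risk indicator (KRI) check. The indicator names, values, weights and thresholds are invented for illustration and are not taken from the CCARDA curriculum or any published model:

```python
# Illustrative sketch of an outcome-based key risk indicator (KRI) evaluation.
# Indicators, weights and thresholds below are assumptions made for this example.

from dataclasses import dataclass

@dataclass
class KRI:
    name: str
    value: float       # observed outcome taken from the environment's own data
    threshold: float   # level at which the outcome signals a point of risk
    weight: float      # relative importance in the composite score

def composite_risk(indicators):
    """Weighted share of indicators whose observed outcome breaches its threshold."""
    total = sum(k.weight for k in indicators)
    breached = sum(k.weight for k in indicators if k.value > k.threshold)
    return breached / total if total else 0.0

kris = [
    KRI("failed_change_rate",        value=0.12, threshold=0.05, weight=3.0),
    KRI("unpatched_critical_days",   value=4,    threshold=30,   weight=2.0),
    KRI("vendor_assessments_overdue", value=2,   threshold=0,    weight=1.0),
]

score = composite_risk(kris)
print(f"composite risk score: {score:.2f}")  # 0.67: two of three weighted KRIs breached
```

The repeatable part is the pipeline, not the numbers: the same extraction and scoring can be rerun as new outcome data arrives, which is the continual/continuous aspect the certification name refers to.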

The ability to build Process – Outcome risk models will be key to providing the guidance needed to move forward with AI technology and disruptive outsourcing. We believe individuals possessing education and certifications in data analytics, such as the CCARDA certification, will be the professionals in high demand to define the rules and guidelines that will need to be followed.

A recent CCARDA certified professional, Larry Boettger, made the following comment after he finished the CCARDA training and became certified:

"I've been thinking that the risk management / audit process has been flawed for several years, but I could not articulate or pinpoint why. Greg and VGI not only hit the nail on the head with the problems (wasted time and money, incorrect/outdated data that arrives at the wrong conclusions, and overall archaic ways of thinking and doing the work), but provides an effective and efficient method/solution for doing it right. This course should be required for everyone in the field of risk management and auditing." Larry Boettger - Director of Information Security, TASC

In closing…

How industries, businesses and people will deal with disruptive technologies in the near future presents many challenges and will require careful consideration and definition. Those consultancies that are building risk models using data analytics for their clients will be a step ahead of the others.

Thomas Schleppenbach,
Cyber Security Consultant, IT Security Center of Expertise, Experis
Co-Founder Virtual Governance Institute (VGI)