
A proposal for an AI Risk Framework

The use of artificial intelligence has become more commonplace, so I have begun reading and researching more about AI with regard to privacy. AI interests many people not only because it increases efficiency in many areas, but also because it opens up new ways to analyze and utilize data. Of course, the expanding use of AI carries risk, and like many things in privacy, there should be a framework for that.

AI Benefits

There are a number of reasons to implement or use artificial intelligence for processing data. AI is not prone to human error unless human error is programmed in. Repetitive tasks that require decisions are a great use case for AI. Think about a doctor making a decision on the proper dose for a patient. AI can analyze hundreds of data points, such as weight, gender, diet, medical history, blood pressure, and more to make a very precise decision about the exact dosage of a medication.

Artificial intelligence also can handle massive amounts of information. Human beings given larger and larger data sets begin to suffer from indecision. So-called “analysis paralysis” leaves individuals unable to make a decision because there is too much data to process for any one option to stand out as better than another, so every choice seems equally good. AI, being free from stress, follows its programming and makes decisions based on that large amount of data. AI actually flourishes with more data, as more information allows it to make more precise and accurate decisions. A list of potential customers for marketing will receive better-targeted offers or ads as more data becomes available, for example.

While this is also the subject of many a post-apocalyptic science fiction story, the lack of emotion in our electronic assistants is a benefit. Humans are prone to emotional interference when making decisions; in people, that is most likely a good thing, as cold, emotionless individuals tend toward more negative outcomes. AI, being devoid of human feelings, can make difficult decisions without hesitation. A doctor, for example, may face an almost impossible choice in a situation where deciding to save one patient means another will die. Furthermore, a decision does not need to be the only outcome: AI could instead provide insight to help humans make a more informed decision.

Common AI Concerns and Risks

Where there are benefits there are also risks, and each benefit above is also a concern for many looking into AI. AI is consistent, which means it can be limited in the range of outcomes it delivers, and those outcomes can be gamed. While not a privacy example per se, many video games offer AI opponents. An opponent that a player knows will behave a specific way is easy to defeat once the player learns the end state it is aiming for. Similarly, if AI is used for customer service and I know what questions it will ask to verify my identity, I can use that knowledge to take advantage of the system and potentially gain access to another person’s profile or account.

AI will only ever do what you tell it to. Bias does not originate with AI, but the humans who create AI are susceptible to bias in many ways. A programmer who favors particular products or behaviors could easily embed those preferences in the AI they create. AI that assesses resumes could discriminate against someone who attended community college, or no college at all, simply because college attendance, or specific schools such as Ivy League colleges, was valued by the programmer.

Where large data sets are concerned, too much information can lead to invasive AI or unintended processing of information. Consider the example of a phone app used to identify birds based on their sounds. Using only a bird sound is not invasive and does not involve personal information. However, if the app were to track which birds a user identifies, the user’s location could be determined from the migratory patterns of identified species, the time of day, and the season. The more data this application collects, the more likely it is to identify a user or infer other information that the user would not have wanted to reveal. It would not be the intent of such an app to glean that someone is in New Hampshire simply because they were near a purple finch (that’s the state bird, by the way).

Addressing Concerns and Risks

Privacy and Privacy by Design are focused on risk. Risk may come from using information that is sensitive, a lack of security, or third-party involvement, among other factors related to processing personal information. AI poses a distinct risk because of the way it processes data: largely free from human error, inconsistency, emotion, or bias beyond what was programmed in. This means that when considering privacy risks for AI, we are hyper-focused on the processing of the data, including the outcomes and context of that processing.

For our proto-framework, I propose seven principles for uses of artificial intelligence. They are summarized below.

  • Necessity – All the information is needed for this particular process. If you could replace or remove the information without impacting the process, then you should remove or replace it.
  • Proportional – A reasonable person should agree to the use of this information for processing in this way. For example, consider using a Social Security number for unique identification versus a name and phone number.
  • Non-Invasive – Information that is gathered does not intrude upon or otherwise violate a reasonable expectation of privacy, including information that is inferred by processing.
  • Notification – Any individual who has their personal information processed must be notified that the processing will utilize artificial intelligence, whether partially or entirely.
  • Individual Decisions – Where artificial intelligence was used to process an individual’s personal information, that individual should have access to the outcomes or inferences made, as well as the ability to make typical data subject requests (access, rectification, erasure, objection, restriction, etc.).
  • Review Regularity – A process to regularly review artificial intelligence, including logic, outcomes, or logs, is established to monitor and improve the process and ensure that information is processed appropriately.
  • Access Review – The individuals that have access to the artificial intelligence platform or aspects of that platform are logged and monitored for security of the information.
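To make the proposal concrete, the seven principles could be expressed as a simple review checklist that an organization runs against each AI use case. The sketch below is purely illustrative; the class, field names, and the bird-app assessment are hypothetical shorthand for the principles above, not part of any formal standard.

```python
from dataclasses import dataclass, fields

@dataclass
class AIUseReview:
    """Illustrative checklist mapping one AI use case to the seven
    proposed principles. True means the principle is satisfied."""
    necessity: bool            # every data element is needed for the process
    proportionality: bool      # a reasonable person would agree to this use
    non_invasive: bool         # no reasonable expectation of privacy is violated
    notification: bool         # individuals are told AI is part of the processing
    individual_decisions: bool # outcomes are accessible; data subject requests honored
    review_regularity: bool    # logic, outcomes, and logs are reviewed regularly
    access_review: bool        # platform access is logged and monitored

def failed_principles(review: AIUseReview) -> list[str]:
    """Return the names of the principles this use case does not satisfy."""
    return [f.name for f in fields(review) if not getattr(review, f.name)]

# Hypothetical assessment of the bird-identification app discussed earlier:
# location inference makes it invasive, and users were never told AI is used.
bird_app = AIUseReview(
    necessity=True, proportionality=True, non_invasive=False,
    notification=False, individual_decisions=True,
    review_regularity=True, access_review=True,
)
print(failed_principles(bird_app))  # ['non_invasive', 'notification']
```

A failing list that is non-empty would flag the use case for remediation before processing begins; an empty list means all seven principles are at least nominally met.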

Ideally, we want to create a set of rules, or a framework, for reviewing the use of AI against these principles. As organizations begin to use AI more, and as technology improves further, individuals and companies will need to consider if processing in this way is too risky or requires further attention.

Please feel free to reach out to Privacy Ref with all your organizational privacy concerns, email us at info@privacyref.com or call us 1-888-470-1528. You may view our complete event calendar here, which includes our training and webinars.