
AI Criteria: Proportionality

Where necessity asks whether the AI-based processing is needed to achieve the purpose, proportionality asks whether that processing is acceptable. Proportionality can also refer to the amount of information collected, since processing too much information can be disproportionate to the task at hand. For me, however, the focus is whether the average person would accept the use of their information by artificial intelligence.


Artificial intelligence is a powerful tool capable of processing large amounts of diverse information. A doctor could keep a number of factors in mind, such as a person's age, weight, diet, and pre-existing conditions, but doing so is difficult. AI could track all of these factors and monitor a patient, allowing real-time diagnosis. However, an AI will only ever do what it is told, and no two humans are the same, so we would want some level of human intervention. Given the same symptoms, the AI will recommend the same treatment, but many conditions share similar symptoms, meaning that without review, two people could receive the same treatment with vastly different outcomes. One could have a cold and the other a severe infection; given the same treatment, the infection could worsen or the cold be severely overtreated. Given this, many people would hesitate to let an AI act autonomously to treat them, especially as the condition or diagnosis becomes more serious. That said, if a doctor provided input and determined the final course of treatment, most people would be less hesitant, as most people trust their doctor.

I want to focus on the average person's tolerance for AI processing their data: what is the threshold at which they will accept the processing of their information by an AI in a given circumstance?

Let's propose a different scenario, involving an AI created to play poker against casino patrons. To keep this relatively simple, imagine a one-versus-one scenario. The casino, in order to save money on hiring multiple dealers, decides to have an AI play in a dealer's place.

To play against a human, the AI needs to be trained on how to play poker and how to identify the cards, plus some logic to determine its betting strategy, or how much risk it will accept. The problem the casino runs into is that a simple AI will be very straightforward, and thus easy to figure out and game, which will most likely lose the casino money. Casinos operate at an advantage against players: even if they win only 51% of the games they play, they play so many games that they end up profitable. To combat this, an executive comes up with the bright idea of using facial recognition and cameras to train the AI to detect whether players are bluffing.
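The house-edge arithmetic behind that 51% figure can be sketched in a few lines of Python. The win probability, bet size, and number of games below are illustrative assumptions, not real casino figures; the point is only that a small per-game edge compounds into reliable profit over many games.

```python
import random

def simulate_house_profit(win_prob=0.51, bet=1.0, n_games=100_000, seed=42):
    """Simulate a casino's profit when it wins each game with probability win_prob.

    All parameters are illustrative assumptions, not real casino figures.
    """
    rng = random.Random(seed)
    profit = 0.0
    for _ in range(n_games):
        if rng.random() < win_prob:
            profit += bet   # casino wins the player's bet
        else:
            profit -= bet   # casino pays the player out
    return profit

# Expected profit per game is (2 * win_prob - 1) * bet = 0.02 units,
# so over 100,000 games the realized profit clusters near 2,000 units.
print(simulate_house_profit())
```

A 2% edge per game looks negligible in a single hand, but at this volume the chance of the casino finishing at a loss is vanishingly small, which is why a predictable, easily gamed AI that erodes that edge is a real business problem.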

Beyond the privacy risk of now processing biometric information, this brings in what I call “the creepiness factor”. The human player here may be uncomfortable knowing that the casino machine is capturing and analyzing their biometric information. The player may also complain that the machine has an unfair advantage, and they are unable to “read” a machine or see its tells. These are just two reasons likely to deter someone from playing that machine. Having empty tables or machines is a potential loss considering the cost of maintaining the machine. Anytime this machine requires maintenance or makes a mistake that needs immediate attention, the casino loses money and time.

Based on this risk assessment, the use of biometric information is disproportionate given that it may make patrons uncomfortable and it provides little benefit over hiring a dealer or maintaining a similar but less invasive machine. The result of this assessment would suggest not using the AI in this way.

Most uses of AI are more mundane than this example. Proposed uses include identifying trends in a customer's purchasing habits or looking for trends in traffic patterns on a busy street. Even in these cases we need to be careful about proportionality. Tracking the number of cars that use an intersection to determine whether a traffic light is needed would be acceptable. Using a camera with an AI to gather information on the types of cars, their speed, and the direction of traffic would, at most, provide marginal benefit while collecting information that may be sensitive in nature. It may also create a situation where a lawyer or insurance company requests footage of an accident captured by the camera as part of a case or claim. To assess the need for a traffic light, only the amount of traffic and the time of day are needed; collecting anything else is disproportionate.
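As a minimal sketch of that data-minimization point, consider the difference between logging everything a camera could capture and keeping only what the traffic-light decision needs. The detection dictionary and field names below are hypothetical, invented for illustration.

```python
from datetime import datetime

def record_vehicle(raw_detection: dict, log: list) -> None:
    """Keep only what the traffic-light assessment needs.

    raw_detection may contain extra fields a camera could capture
    (make, speed, direction); we deliberately drop them and retain
    only the hour of day. Traffic volume is simply len(log).
    """
    log.append({
        "hour_of_day": raw_detection["timestamp"].hour,  # time of day only
    })

log = []
# Hypothetical detection: the camera saw more than we choose to keep.
record_vehicle({"timestamp": datetime(2024, 5, 1, 8, 15),
                "make": "Acme", "speed_mph": 42}, log)
```

Dropping the extra fields at the point of collection means there is nothing sensitive to hand over later, which avoids the lawyer-and-insurer scenario entirely.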

The review of the proportionality of the processing activity should include a determination not only of whether the information is necessary, but also if it would be reasonable given the circumstances. I always say you do not need to provide government ID to buy a pizza, and I think a reasonable person would agree. Considering proportionality will prevent unreasonable uses of data, especially if there is another way to go about that processing.


Reach out to Privacy Ref with all your organizational privacy concerns: email us at info@privacyref.com or call us at 1-888-470-1528. If you are looking to master your privacy skills, check out our training schedule and register today to get trained by the top-attended IAPP Official Training Partner.