
AI criteria: Non-invasiveness

Privacy often comes down to considering whether or not a use of information is appropriate. What is or isn’t appropriate is based on regulations and rules, but as I have written elsewhere, your own feelings might play into that as well (Empathic Privacy). Being non-invasive is a matter of considering whether the processing of data, in this case by an artificial intelligence, would expose data or aspects of an individual that they would not expect or want to be revealed.

Non-invasiveness overlaps heavily with proportionality but is focused on the outcomes of processing rather than the inputs. Proportionality asks whether someone would be okay with their data being processed, the data going into the system. Non-invasiveness asks what information is revealed or generated by an AI, the information that comes out of the system. The main concern is the combination of data sets to reveal new information through inference.

Location data is the most vulnerable to this kind of inference. Location on its own can reveal a large swath of information, such as whether a person went to a specific building like a store or another location like a park. Combine it with just one more piece of information, a timestamp with both time and date, and you can know not only exactly where someone went but also why. For example, a person who arrives at an office building early in the morning is probably there for work, while someone who arrives later may have an appointment, such as an interview, or may work a later shift or be part of the overnight security or janitorial staff. Adding even more information makes these inferences even more precise.
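
To make this concrete, here is a minimal sketch of that kind of inference in Python. The record, field names, and rules are hypothetical illustrations, not any real system; the point is how little it takes to turn a location plus a timestamp into a guess about why someone is somewhere.

```python
from datetime import datetime

# Hypothetical combined record: who, where, and when they arrived.
visit = {"person": "A", "place": "office_tower", "arrived": datetime(2024, 5, 6, 8, 45)}

def guess_purpose(record):
    """Guess a likely reason for the visit from the arrival hour alone."""
    hour = record["arrived"].hour
    if 7 <= hour <= 9:
        return "probably arriving for a regular workday"
    if 10 <= hour <= 16:
        return "possibly an appointment, such as an interview"
    return "possibly a late shift, or overnight security or janitorial staff"

print(guess_purpose(visit))  # -> "probably arriving for a regular workday"
```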

This can lead to problems for both the data subject and the organization using the AI. Current predictive algorithms have this problem as well, such as when Target determined a teenage girl was pregnant, sent her ads for maternity products, and revealed her pregnancy to her father before he knew. This is very private and personal information, and revealing it could cost an organization a customer, or many customers, if and when the story gets around. More importantly, depending on the jurisdiction you are in, revealing that information could result in harm to the data subject, whether reputational, monetary, or otherwise.

Imagine knowing someone’s location, the time and date they arrived and left, and the same information for every other person who was at that location. That information alone could allow us to infer who is there and why, but with just a little more data we get to a point where it becomes creepy. If we have this information not just for a single day but for a whole year, we can see who arrives at the same times and infer relationships, be they friendships or something more. You could learn about someone’s romantic relationships based on whom they meet, whether they leave with that person, whether they meet more than once, and more. Think of the scandal when Ashley Madison was breached, and imagine how much worse a breach of the data described above could be.
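
As a rough illustration of how such relationship inference could work, here is a minimal sketch in Python. The records, the 15-minute window, and the threshold of two meetings are all hypothetical assumptions; the point is how easily a year of location logs could be turned into claims about who is meeting whom.

```python
from collections import Counter
from datetime import datetime
from itertools import combinations

# Hypothetical check-in records: (person, place, arrival time).
visits = [
    ("A", "cafe", datetime(2024, 3, 1, 18, 0)),
    ("B", "cafe", datetime(2024, 3, 1, 18, 5)),
    ("A", "cafe", datetime(2024, 3, 8, 18, 2)),
    ("B", "cafe", datetime(2024, 3, 8, 18, 4)),
]

def co_arrivals(records, window_minutes=15):
    """Count how often each pair of people arrives at the same place within a short window."""
    pairs = Counter()
    for (p1, place1, t1), (p2, place2, t2) in combinations(records, 2):
        if p1 != p2 and place1 == place2:
            if abs((t1 - t2).total_seconds()) <= window_minutes * 60:
                pairs[tuple(sorted((p1, p2)))] += 1
    return pairs

# Pairs who meet repeatedly suggest a relationship: friends, colleagues, or more.
for (a, b), count in co_arrivals(visits).items():
    if count >= 2:
        print(f"{a} and {b} arrived together {count} times")
```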

While these inferences can help us, such as a doctor being able to better treat a patient or a retailer being able to serve ads that are of actual interest to the data subject, the downsides can be much steeper. Go a step too far and you are revealing sensitive information about a data subject, such as their sexual orientation, religious beliefs, or a medical condition. If your AI determined someone was in an abusive relationship, how would you handle that? Do you report it to the police? What is at risk if you infer this information? Is there only privacy risk, or is there now legal risk too?


Reach out to Privacy Ref with all of your organizational privacy concerns: email us at info@privacyref.com or call us at 1-888-470-1528. If you are looking to master your privacy skills, check out our training schedule and register today to get trained by the top-attended IAPP Official Training Partner.