[You can also read part 1 and part 3 of this series.]
Five days later, Panos Moutafis, co-founder & CEO of Zenus, the “ethical facial analysis” company, responded. I find his response inadequate, and this post explains why. I’ve included portions of Moutafis’s response, quoted in red, together with my comments. I conclude with a summary if you want to skip the details.
Here we go.
After an introduction ("Ignorance can be bliss, but it can also be dangerous."), Moutafis begins:
“Data from our ethical facial analysis service cannot be used to identify individuals. This is not an opinion. It is an indisputable fact.”
If the “Zenus AI” system is, in fact, completely unhackable, this statement may well be true. But it’s misleading because it does not address attendee privacy concerns. Why? Because, as I explained in my original post, combining Zenus facial analysis data with other attendee identification technology allows event owners to associate Zenus data with individual attendees.
Moutafis now admits this is possible, as his response now includes statements about how the Zenus system should be used. As far as I know, Zenus has not made these statements publicly before.
“If someone wants to use other technologies to identify individuals and combine the data [emphasis added], they need to obtain explicit consent first.
This is true of hotels, convention centers, event organizers, technology companies, etc. Otherwise, they are exposing themselves to liabilities.
A legal review takes place before starting to use a new service in this manner. People who work in the corporate sector and associations are familiar with these processes. This is not the Wild Wild West.”
The crucial phrase here is “and combine the data“. Moutafis is saying that when combining attendee tracking data with data supplied by the Zenus system, attendees must provide explicit consent. That means attendees must be informed about this in advance. And they must give explicit consent for event owners to use real-time continuous data from Zenus’s system to provide additional information on each attendee.
In my original post, I noted that Moutafis tries to put all the responsibility for such consent on the event owner and/or supplier of the attendee identification technology rather than his company. We’ll see why he needs to do this shortly.
GDPR and Data Privacy Regulations
Different regions and implementations have different requirements.
The European Data Protection Board, in particular, has clearly noted that facial analysis alone does not fall under Article 9.
See section 80 in the Guidelines adopted on January 29, 2020 [link].
“However, when the purpose of the processing is for example to distinguish one category of people from another but not to uniquely identify anyone the processing does not fall under Article 9.”
See section 14 in the Guidelines adopted on April 26, 2023 [link].
“The mere detection of faces by so-called “smart” cameras does not necessarily constitute a facial recognition system either. […] they may not be considered as biometric systems processing special categories of personal data, provided that they do not aim at uniquely identifying a person […] .”
In simple words. Are you using the service alone? Great.
Are you combining it with identifying information? Obtain consent or face the consequences. The pun is totally intended.
This section restates that the Zenus technology satisfies European Data Protection Board guidelines only when used in isolation. It confirms that when clients combine Zenus analytics "with identifying information", "you" must "Obtain consent or face the consequences." Again, the "you" is any entity but Zenus.
In addition, to bolster his case, Moutafis selectively quotes section 14 of the Guidelines adopted on April 26, 2023. Here's the entire section 14, with the portions Moutafis omitted in bold:
“The mere detection of faces by so-called “smart” cameras does not necessarily constitute a facial recognition system either. While they also raise important questions in terms of ethics and effectiveness, digital techniques for detecting abnormal behaviours or violent events, or for recognising facial emotions or even silhouettes, they may not be considered as biometric systems processing special categories of personal data, provided that they do not aim at uniquely identifying a person and that the personal data processing involved does not include other special categories of personal data. These examples are not completely unrelated to facial recognition and are still subject to personal data protection rules. Furthermore, this type of detection system may be used in conjunction with other systems aiming at identifying a person and thereby being considered as a facial recognition technology.“
Wow! Moutafis omits the “important questions in terms of ethics and effectiveness” raised by facial analysis. And, tellingly, he cuts the last key sentence entirely:
“Furthermore, this type of detection system may be used in conjunction with other systems aiming at identifying a person and thereby being considered as a facial recognition technology.“
This, of course, is exactly what Moutafis admits happens if clients use Zenus technology with any other tech that identifies individuals.
So the European Data Protection Board guidelines say that Zenus’s system effectively becomes a facial recognition system under these circumstances.
That’s not what Moutafis implies. I’d describe this section of Moutafis’s response as deliberately misleading.
Our AI badge scanning reads attendee IDs
I have little to say about this. Badge scanning tech is common at meetings. If attendees give informed consent and can opt out of badge scanning, I don’t have a problem with it. But perhaps this is a place to point out the significant difference between technology (badge scanning) that identifies attendees only at discrete attendee-determined points in time, and technology (Zenus plus attendee identification data from a separate system) that continually accumulates attendee data all the time attendees are in sensor range.
Legal vs Moral Considerations. Consent vs Notice
“People often conflate face recognition (identification) with facial analysis (anonymized data). In a similar way, they conflate legal and moral considerations.”
That’s quite a comparison! It’s saying being confused about the definitions of two types of technology is similar to being confused about legal and moral concerns of the use of such technologies.
“It might not be legally required to provide notice about the use of facial analysis in many settings. But we still think it is morally a good idea to do so in the spirit of transparency and education.
Therefore, we ask our clients to post signage on-site, talk about the use of our service in their marketing communications, and include it on their online terms and conditions."
According to the people I’ve spoken to who attended the association meetings described in my original post where Zenus technology was used, there was no “signage on-site, talk about the use of our service in their marketing communications” or notification in the meetings’ “online terms and conditions“. Perhaps the folks I talked to overlooked this “advance notice”, or these meetings were the exceptions rather than the rule. But from this limited data, it doesn’t seem that Zenus’s clients pay attention to what Zenus says it asks them to do.
What about consent versus notice? Advance notice we love. Consent defeats the purpose of anonymity.
How could one exclude a person from the anonymous analysis (if they opt-out) without identifying them? They cannot.”
Finally, we get to why Zenus continues to insist that their technology does not require consent while trying not to mention that when it is used in conjunction with attendee identification technology it does require consent. There is no way for Zenus data to remain anonymous if attendees are given the right to not consent, i.e. to opt out of being included in Zenus’s aggregated analytics! That would require the identities of attendees who have opted out to be injected into Zenus’s internal systems, which would then need to perfectly exclude them from the data fed to clients. This obviously can’t be done in a way that satisfies privacy laws. Consequently, Zenus’s whole “no consent needed” house of cards collapses!
Aggregate vs Individual Analysis
“The chances that one would analyze a person’s face or body language and infer their psychological state are slim.”
This is a strange statement. Human beings have evolved to be exquisitely sensitive to other humans’ psychological states. Most of us do such analysis unconsciously every day, whenever we are together with other people. We look at someone’s face or body language and think “They look upset/happy/worried/tired”. We might well say to them: “Are you OK?“, “Wow, you look happy!”, “You look worried about something”, “Want to take a rest?”, etc. I’d say that inferring the emotional state of someone we’re with is default behavior, rather than a slim probability.
Of course, this statement allows Moutafis to pivot to his marketing pitch:
“…analyzing a room of people multiple times per second and combining this with survey and attendance data can be insightful.”
Because that’s what Zenus has designed its technology to do.
Concluding Remarks
“Our ethical facial analysis brings organizations valuable and actionable data without crossing the line into collecting personally identifiable information.”
One more time. When you don’t include any meaningful safeguards to prevent combining your data with that of other systems that clients are free to employ, clients can easily use Zenus technology to “[cross] the line into collecting personally identifiable information“.
“It is a rare example of technology using restraint. It is an example of building proactive privacy safeguards by default. It is an example to follow.”
Sadly, it’s not. While I admire the efforts that Zenus has made to create an “ethical facial analysis service”, as I’ve now outlined in these two posts, the company has not succeeded.
Conclusions
Zenus claims that its system, when used in isolation at an event, doesn't supply data about individual attendees. Maybe so. But when used in conjunction with additional tech (XYZ) that identifies individual attendees, event owners can use Zenus data to create a continually updated, real-time dataset of analytics on identified individual attendees. Zenus deflects any legal or ethical company responsibility for this surveillance by saying it's the event owner's and/or XYZ's responsibility to inform attendees and obtain their explicit consent to be tracked and to have their facial analysis data used.
Crucially, Moutafis says two contradictory things.
The use of Zenus technology doesn’t need explicit consent.
The combination of Zenus technology with other attendee identification technology does require explicit consent. But that’s the legal and ethical responsibility of the event owner or the tracking technology company. Not Zenus.
Because Zenus does not require its clients to forswear using additional attendee identification technology, this creates a fatal contradiction for the company. Why? Because, as Moutafis admits, when attendees are allowed to opt out from its use—which is their right under privacy laws—there is no way for the Zenus technology to work without excluding the attendees who have opted out. To do this, the Zenus system must be able to identify individual attendees! Consequently, Zenus's whole we-don't-identify-individuals and no-consent-is-needed house of cards collapses!
Two unanswered criticisms from my original post
First, Moutafis was quoted as saying publicly that "some of his clients…will monitor [using Zenus AI] in real-time, and if a speaker is killing the mood they will just get him off the stage". I said I was pretty sure that most event professionals would agree this is a highly inappropriate way to use Zenus's technology. Or, as the Harvard Business Review put it, "AI Isn't Ready to Make Unsupervised Decisions". Moutafis did not respond to this.
Second, it’s important to note that Moutafis didn’t respond to a key critique of Zenus technology that I shared in my original post.
Namely, how useful is Zenus's technology anyway? Kamprath and I gave examples of how the most impactful sessions at meetings—impactful in the sense of changing future behavior rather than entertaining an audience—are often somewhat uncomfortable for participants at the time. A session doesn't have to generate "positive sentiment" to be a "success."
One more thing…
OK, that’s two thousand more words from me on this topic, on top of four thousand last week. Hopefully, that’s enough for now. But I’d be happy to meet in a public moderated discussion with Zenus. If anyone would like to host such a discussion, don’t hesitate to get in touch!
Should the event industry embrace facial analysis — a technology that promises to offer new analytic data to event stakeholders?
In this post, I’ll explain why I’m concerned. I’ve included:
An introduction to facial recognition and facial analysis;
A timeline of recent public experiences and responses to the use of facial analysis at events;
Why I think the use of this technology is misguided, ethically and legally dubious; and
My conclusions.
An introduction to facial analysis and facial recognition
You might be wondering what facial analysis is, and how it differs from facial recognition. Here’s a short introduction to these technologies, and how the meeting industry is starting to use them.
Facial recognition and analysis technologies capture information from images and videos of human faces. They have been available since the 1960s. But in the last decade, the use of facial recognition has exploded. In 2017, Apple introduced Face ID to unlock its phones and authenticate payments. Many manufacturers have since incorporated this form of biometric authentication. Governments have adopted biometric systems to meet security concerns. Such systems are appearing in public arenas like airport gate check-ins too.
So it’s not surprising that companies have developed facial technologies to provide new forms of data to event owners.
In the event industry, companies have developed facial recognition systems to streamline event registrations. Some can also track attendee movement inside a venue. These systems work by matching a pre-event registered attendee photograph, provided by the attendee, to the attendee’s face as they arrive at the event. If a match is found, the attendee is admitted without having to show proof of registration.
In a July 2023 post, Miguel Neves, editor-in-chief of Skift Meetings, describes “The True Risks of Using Facial Recognition for Events“. He includes an incident where an event required thousands of attendees to upload scans of their passports to attend in person. This led to a €200,000 fine by Spain’s data protection agency. Incidents like this may have led Zenus to focus on facial analysis rather than facial recognition.
Facial analysis
Facial analysis claims to overcome such privacy concerns by avoiding the collection of individuals' data. The concept is that an on-site device captures and analyzes incoming video. In theory, only aggregated group statistics are provided to clients, so personally identifiable information is, hopefully, not directly available from the system.
The aggregate data provided by these systems typically includes “impressions” (the number of people present over time), demographics (sex and age group), “happiness”, and dwell time (how long people stay in a given area and/or how much attention they are paying to what is going on).
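To make the distinction concrete, aggregate output of this kind might be shaped roughly like the sketch below. This is purely illustrative: the field names are my assumptions, not Zenus's (or any other vendor's) actual schema. The point is that, taken on its own, such a record contains no names, badge IDs, or face templates.

```python
# Illustrative only: a plausible shape for the aggregate metrics described
# above. All field names are assumptions, not any vendor's real schema.
from dataclasses import dataclass

@dataclass
class AggregateReading:
    area: str                   # e.g. "Booth 12" or "Main Stage"
    interval_start: str         # e.g. "2023-01-10T14:05:00Z"
    impressions: int            # number of faces detected in the interval
    positive_sentiment: float   # the "happiness" score, 0.0-1.0
    avg_dwell_seconds: float    # how long people stayed, on average
    pct_female: float           # demographic estimates
    pct_age_18_34: float

# Note what is absent: no names, badge IDs, or face templates. Whether that
# absence survives once the data is combined with other systems is the
# question the rest of this post examines.
```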
Illustration from Zenus website showing “Sentiment Analysis” data
Companies developing facial analysis for the events industry include Zenus and Visage Technologies.
A timeline of public experiences and responses to the use of facial analysis at events
February – March 2023
Controversy about facial analysis at events began when Greg Kamprath, after attending PCMA‘s Convening Leaders 2023, made excellent arguments against using the technology at meetings in a February 2023 LinkedIn post “You Shouldn’t Use Facial Analysis At Your Event“. He wrote the post after attending a session titled “AI, Biometrics and Better, More Targeted Experiences”. There he “was surprised a few minutes in when they told us we were being watched at that moment by cameras which were analyzing our age, gender, and emotions”.
To summarize, 2023 started with criticism of using facial analysis at events, continued with a rebuttal, and was followed by ongoing adoption of this technology by major industry associations.
Here are my responses to Moutafis’s rebuttal, listed under the same headings he uses. Afterward, I’ll add some concerns that he doesn’t address.
Concern 1: I don’t want to be analyzed
“When the analytics obtained from a service (any service) cannot be tied to a specific individual, it does not infringe on their data privacy.” —Moutafis’s first sentence after this heading
Unfortunately, this statement is misleading and wrong.
Let’s assume that the Zenus facial analysis system is indeed perfect and unhackable in any way. Consider the system running at an event in a room with only one person in it. The system works perfectly, so the data it provides accurately characterizes that person, but does not include any information that allows their identification.
If this perfect Zenus system is the only attendee data acquisition system in use, then that person’s data privacy isn’t infringed.
But what if an additional attendee data acquisition system is being used in the room? For example, here’s a screenshot from a Zenus video “Zenus AI: Ethical facial analysis at IMEX” uploaded to YouTube on November 13, 2022, and still, as I write this, publicly available.
January 2023 screenshot from Zenus YouTube video “Zenus AI: Ethical facial analysis at IMEX” https://www.youtube.com/watch?v=iU2MPjacpjI showing an attendee’s sentiment analysis and badge information
Zenus technology identified the attendee along with his sentiment analysis! (And, as I write this, still does—see below.)
This is certainly at odds with Zenus’s claim of “ethical facial analysis”.
Even if Zenus stops doing this, there's nothing to prevent an event owner from using an additional system that does identify individual attendees. The information from Zenus's system can then be attributed to the lone identified individual in the room. The same kind of process can also be used with groups. See, for example, the Electronic Frontier Foundation's "Debunking the Myth of 'Anonymous' Data" for more information on how "anonymous data rarely stays this way".
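To make the re-identification risk concrete, here is a minimal sketch of the kind of join an event owner could perform. None of this is Zenus's code or data format; every field name, timestamp, and attendee below is invented for illustration.

```python
# Hypothetical data only. The "anonymous" readings mimic the aggregate
# metrics described earlier; the badge scans mimic any separate
# attendee-identification system (badge scan, RFID, BLE, etc.).

anonymous_readings = [
    # (area, time, people_present, positive_sentiment)
    ("Room A", "14:05", 1, 0.21),
    ("Room A", "14:06", 1, 0.18),
]

badge_scans = [
    # (area, time, attendee) - held by the event owner, not by Zenus
    ("Room A", "14:05", "Jane Doe, Acme Corp"),
]

# With only one person in the room, the "anonymous" sentiment score
# describes that person. A trivial join on place and time attributes
# the emotion data to a named individual.
for area, time, count, sentiment in anonymous_readings:
    if count == 1:
        for scan_area, scan_time, attendee in badge_scans:
            if (scan_area, scan_time) == (area, time):
                print(f"{attendee}: positive sentiment {sentiment} at {time}")
```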
What Zenus does
The European Data Protection Board is the European Union body responsible for ensuring the consistent application of the General Data Protection Regulation (GDPR). GDPR gives individuals certain controls and rights over their personal information. Here is an extract from the Board's guidelines on the use of facial recognition technology in law enforcement. Note that these are guidelines for the use of such technologies by governments and public entities.
“The mere detection of faces by so-called “smart” cameras does not necessarily constitute a facial recognition system either. While they also raise important questions in terms of ethics and effectiveness, digital techniques for detecting abnormal behaviours or violent events, or for recognising facial emotions or even silhouettes, they may not be considered as biometric systems processing special categories of personal data, provided that they do not aim at uniquely identifying a person and that the personal data processing involved does not include other special categories of personal data. These examples are not completely unrelated to facial recognition and are still subject to personal data protection rules. Furthermore, this type of detection system may be used in conjunction with other systems aiming at identifying a person and thereby being considered as a facial recognition technology.” [emphasis added] — European Data Protection Board Guidelines 05/2022 on the use of facial recognition technology in the area of law enforcement • Version 2.0 • Adopted on 26 April 2023
“Zenus also provides a separate, unrelated QR code service for attendee tracking at events. In this service, the customer or reseller can include a unique QR code on each event attendee’s badge. When the Zenus IoT device scans a QR code at the event, Zenus will receive a record that the QR code was scanned by a particular scanning device at a particular date and time. Zenus then makes that data available to the customer or reseller. Zenus has no ability to link the QR code with a particular individual’s real identity, as Zenus does not accept any other information about the individual. Only the customer or reseller holds data that allows them to make that linkage. Zenus uses the QR code data solely to serve that particular customer or reseller as the customer’s or reseller’s “service provider” within the meaning of the California Consumer Privacy Act (“CCPA”) and “processor” within the meaning of the General Data Protection Regulation (“GDPR”) and similar laws.”
In other words, Zenus provides a service that allows customers to track individual attendees! Zenus says this is OK because Zenus doesn't have access to individual attendee information. But Zenus clients do! Unless each attendee consents to being tracked, this is a violation of GDPR.
“Consent must be freely given, specific, informed and unambiguous. In order to obtain freely given consent, it must be given on a voluntary basis. The element ‘free’ implies a real choice by the data subject. Any element of inappropriate pressure or influence which could affect the outcome of that choice renders the consent invalid.” —extract from GDPR Consent definition
Moutafis ends this section by saying that “events are spaces of high visibility”, where attendees wear badges with their names, agree to be photographed, and provide information to registration systems. The implication is that, therefore, attendees have no reason to object to automated systems that vacuum up their visible behavior.
This is like saying that people in a public space who are talking to each other shouldn’t object if systems with sensitive microphones pick up all their conversations and make use of them. Just because you can do something, doesn’t mean you should.
Concern 2: Advance notice about the service
I'm glad that Moutafis says "We advocate for advance notice because it is the best way to build trust in the community", even though the company claims that "Consent is not required". Whether event owners actually give advance notice is, however, an important question. I'm inclined to judge people and organizations on what they do, rather than what they say. And, as Kamprath noted in his LinkedIn post, in February 2023, PCMA Convening Leaders (PCMACL) did not inform attendees in advance that facial analysis would be used, and he saw no signage at the event. In his rebuttal, Moutafis says, "CCTV systems exist in all public spaces, along with disclosures about camera surveillance [italics added]." So? Zenus and PCMA apparently did not provide advance notice!
Fortunately for both these organizations, PCMACL 2023 was held in Ohio, which does not currently have a comprehensive consumer data privacy law. If the event had been held in California, for example, their failure to give advance notice would be a violation of the California Consumer Privacy Act, and the California Attorney General or the California Privacy Protection Agency could take legal action against both organizations.
Providing a facial analysis system to anyone who wants to use one and merely suggesting that they let the subjects know it is operating is unethical, in my opinion. A sticker on a tiny camera is simply inadequate. Providing advance notice via visible and plentiful signage should be a requirement for obtaining and using this technology. It would be even better to prominently include advance notice in written communications to attendees when registering.
Privacy protections in other U.S. states
I don’t know the U.S. states where such a failure to adequately inform in advance would currently violate state law. But as I write this:
California, Colorado, Connecticut, Utah, and Virginia have privacy laws currently in effect;
Florida, Montana, and Oregon will have privacy laws in effect by the end of 2024; and
Delaware, Indiana, Iowa, Tennessee, and Texas will have privacy laws in effect by January 1, 2026.
More details on state laws can be found at DataGuidance.
Concern 3: The system does not do what we are told
Moutafis seems to include two issues under this heading. The first is his claim that Zenus’s system provides accurate information about “aggregated statistics on impressions, dwell time, age, biological sex, and positive sentiment, among other metrics”. The second is that people worry that the Zenus devices might be hacked.
I can't evaluate the accuracy of the data provided by Zenus's system. However, research suggests that reliably inferring emotions from facial expressions alone is far harder than vendors often imply.
Moutafis says that the Zenus service “complies” with GDPR rules. While fully anonymized data is not subject to GDPR rules, combining Zenus’s data with data from other systems can, as we’ve seen, lead to Zenus’s customers adding Zenus data to an individual’s data. Without advance notice and consent, this situation is a violation of GDPR and other privacy laws.
But, again, the overall security of any technology is defined by its weakest component. As described above, if an event owner adds a system that does identify and/or track individual attendees, whether Zenus’s stand-alone technology obeys “GDPR rules, [survives] third-party penetration tests, [or meets] SOC 2 standards” becomes irrelevant, as its output may now add to the data captured by the weaker system.
Concern 4: Decisions shouldn’t be made with AI
Kamprath quotes Moutafis as saying at the PCMA Convening Leaders session: “[Moutafis] said some of his clients…will monitor in real time and if a speaker is killing the mood they will just get him off the stage”. Moutafis’s rebuttal says: “In these instances, there is nothing wrong with trusting the data to make informed adjustments in real time.”
Really? How many event professionals have been using or are going to use Zenus AI in this way? Not too many…I hope.
Why? Because, as Kamprath points out:
"What if a session's content is important, but it doesn't cause facial expressions a computer would categorize as "positive sentiment?" Imagine a speaker who is presenting a difficult truth – someone from a disadvantaged group describing a hardship, or a worker conveying the situation on the ground to leadership. AI facial analysis would show the audience wasn't happy and so maybe those presenters aren't invited to speak again. (Or god forbid given the boot in real time)"
Exactly. Some of the most important and impactful experiences I've had at meetings have been uncomfortable. Moutafis doesn't seem to realize that an event doesn't have to generate "positive sentiment" to be a "success".
Moutafis tries to dilute his message by adding that “users consider multiple sources of information, including surveys.” But again, how he marketed his technology at PCMACL 2023 tells us more about how he implements Zenus facial analysis than what he says in print.
Concern 5: Cameras may get hacked
I’ve already commented on camera hacking above. Again, I’m happy to assume that the Zenus AI units are “secure enough“. But I will add that Moutafis’s response to reasonable concerns about hacking is, well, hyperbolic.
“With this fearful logic, organizers should start collecting attendees’ phones at the entrance and remove the CCTV equipment from venues. They should also terminate AV companies that stream content, including pointing cameras at the audience and drop all registration companies. After all, hacking a registration company is more damaging than gaining access to aggregated and anonymized data.” —Moutafis
Concern 6: The scope of surveillance will increase
Moutafis says:
“…it is safe to use products with built-in privacy safeguards.
One of the worries expressed was about other computer vision solutions, such as our new badge scanning solution. It detects QR codes up to 6–7 feet from the camera. The service requires explicit consent before data is tied to a specific individual. There are also easy opt-in/out mechanisms to offer peace of mind. It is no different than RFID and BLE used in events for decades. It is no different than manual badge scanning for lead retrieval, access control, and assigning CEU credits.”
The problem with this is that Zenus’s privacy policy makes no mention of requiring “explicit consent before data is tied to a specific individual“! Zenus’s privacy policy only refers to “personnel of our past, present and prospective customers, business partners, and suppliers.”
This is important. Event attendees are not Zenus’s customers!
Zenus is avoiding any legal or contractual responsibility to attendees about how its systems impact their privacy. The organizations that buy Zenus’s systems are, apparently, free to do whatever they like with Zenus’s devices. That includes combining their devices’ output with Zenus’s badge-scanning solution or any other attendee-tracking system. When they do this, the scope of surveillance will indeed increase.
Concern 7: Informed consent
Moutafis says:
“Some people call for mandatory consent requirements for all services — even the ones that do not collect personally identifiable information. But that will result in an effective ban on numerous technological advancements. And the rhetorical question is — to what end? If one insists on that (opinions are a right for all), they should also suggest an alternative solution to offset the cost with an equal or greater benefit. Until then, there is consensus among institutions and practitioners that this is unnecessary because there is no risk to data privacy.”
This is an example of the straw man fallacy. What the vast majority of attendees want is reassurance that their privacy rights will be respected, that they are informed about the impact of new technology on their activities, and that they have the right to provide or withhold consent to that technology being used when it does not respect their privacy rights. Moutafis distorts this into an all-or-nothing demand for "mandatory consent requirements for all services — even the ones that do not collect personally identifiable information". However, given the failings I've listed above, attendees do not currently have the assurance that Zenus's systems respect their privacy rights in the real world. That's why his statement is a straw man.
Zenus's privacy policy also includes this caveat: "To help protect personal information, we have put in place physical, technical, and administrative safeguards. However, we cannot assure you that data that we collect under this Privacy Policy will never be used or disclosed in a manner that is inconsistent with this Privacy Policy."
In other words, “even though we insist our technology doesn’t collect personally identifiable information we can’t guarantee it won’t.”
Good to know.
Conclusions
Whew, this turned into a much longer post than I expected! During my research on the appropriate use of facial analysis, I found three perspectives on the ill-defined legal status of facial analysis that don’t quite fit into my response to Moutafis’s post. I’ve included them here, followed by a summary of my conclusions.
Three perspectives on the legal status of facial analysis
Unfortunately, the legal status of facial analysis remains unclear. The Global Privacy Assembly, “the premier global forum for data protection and privacy authorities for more than four decades”, points this out in an October 2022 report.
Access Now is an international organization that "defends and extends the digital rights of people and communities at risk". In this submission to the European Data Protection Board, the EU body responsible for ensuring the consistent application of the GDPR, they say:
“…paragraph 14 [of the European Data Protection Boardʼs guidelines 05/2022] states that facial detection and facial analysis, including emotion recognition, are not types of facial recognition. This goes against the common use of the term facial recognition as an umbrella term for a range of processes, including detection, verification, identification and analysis/categorisation/classification. Arbitrarily excluding detection and analysis from the term facial recognition will only give credence to the problematic line often taken by industry that when they are performing facial analysis, for example, they are ‘not doing facial recognition.’ [emphasis added]” —Access Now submission to the consultation on the European Data Protection Boardʼs guidelines 05/2022 on the use of facial recognition technology in the area of law enforcement, 27 June 2022
Finally, Nadezhda Purtova, Professor of Law, Innovation and Technology at Utrecht University, is skeptical that facial analysis will "withstand legal scrutiny".
“A relatively recent case of such technological development is face detection and analysis used in ‘smart’ advertising boards. Unlike with facial recognition where one’s facial features are compared to pre-existing facial templates to establish if a person is known, face detection and analysis do not recognize people but ‘detect’ them and, in case of smart billboards, classify them into gender-, age-, emotion-, and other groups based on processing of their facial features to display tailored ads. The industry that develops, sells, and employs the technology argues that facial detection does not involve processing personal data, eg because the chance of establishing who a person before the ‘sensor’ is close to null. In part this is due to the ‘transient’ nature of the processing, where raw data of an individual processed by the detection ‘sensors’ is discarded immediately. The technology does not allow tracking a person and recognizing him or her over time either. To be clear, as will become apparent from further analysis, these industry arguments do not necessarily withstand legal scrutiny and it is highly likely that personal data will be processed in these contexts, if the proposed interpretation of identification is adopted. Yet, there is no uniform position on the interaction of face detection and data protection across the EU Member States. For instance, the Dutch data protection authority considers face detection in the context of smart billboards as processing of personal data, while its Irish and reportedly Bavarian counterparts are of the opposite view.” [emphasis added] —Nadezhda Purtova, International Data Privacy Law, 2022, Vol 12, No. 3, From knowing by name to targeting: the meaning of identification under the GDPR
Final comments
12 years ago, I wrote, “Who gets your information when you register at an event?” The following year, I wrote, “Whom is your event for; the organizers or the attendees?” It’s revealing that those who are in favor of facial analysis technology are the technology suppliers and show owners. Those who are critical of it are attendees.
There is no win-win here. What’s good for show owners and the suppliers whose services they buy is bad for attendee privacy and openness. Show owners are using facial analysis with zero notification. And if attendees are told in advance that their faces will be analyzed, they may be deterred from attending such events or expressing their opinions freely. Or they may have no choice but to attend for business reasons without the option of consenting or opting out.
I don't see how facial analysis technology can address these concerns. We should worry when Moutafis says that Zenus addresses them when in reality it doesn't. That's why I agree with Kamprath when he says "You Shouldn't Use Facial Analysis At Your Event".
The meeting industry has an ethical responsibility to do the right thing.
Just because you can do something, doesn’t mean you should.
P.S. And wait, there's more! This epic isn't over! Panos Moutafis, the CEO of Zenus, responded to this post, and I've shared my response to his in a follow-up post.
At edACCESS 2008 I gave a 90-minute presentation entitled “Learning from the biggest consulting mistake I’ve made — and that you probably have too”.
OK, the formal title was “The Systematic Development of Informed Consent“, which sounds much fancier but requires explanation.
17 years have passed, yet I think the blunders I made while working with a client during one of my past careers—IT consulting—are still relevant and instructive. So, I’m going to ‘fess up to the world. And as a bonus, I’ll introduce you to the people who taught me the biggest reason worthy projects don’t get implemented, and what you can do about it.
The story begins
The story begins in November 2007, when I was invited to a two-day training given by Hans and Annemarie Bleiker. There were about forty of us. Here is a photo of our merry group.
While teaching at MIT in the 1960s, Hans and Annemarie noticed a dismaying reality: many public projects never get implemented, or even started. They decided to research why, and whether there was anything people could do to improve their chances of success. She's an anthropologist, he's an engineer. Since then, they have presented their findings and unique methods for improving matters to more than forty thousand professionals around the U.S. Here are some of their clients…
[Images: a sample of the Bleikers' clients (click on the image for the current list) and their mission statement.]
Mid-morning on the first day of the workshop I had a major aha! moment. I understood a core mistake I’d made eighteen years earlier. That mistake led to my failure to successfully implement an organization-wide IT system for a major client.
During the workshop, I discovered that the mistake is so common that the Bleikers have spent decades teaching people how to avoid it.
So, I want to share what I learned with you because you have probably already made the same mistake.
My Harrowing Story
The following story is mostly true, though I’ve changed all names to confuse the innocent.
In July 1989, I was hired by a client I’ll call Seagull School. The school had two campuses, North and South, that were three miles apart and housed slightly different academic programs. The key personnel I worked with were Mr. Head, Mr. South (head of the main campus), Mr. North, and the Tech Director.
From 1989 – 1998, I wrote custom software or adapted commercial software for Seagull’s administrative needs. It was all hosted at South. South’s computer labs included both PCs and Macs; North decided to only use Macs. At the time, I didn’t think much about it.
In 1999, I was asked to develop an integrated administrative system that would eventually be used at both campuses. It took about a year to develop. During the development, North was asked repeatedly to define what system functionality they would like, but they didn’t want to talk about specific data elements. Over the next couple of years, it slowly became clear that they wanted something that could be changed on a whim. North wouldn’t consider the ramifications for the whole school. For example, North wanted the school registrar, based at South, to create transcripts, but wouldn’t specify what might be on them for North’s programs.
Finally!
In 2001, Mr. Head decreed at a meeting with all the administrators that the system I’d developed should be used at both campuses. Yay!
But…no.
A few days later, Mr. Head called me into his office. He had just met with Mr. North who had presented him with a large packet of documents expressing his view of the current state of affairs. Mr. North claimed that the integrated system solution had been developed without talking to people at North. So, he had just purchased another system from a neighboring school (without talking to anyone at South). He told Mr. Head that he thought Seagull School should use North’s system for both campuses and have the existing integrated system be an archive of past data.
Everyone at South whom I talked to thought this was ridiculous.
However, for some reason that I was never made privy to, Mr. Head left that meeting feeling it would be impossible to make my integrated system a viable solution for Seagull School right now. So, Mr. Head told me to keep the folks at South campus happy and leave North to its own devices for the present.
Well, what about…this?
A year went by and I had a bright idea. Why not develop a web-based system that would be platform-independent? I gave the Tech Director a quote, but the school decided it was too expensive.
North decided to hire its own consultant to develop a custom system. As I expected, the consultant didn’t do much because he was incapable of pinning North down to say what they wanted.
By 2003, Seagull administrative staff at South were complaining that they couldn’t do the work that North wanted them to do because North’s data was still in a separate system.
So, Mr. Head hired two more consultants to advise on what Seagull School should do. Eventually, the second consultant concluded that the "strongly recommended" scenario was to use my system, with North accessing it via remote control software. The next best option was to develop the web-based system I'd recommended. The third option, "difficult to justify", was to keep using two systems.
For another year, Mr. North ignored the report, and Mr. Head did nothing.
Finally, in July 2004, Seagull asked me to create a web-based system.
I told them, “No, I’m retiring from IT consulting in a couple of years, and I don’t want to start a new project for you now.” <Muttering under my breath: “You should have said ‘yes’ two years ago when I suggested it – I would have done it then.”>
More dramatic twists and turns ensued, which I will spare you because they aren’t germane to the topic of this post. I’ll just add that Seagull School kept using my system for another five years.
So, what went right?
As a fan of Appreciative Inquiry, I think it’s important to spend a moment summarizing what I did well for Seagull School.
I successfully devised, created, updated, and supported easy-to-use custom software that handled the core administrative needs of Seagull School for almost twenty years.
The core Seagull School staff, based at South, appreciated my work and were strongly supportive during this time.
The investigations of several other independent consultants upheld my recommendations.
So, what went wrong?
I was unable to get Seagull School to adopt a single integrated administrative system for both North and South.
You might ask: “Why did I fail?” But “Why” questions are not especially useful in cases like this.
A better question is: “What could I have done differently?”
I’ll answer this question after telling a fairy tale…
The Fairy Tale
Once upon a time, there was a baby princess, born into wealth and privilege. Everyone who’s anyone was invited to her christening.
Unfortunately, the invitation email sent to a wicked fairy with an AOL account bounced back to the palace mail server, and the bounce never made it through the palace spam filter.
You know what happened next. Although guarded carefully, the princess, grown to a young woman, was one day accidentally tased by a palace security guard.
Nothing would wake her.
She had to sleep for a hundred years with her crown on until tech support finally showed up and rebooted her.
The Wisdom of The Bleikers
So now we arrive at The Wisdom of The Bleikers. Here’s their answer to the question “What could I have done differently?” It was the following explanation that provided my aha! moment halfway through the first day of the Bleiker workshop.
Setting the stage
You're trying to implement a Good Thing for a constituency. It could be a new water treatment plant for a town, a program to reduce the number of unhoused people, or—dare I say—the adoption of a single organization-wide administrative information system.
When we do this, invariably some folks are against our Good Thing. Our constituency is divided.
[An important caveat: The Wisdom of the Bleikers is not a panacea for developing consent for a poorly thought-out plan or proposal.]
The Bleikers' research found that just about everyone thinks of a divided constituency they're working with as a simple split: people for the Good Thing on one side, people against it on the other.
The Bleikers reframe this common view as a scale of agreement, running from strong support at the top, through indifference, down to strong opposition at the bottom.
Here is the key element that the simple for/against view omits.
Almost every major constituency faced with a significant change includes NIMBYs (“Not In My Backyard” aka “Over My Dead Body”) who, even if they are a small minority, have a great deal of power to torpedo implementation of the Good Thing.
Mr. North was my NIMBY. And, as I've related, he succeeded in preventing the implementation of a single administrative IT system during my entire consulting gig at Seagull School. The Bleikers have found that the single most effective way to improve the chance of implementing the Good Thing is to focus on the NIMBYs. And the heart of the Bleiker strategy is to move NIMBYs to "0+%" on the agreement scale: not to enthusiastic support, just to the point where they are willing to let the Good Thing proceed.
The Bleikers have found that this strategy works. Though it’s not 100% guaranteed, they have successfully helped hundreds of organizations to implement complex projects despite the existence of considerable NIMBY opposition.
Why don’t people follow the Bleiker strategy?
Why didn’t I talk to Mr. North as soon as I started to realize that not all was well?
Fear.
Remember that everyone at South who worked with me was very happy with my work. It was easy for me to hang out with the folks at South and join them in complaining about how unreasonable the folks at North were. It would have been scary to go and listen to Mr. North and hear what he might have to say. So, I played it safe. For years.
It’s really easy to hang out with the folks that agree with you. It’s hard to go into the lions’ den and talk with people who are highly opposed to what you, and perhaps a majority of a constituency, think should happen.
My mistake was to focus on developing support at South for a single administrative system at both campuses, rather than developing what the Bleikers call Informed Consent at North. I never really thought about who might be affected by my work. If I had, I might have realized that I needed to spend a lot more time listening to Mr. North. If I had successfully implemented what the Bleikers eventually taught me, Seagull School might have had a single administrative system by 1999, instead of nine years of countless meetings, expensive outside consultants, and school-wide frustration.
This was my biggest consulting mistake. (That I’m aware of.)
Informed Consent, and an introduction to what you need to do to move NIMBYs to 0+%
The Bleikers identify three kinds of consent:
Informed
Uninformed; and
Misinformed
And they define Informed Consent as "the grudging willingness of opponents to go along with a course of action they are opposed to…"
So, if you can develop Informed Consent, you can get your proposal implemented!
You can become what the Bleikers call an “Implementation Genius”!
Implementation Geniuses:
Don’t concentrate on developing support for their proposals
Focus their efforts primarily on the bottom of the Agreement scale
Aim to develop their fiercest opponents' Informed Consent
The Bleikers spend most of their workshops teaching how to develop the Informed Consent of NIMBYs. I’m not going to try here to reiterate or summarize what they teach. I recommend you go to their workshops for that! But I want to end with five Bleiker “pearls” that give you a taste of what to expect.
Pearl 1. Why versus What
Telling your constituency:
WHY you exist…
WHY you do what you do…
…is ten times more important than just telling them WHAT you do.
Pearl 2. The mission is not the mission statement
Your mission is a bunch of responsibilities. It resides in people’s guts.
Your mission statement is a bunch of words, a verbal sketch of the mission, but just a sketch.
You need many different mission statements, some long, some short, some technical, some non-technical – but many, many…
Pearl 3. The Bleiker “Life-Preserver”
Repeat often!
“There really is a problem.”
“We are the right entity to be addressing this problem; in fact, given our responsibility, it would be irresponsible for us not to address it.”
“Our approach is reasonable, sensible, and responsible.”
“We do listen, we do care.”
Don’t say “we want to” or “we would like to”.
Say “we need to do this!” or “we owe it to you”.
Pearl 4. The Null-Alternative
The Null-Alternative is the sequence of events that, most likely, will come to pass if you don’t implement a workable solution.
It is the consequence of your failure to implement a workable solution.
I titled this post “Learning from my biggest consulting mistake”. There aren’t really any dumb mistakes. Mistakes are integral to learning. They only become dumb if you don’t learn from them and consequently repeat them over and over again.
Have you ever avoided people who have the potential to torpedo important work because you feel scared of what might happen if you do?
I have, and I believe such behavior is understandable and, unfortunately, common.
I hope that by sharing my story and the Bleiker approach to developing Informed Consent with you, you learn how our natural unwillingness to listen to those who vehemently oppose something we think is a Good Thing can be overcome.
To your and your constituency’s benefit.
Has something like this happened to you? Please share your stories, experiences, and thoughts about anything in this post in the comments below!
Image attribution: Illustration of The Sleeping Beauty by Ruth Ives from Wonder Books' "Sleeping Beauty" by Evelyn Andreas, Copyright 1956.
Imagine a group of people who need to make a decision about something. As the size of the group increases, the chance that everyone will be happy with what is decided falls exponentially. Unless there’s unanimous agreement, the group will use — either explicitly or tacitly — some kind of rule that determines whether a specific decision is acceptable. Groups often use tacit rules when the consequences of the decision are minor.
“Harry, you feel strongly we should do this but Kerrie & I don’t care either way, so let’s go with your approach.”
Or, when “consensus” is only a pretense. “Well, I think we should do this. Any objections? OK, we’ve decided.”
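A toy probability model illustrates why unanimous satisfaction becomes so unlikely as groups grow. The 90% figure and the independence assumption below are mine, chosen purely for illustration.

```python
# If each person independently has a 90% chance of being happy with a
# given decision, the chance that *everyone* is happy is 0.9 ** n,
# which falls roughly exponentially with group size.
p_happy = 0.9

for group_size in (2, 5, 10, 20, 50):
    p_all_happy = p_happy ** group_size
    print(f"{group_size:>2} people: {p_all_happy:.0%} chance everyone is happy")

# Prints roughly 81%, 59%, 35%, 12%, and 1% respectively.
```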
So, you may wonder, if a group wants consensus, where consensus is defined by an explicit decision rule, then what decision rule should be used?
Danger, Will Robinson!
The moment we start trying to define consensus with a rule that tells us whether we have it or not, we diverge from the core reason to seek consensus.
The value of consensus is in the process of seeking it. Not a “yes, we have consensus!” outcome as defined by a decision rule.
There is no magic formula that will maximize the likelihood of obtaining consensus, however we define it.
Informed Consent
The best we can strive for is what Hans, Annemarie & Jennifer Bleiker, who have trained over 30,000 public-sector professionals over the last 40 years, call Informed Consent, which they define as follows:
Informed Consent is the grudging willingness of opponents to (grudgingly) “go along” with a course of action that they — actually — are opposed to.
The concept of consensus becomes dangerous when we use a process that forces a fake "consensus" outcome on a group. An example of this is the 2-4-8 consensus process:
2, 4, 8 consensus is an excellent tool for prioritising in large groups. This exercise will take time, but will help a group reach a decision that everyone can live with! It’s usually best to impose tight time limits at every stage of this discussion!
Draw up a list of proposals in the whole group.
Form pairs. Each pair discusses the list of possible proposals and is asked to agree their top 3 priorities (it could be any number, but for this example we’ll use 3).
Each pair then comes together with another, to form a group of 4. The 2 pairs compare their lists of top 3 priorities and, after discussion, agree on a joint top 3.
Each group of 4 comes together with another to form a group of 8. Again, each group takes its 2 lists of priorities and reduces it to one list of 3.
Repeat until the whole group has come back together and has a shared list of just 3 priorities.
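If it helps to see the mechanics, here is a minimal simulation of the repeated pairwise merging the steps above describe. In a real session, each merge happens through discussion; the simple rank-sum rule below merely stands in for that discussion, and the example priorities are invented.

```python
# Illustrative 2-4-8 merging. A rank-sum (Borda-style) rule substitutes
# for the group discussion that would normally produce each merged list.

def merge(list_a, list_b, keep=3):
    """Combine two ranked priority lists into a single top-`keep` list."""
    scores = {}
    for ranked in (list_a, list_b):
        for position, item in enumerate(ranked):
            # Items ranked higher (earlier) earn more points.
            scores[item] = scores.get(item, 0) + (len(ranked) - position)
    return sorted(scores, key=scores.get, reverse=True)[:keep]

# Stage "2": four pairs each agree a top-3 list.
pairs = [
    ["venue", "budget", "speakers"],
    ["budget", "catering", "venue"],
    ["speakers", "venue", "accessibility"],
    ["accessibility", "budget", "catering"],
]

# Stage "4": pairs merge into groups of four; stage "8": merge again.
fours = [merge(pairs[0], pairs[1]), merge(pairs[2], pairs[3])]
final = merge(fours[0], fours[1])
print(final)  # the whole group's shared list of three priorities
```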
There is nothing wrong with using a decision process like this to pick top priorities in a group. But picking a group’s top priorities is not the same as reaching consensus.
Seeking consensus, however you define it, is difficult for large groups. Techniques like Roman voting can help us determine how close we are to informed consent and can pinpoint who cannot go along with a proposed decision and why. The journey towards informed consent is what we should concentrate on if we are to reach a “consensus” that everyone can live with.