Aug. 30, 2021

Compliance: at the moment

Compliance and Ethics. Technologies may be unacceptable "in themselves", and designing an "ethical use" for them is therefore not acceptable: a practical case on the control of workers' emotions

by Marie-Anne Frison-Roche

ComplianceTech®

► An article of March 3, 2021, Smile for the camera: the dark side of China's emotion-recognition tech, then an article of June 16, 2021, "Every smile you fake" - an AI emotion-recognition system can assess how "happy" China's workers are in the office, describe how a new emotion-recognition technology is able, through what it will soon be old-fashioned to call "facial recognition", to distinguish a smile that reflects a genuine state of satisfaction from a smile that does not. This allows the employer to measure the suitability of the human being for his or her work. It is promised that the technology will be used in an ethical way, to improve well-being at work. But isn't this technology incompatible in itself with any compensation through an "ethical use"?

The technology, developed by a Chinese technology company and acquired by other Chinese companies with large workforces, makes it possible to obtain information on the actual state of mind of the person through and beyond his or her facial expressions and bodily behavior.

Previously, emotion-recognition technology had been developed to ensure security, to fight against people with hostile intentions; public authorities used it, for example, in airport screening to detect the criminal plans some passengers might have.

It is now claimed that the aim is no longer to fight against certain evil people ("dangerousness") in order to protect the group before the act is committed ("social defense"), but to help all workers.

Indeed, the use made of it will be ethical because, first, the people who work for these Chinese companies with global activity, such as Huawei, do so freely and have accepted the operation of these artificial intelligence tools (which is not the case for people who travel, control then being a kind of necessary evil that they do not have to accept and which is imposed on them for the protection of the group); but even and above all, the purpose is itself ethical: if it turns out that the person does not feel well at work, that he or she is not happy there, even before he or she is perhaps aware of it, the company can assist.

Let’s take this practical case from the perspective of Law and let’s imagine that it is contested before a judge applying the principles of Western Law.

Would this be acceptable?

No, and for three reasons.

1. An "ethical use" cannot justify a process that is unethical in itself

2. The first freedoms are negative

3. "Consent" should not be the only principle governing the technological and digital space


I. AN "ETHICAL USE" CAN NEVER LEGITIMIZE A PROCESS THAT IS UNETHICAL IN ITSELF

Processes that are unethical in themselves cannot be made "acceptable" by the "ethical use" that will be made of them.

This principle was notably recalled by Sylviane Agacinski in bioethics: one cannot dispose of another person through a disposition of his or her body which makes his or her very person available (see notably Agacinski, S., ➡️📗Le tiers-corps. Réflexions sur le don d'organes, 2018).

Otherwise the person would be reduced to the thing that his or her body is, which is not ethically admissible in itself; that is excluded, and Law exists precisely so that this is not possible.

This is even why the legal notion of "person", which is not a notion that goes without saying but one built by Western thought, acts as a bulwark so that human beings cannot be made fully available to others, for example by placing their bodies on the market (see Frison-Roche, M.-A., ➡️📝To protect human beings, the ethical imperative of the legal notion of person, 2018). This is why, for example, as Sylviane Agacinski emphasizes, there is no such thing as ethical slavery (a slave who cannot be beaten, who must be well fed, etc.).

That the human being consents ("and what if it pleases me to be beaten?") changes nothing.


II. THE FIRST FREEDOM IS THE FREEDOM TO SAY NO, FOR EXAMPLE BY REFUSING TO REVEAL YOUR EMOTIONS: FOR INSTANCE, BY HIDING WHETHER OR NOT YOU ARE HAPPY AT WORK

The first freedom is not positive (being free to say Yes); it is negative (being free to say No). For example, the freedom of marriage means having the freedom not to marry before having the freedom to marry: if one does not have the freedom not to marry, then the freedom to marry loses all value. Likewise, the freedom to contract implies the freedom not to contract, etc.

Thus, freedom in the company can take the form of freedom of speech, which allows people, according to procedures established by Law, to express their emotions, for example their anger or disapproval, through strikes.

But this freedom of speech, which is a positive freedom, has no value unless the worker also has the fundamental freedom not to express his or her emotions. For example, if he or she is not happy at work, because he or she does not like what he or she does, the place where he or she works, or the people with whom he or she works, freedom of speech demands that he or she have the right not to express it.

If the employer has a tool that allows him or her to obtain information about what the worker likes and dislikes, whether or not the worker chooses to express it, then the employee loses this first freedom.

In the Western legal order, we must be able to consider that the infringement, carried out through the Law of Persons, takes place at the constitutional level (on the closeness between the Law of Persons and Constitutional Law, see Marais, A., ➡️📕Le Droit des personnes, 2021).


III. CONSENT SHOULD NOT BE THE ONLY PRINCIPLE GOVERNING THE TECHNOLOGICAL AND DIGITAL SPACE


We could consider that the case of the company is different from that of the controls operated by the State for airport security, because in the first case the people observed are consenting.

"Consent" is today the central notion, often presented as the future of what everyone wants: the "regulation" of technology, especially when it takes the form of algorithms ("artificial intelligence"), especially in digital space.

"Consent" would allow "ethical use" and could establish the whole (on these issues, see Frison-Roche, M.-A., âžˇď¸Źđź“ťHaving a good behavior in the digital space, 2019).

"Consent" is a notion from which Law is today moving away in Law of Persons, in particular as regards the "consent" given by adolescents on the availability of their body, but not yet on digital.

No doubt because in Contract Law, "consent" is almost synonymous with "free will", whereas they must be distinguished (see Frison-Roche, M.-A., ➡️📝Remarques sur la distinction entre la volonté et le consentement en Droit des contrats, 1995).

But we see through this case, which precisely takes place in China, that "consent", in Law as elsewhere, can be a sign of submission. It is only in a probative capacity that it can constitute proof of free will, and this proof must not turn into an irrebuttable presumption.

Data Regulatory Authorities (for example, in France, the CNIL) seek to reconstitute this probative link between "consent" and the "freedom to say No", so that technology does not lead human beings, through "mechanical consents" cut off from any connection with the principle of freedom that protects them, to dispossess themselves (see Frison-Roche, M.-A., Yes to the principle of will, No to pure consents, 2018).

The more peripheral the notion of consent becomes, the more human beings will be able to be active and protected.
