June 23, 2022
Thesaurus : Doctrine
► Full Reference: Augagneur, L.-M., Le traitement réputationnel par et sur les plateformes, in Frison-Roche, M.-A. (ed.), La juridictionnalisation de la Compliance, series "Régulations & Compliance", Journal of Regulation & Compliance (JoRC) and Dalloz, to be published.
► Article Summary (by the author): The large platforms are in the position of arbiter of the reputation economy (referencing, notoriety) in which they themselves act. Although the stakes are usually low on a unit basis, the adjudication of reputation represents significant aggregate stakes. Platforms are thus led to detect and assess reputation manipulations (by users: SEO, fake reviews, fake followers; or by the platforms themselves, as highlighted by the Google Shopping decision issued by the European Commission in 2017), manipulations implemented on a large scale with algorithmic tools.
The identification and treatment of manipulations is itself only possible by means of artificial intelligence tools. Google thus operates an automated downgrading mechanism for sites that do not follow its guidelines, with the possibility of requesting a review through a very summary procedure conducted entirely by an algorithm. Tripadvisor, for its part, uses an algorithm to detect false reviews based on "fraud modeling to identify electronic patterns that cannot be detected by the human eye", and conducts a human investigation only in limited cases.
This jurisdictionality of reputation has little in common with that defined by the case law of the Court of Justice (legal origin, adversarial procedure, independence, application of the rules of law). It is characterized, on the one hand, by the absence of transparency of the rules, and even by the absence of rules stated in predicative form and applied by deductive reasoning. These are replaced by an inductive, probabilistic model that identifies abnormal behaviors by their distance from centroids. This approach of course raises the issue of statistical bias. More fundamentally, it reflects a transition from the Rule of Law not so much to "Code is Law" (Lawrence Lessig) as to "Data is Law", that is, to a governance of numbers (rather than "by" numbers). It also amounts to a form of collective jurisdictionality, since the sanction proceeds from a computational apprehension of the phenomena of the multitude and not from an individual appreciation. Finally, it appears particularly consubstantial with compliance, since it rests on a teleological approach (the pursuit of a finality rather than the application of principles).
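The centroid-based detection of abnormal behaviors evoked here can be illustrated with a minimal sketch. It is purely illustrative, using invented feature vectors (for instance, reviews posted per day and average rating given) and a simple distance threshold; it does not describe any platform's actual system.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def flag_outliers(points, z=1.5):
    """Flag behaviors whose distance from the centroid is abnormally large.

    Illustrative only: real platforms use far richer features and models.
    Returns the indices of points lying more than `z` standard deviations
    beyond the mean distance from the centroid.
    """
    n = len(points)
    dim = len(points[0])
    # Centroid of all observed behaviors.
    centroid = [sum(p[i] for p in points) / n for i in range(dim)]
    dists = [euclidean(p, centroid) for p in points]
    mean = sum(dists) / n
    std = math.sqrt(sum((d - mean) ** 2 for d in dists) / n)
    return [i for i, d in enumerate(dists) if d > mean + z * std]

# Hypothetical accounts: (reviews per day, average rating given).
accounts = [(1, 4.1), (2, 3.8), (1, 4.5), (2, 4.0), (40, 5.0)]
print(flag_outliers(accounts))  # → [4]: the burst-posting account stands out
```

The point of the sketch is the inversion the passage describes: no rule ("do not post more than N reviews per day") is stated in advance; abnormality is defined inductively, by statistical distance from the observed mass of behaviors.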
On the other hand, this jurisdictionality is characterized by human-machine cooperation, whether in the decision-making process (which raises the problem of automation bias) or in the adversarial procedure (which raises, in particular, the problems of arguing with the machine and of the explainability of the machine's response).
Until now, the supervision of these processes has rested essentially on mechanisms of transparency, a limited adversarial requirement and the accessibility of appeal channels. The French Loi pour une République numérique ("Law for a Digital Republic"), the European Platform-to-Business Regulation and the Omnibus Directive have thus set requirements on platforms' ranking criteria. The Omnibus Directive also requires professionals to guarantee, through reasonable and proportionate measures, that reviews come from consumers. As for the European Digital Services Act, it provides for transparency on content-moderation rules, procedures and algorithms. But this transparency is often a sham. Likewise, for the moment, the requirements of sufficient human intervention and of adversarial processes appear very limited in the draft text.
The most efficient forms of this jurisdictionality ultimately emerge from the role played by third parties in a form of participatory dispute resolution. Thus, for example, FakeSpot detects false Tripadvisor reviews, and Sistrix maintains a ranking index that helped establish the manipulation of Google's algorithm in the Google Shopping case by detecting artifacts linked to algorithm changes. Moreover, the draft Digital Services Act envisages recognizing a specific status for trusted flaggers who identify illegal content on platforms.
This singular jurisdictional configuration (the platform as both judge and party, massive situations, algorithmic systems for handling manipulations) thus leads us to reconsider the grammar of the jurisdictional process and its characteristics. If Law is a language (Alain Sériaux), it here takes a new grammatical form, that of the middle voice (mesotès) described by Benveniste. Between the active and the passive voice, there is a voice in which the subject carries out an action in which he includes himself. Now, it is the very nature of this jurisdictionality of compliance to make laws by including oneself in them (nomos tithestai). In this respect, the irruption of artificial intelligence into this jurisdictional treatment undoubtedly bears witness to a renewal of the language of Law.
Aug. 30, 2021
Compliance: at the moment
► An article from March 3, 2021, Smile for the camera: the dark side of China's emotion-recognition tech, followed by an article from June 16, 2021, "Every smile you fake" - an AI emotion-recognition system can assess how "happy" China's workers are in the office, describe how a new emotion-recognition technology is able, through what it will soon be old-fashioned to call "facial recognition", to distinguish a smile reflecting a state of mind of real satisfaction from a smile that does not correspond to it. This allows the employer to measure the suitability of the human being for his or her work. It is promised that the technology will be used in an ethical way, to improve well-being at work. But isn't this technology in itself incompatible with any compensation through ethical use?
The technology, developed by a Chinese technology company and acquired by other Chinese companies with many employees, makes it possible to obtain information on the actual state of mind of the person through and beyond his or her facial expressions and bodily behavior.
Previously, emotion-recognition technology had been developed to ensure security by fighting against people with hostile plans, public authorities using it, for example, in airport controls to detect the criminal plans that some passengers might have.
It is now affirmed that this is no longer about fighting against some evil people ("dangerousness") to protect the group before the act is committed ("social defense"), but about helping all workers.
Indeed, the use made of it will supposedly be ethical. First, the people who work for these Chinese companies with global activity, like Huawei, do so freely and have accepted the operation of these artificial intelligence tools (which is not the case for people who travel, control then being a kind of necessary evil that they do not have to accept, imposed on them for the protection of the group). But above all, the purpose is itself said to be ethical: if it turns out that the person does not feel well at work, that he or she is not happy there, even before he or she is perhaps aware of it, the company can assist.
Let’s take this practical case from the perspective of Law and let’s imagine that it is contested before a judge applying the principles of Western Law.
Would this be acceptable?
No, and for three reasons.
1. An "ethical use" cannot justify an unethical process in itself
2. The first freedoms are negative
3. "Consent" should not be the only principle governing the technological and digital space
I. AN "ETHICAL USE" CAN NEVER LEGITIMATE AN UNETHICAL PROCESS IN ITSELF
These unethical processes in themselves cannot be made "acceptable" by an "ethical use" which will be made of them.
This principle was notably recalled by Sylviane Agacinski in bioethics: one cannot dispose of another person through a disposition of his or her body that makes his or her very person available (see esp. Agacinski, S., ➡️📗Le tiers-corps. Réflexions sur le don d'organes, 2018).
To do otherwise would reduce the person to the thing that his or her body is, which is not ethically admissible in itself; it is excluded, and Law is there precisely so that this is not possible.
This is even why the legal notion of "person", which is not a notion that goes without saying but one constructed by Western thought, acts as a bulwark so that human beings cannot be made fully available to others, for example by placing their bodies on the market (see Frison-Roche, M.-A., ➡️📝To protect human beings, the ethical imperative of the legal notion of person, 2018). This is why, for example, as Sylviane Agacinski emphasizes, there is no ethical slavery (a slave who cannot be beaten, who must be well fed, etc.).
That the human being consents ("and what if it pleases me to be beaten?") changes nothing.
II. THE FIRST FREEDOM IS THE FREEDOM TO SAY NO, FOR EXAMPLE BY REFUSING TO REVEAL ONE'S EMOTIONS: FOR EXAMPLE, BY HIDING WHETHER OR NOT ONE IS HAPPY AT WORK
The first freedom is not positive (being free to say Yes); it is negative (being free to say No). For example, the freedom of marriage is having the freedom not to marry before having the freedom to marry: if one does not have the freedom not to marry, then the freedom to marry loses any value. Likewise, the freedom to contract implies the freedom not to contract, etc.
Thus, freedom in the company can take the form of freedom of speech, which allows people, according to procedures established by Law, to express their emotions, for example their anger or their disapproval, through the strike.
But this freedom of speech, which is a positive freedom, has no value unless the worker also has the fundamental freedom not to express his or her emotions. If, for example, he or she is not happy at work, because he or she does not like what he or she does, the place where he or she works, or the people with whom he or she works, freedom of speech demands that he or she have the right not to express it.
If the employer has a tool that allows him or her to obtain information about what the worker likes and dislikes, then the employee loses this first freedom.
In the Western legal order, we must be able to consider that the infringement occurs at the constitutional level, through the Law of Persons (on the intimacy between the Law of Persons and Constitutional Law, see Marais, A., ➡️📕Le Droit des personnes, 2021).
III. CONSENT SHOULD NOT BE THE ONLY PRINCIPLE GOVERNING THE TECHNOLOGICAL AND DIGITAL SPACE
We could consider that the case of the company differs from that of the controls operated by the State for the monitoring of airports, because in the first case the persons observed have consented.
"Consent" is today the central notion, often presented as the future of what everyone wants: the "regulation" of technology, especially when it takes the form of algorithms ("artificial intelligence"), especially in digital space.
"Consent" would allow "ethical use" and could found the whole system (on these issues, see Frison-Roche, M.-A., ➡️📝Having a good behavior in the digital space, 2019).
"Consent" is a notion from which Law is today moving away in the Law of Persons, in particular as regards the "consent" given by adolescents to the availability of their bodies, but not yet in digital matters.
No doubt because in Contract Law, "consent" is almost synonymous with "free will", whereas they must be distinguished (see Frison-Roche, M.-A., ➡️📝Remarques sur la distinction entre la volonté et le consentement en Droit des contrats, 1995).
But we see through this case, which precisely takes place in China, that "consent" is in Law, as elsewhere, a sign of submission. It is only in a probative way that it can constitute proof of a free will, and this proof must not turn into an irrebuttable presumption.
The Data Regulatory Authorities (for example, in France, the CNIL) seek to reconstitute this probative link between "consent" and the "freedom to say No", so that technology does not allow human beings, through "mechanical consents" cut off from any connection with the principle of freedom which protects them, to dispossess themselves (see Frison-Roche, M.-A., Yes to the principle of will, No to pure consents, 2018).
The more peripheral the notion of consent becomes, the more human beings will be able to be active and protected.
April 21, 2021
Thesaurus : Doctrine
Summary of the article (by Marie-Anne Frison-Roche)
After having wondered about the relationship between Law and Morality, between which it is difficult to find points of contact, the author advances the hypothesis that the latter could find a space of concretization in the technology of artificial intelligence, even though many worry about its deleterious effects. Since the author considers that Compliance is only a method, while ethics would be the way in which morality is flexibly incorporated into Law, the technology known as Artificial Intelligence could therefore express the moral rule ("compliance by design could be the appropriate tool to ensure the effectiveness of moral rules without falling into the excesses envisaged").
The author draws on examples to argue that technology can thus serve, on the one hand, to express the moral rule and, on the other, to make it effective. The moral rule can be drawn up in a balanced way since it is jointly developed by the State and the economic operators, this collaboration taking the form of general principles adopted by the State and implemented through the means chosen by the company. Its content would also be characterized by the search for a "right balance", found through this distribution between the primary moral principles, whose expression would be the act of the State, and the secondary moral principles, whose expression would be delegated to companies.
Taking up what would then be the principles of Compliance, the author applies them to Artificial Intelligence, showing that these technologies include not only the principle of neutrality but also the ethical principles of non-maleficence, even of beneficence (first principles), which companies then decline into secondary principles. Therefore, "compliance can usefully be used to convert these fundamental moral principles into derived moral rules, a source of greater effectiveness".
The result is a "moral by design" that gives the overall system an additional tool of effectiveness. This supposes that the fundamental and derived rules are of established moral quality, because for the moment the technological tool can only ensure their effectiveness, not the moral quality of the rules implemented. In determining the "moral rules of application", the company has margins of freedom, exercised through technological tools.
April 1, 2020
Thesaurus : Doctrine
Jan. 5, 2020
Thesaurus : Doctrine
Full reference: Adam, P., Le Friant, M. and Tarasewicz, Y. (eds.), Intelligence artificielle, gestion algorithmique du personnel et droit du travail (written in French), series "Les travaux de l'AFDT", coll. "Thèmes et Commentaires", Dalloz, 2020, 241 p.
Dec. 24, 2019
MAFR TV : MAFR TV - case
French Financial Markets Authority decision of 11 December 2019 sanctioning the press organization Bloomberg for disseminating false information on the financial market: here, the ethical obligation converges with Financial Law
Watch the video commenting on the decision of the Commission des sanctions of the Autorité des marchés financiers - AMF (French Financial Market Authority Sanctions Commission).
Read the decision.
In 2015, a document supposedly emanating from the Vinci company reached the Bloomberg media outlet, announcing unexpectedly catastrophic results. The two journalists who received it immediately published it without checking anything, and Vinci's listed shares lost more than 18%. It was a crude forgery, which a basic check would have established, a check the journalists had not done.
Four years later, the Bloomberg company was sanctioned for the breach of "disseminating false information" on the financial market, by a decision of the Sanctions Commission of the Autorité des Marchés Financiers (French Financial Markets Authority) of December 11, 2019.
The company being sued argued that it was up to the journalists, not itself, to be held accountable, since the firm had, on the contrary, implemented both detection software and a code of conduct even though no legal rule constrained it to do so. Consequently, it could not be pursued.
But the AMF Sanctions Commission stresses that, independently of this, a general rule of journalistic ethics obliges journalists to verify the authenticity of the documents they publish, which they did not do, whereas an elementary check would have allowed them to see that it was a crude forgery.
In addition, the Sanctions Commission refers to the European Regulation on market abuse, which in its Article 21 provides for the special status to be reserved for press freedom and for journalists, but associates with it this ethical obligation to verify documents. The Sanctions Commission notes that this obligation, laid down both by journalists' ethics and by the reference text of Financial Law, was completely ignored by the two journalists. It is therefore up to the press agency to be held accountable and to be punished.
The media company maintained, however, that the balance between the principles of freedom of the press and freedom of opinion, on the one hand, and the principle of protecting the financial market and investors against the dissemination of false information, on the other, required an interpretation of European Union Law, obliging the Sanctions Commission to refer a preliminary question to the Court of Justice of the European Union.
The Sanctions Commission dismisses this request, considering that the European texts are "clear", which allows it to interpret them itself. And precisely, the European Regulation on market abuse, in its Article 21, provides for the exception in favor of the press and journalists but compels them to respect their ethics, in particular the verification of the authenticity of documents. In this case, they did nothing. They are clearly the authors of a breach attributable to the company.
In a less clear case, one could consider that this balance between two principles, both of public interest, is delicate and that an interpretation by the Court of Justice would always be useful.
Indeed, and more fundamentally, does Financial Law remain an autonomous Law, putting first the objectives of preserving the integrity of the financial market and protecting investors, or is it the advanced point of an Information Law protecting everyone against the action of any "influencer" (a category to which Bloomberg belongs) consisting in disseminating inaccurate information (the notion of "misinformation")?
And that is not so "clear"...
April 21, 2017
Through the Open Culture website, it is possible to listen to Hayao Miyazaki who, in March 2017, declared that video games whose drawings are generated by Artificial Intelligence are "an insult to life itself".
Read below the story, the words the Master spoke, and his conception of what creation and "truly human" work are, echoed by the definitions given by Alain Supiot, who has also reflected on what robots do.
This brings us back to the very notion of "creation" and creative work.