I was reading articles about the Digital Services Act, and the more I read about this regulation, the more it reminded me of the Ethical Considerations of the Spanish Charter of Digital Rights.
The Ethical Considerations group consisted of Carissa Véliz, Paloma Llaneza, José Luis Piñar, Instituto Hermes, Simona Levi, and Miguel Pérez Subías, and was led by myself.
It is a pity that this document has not been given more visibility, because it is a very good one. We showed a lot of foresight in drafting some of its principles, and I would like to list them and comment on a few.
a. Principle of the centrality of the human person
“The person, the human person, is the center and subject of Fundamental Rights, both in the physical and digital environment. Technological innovation can call into question something as indisputable as the fact that the person and only the person is the centre and subject of rights.
Realities such as the ecology of technologies, including biological convergent technologies (bio), information (info), nanoscale (nano) and cognitive (cogno) technologies, all labelled together as BINC technologies, need to be carefully observed by the law, which cannot ignore the issues they raise nor can it delay their legal analysis, given the experience of rapid changes in technology.”
Before going on, I want to make a couple of comments. What is remarkable here? There is nothing more ethical than putting the human person at the center and “that the person and only the person is the center and the subject of rights.” It seems obvious, but it is not.
Precisely, in the field of tech platforms the individuals’ Fundamental Rights are put aside for the sake of the rights of the platforms themselves. (This connects with the Principle of Preponderance of Fundamental Rights that follows).
I also want to emphasize the concept that we included in this principle: BINC technologies, or “The BINC Ecology” which includes all technologies, and not only the most popular ones, such as AI, Machine Learning, Internet of Things, etc. In this video Dr. Steen Rasmussen and I explain what BINC Technologies are in detail.
This means that this principle will apply to current and future disruptive technologies. The principle continues as follows: “… In particular, and as far as task robotization processes are concerned, the European Parliament, in its Resolution of 16 February 2017, with recommendations to the Commission on Civil Law rules on Robotics (2015/2103(INL)) warns that “it is of vital importance that the legislator weighs up the legal and ethical consequences, without thereby hindering innovation”.
In other words, innovation in the European territory is key and should not be hindered as long as it does not violate the Fundamental Rights of individuals.
“The Law must articulate mechanisms and legal solutions that address, for example, the cases in which the demand for liability for the results that may give rise to a high individual and/or collective risk produced by the actions of robots or other machines that use Artificial Intelligence, or any other upcoming technology, may arise, but this cannot derive in extending to them the recognition of rights and duties that only correspond to the person.”
This paragraph makes it very clear that the human person is the only one to whom the recognition of rights and responsibilities corresponds, and not robots.
This principle ends by saying: “In conclusion, the principle of the centrality of the person is an essential presupposition that, despite its apparent obviousness, must be emphasized in the recognition and development of Digital Rights”.
b. Principle of preponderance of Fundamental Rights.
This principle reinforces the rights of this Charter and the Fundamental Rights, which constitute the legal framework for their application and interpretation.
c. Principle of Universality.
The ethical principle of universality affects Digital Rights, requiring them to be guaranteed for all persons and in every sphere in which each of them is defined.
This principle is a reinforcement of the principle of non-discrimination in the application of Digital Rights.
d. Principle of non-exclusion.
“Those who by free will do not access the digital environment cannot be excluded.”
This is nothing more and nothing less than the SOLUTION to ensure that the elderly, and people who for whatever reason do not know how to or cannot interact with technology, are not excluded.
But what is happening right now? Just the opposite. We are erring on the side of technological solutionism, thinking that technology can fix everything. Is it the arrogance of the technologist, or the short-sightedness of teams formed only by technologists? A bit of everything, I’m afraid. The solution to many of our problems lies in the development of social technologies, not just physical technologies.
“This principle refers to the fact that any right that is developed has to include all analog people, attitudes or behaviors, guaranteeing the use and development of goods, services and access to non-digital processes.”
But the elderly and the ATM issue are not the only examples of the discrimination caused by technological solutionism and by the failure to apply the Principle of Non-Exclusion in the design of a new technology.
Precisely for this reason, in our consulting method we start by mapping the people who will be affected by a particular technology, from the design stage, in order to know whether the new technology will satisfy all individuals’ needs. If it will not, we have to include human interaction as a solution, i.e., analog solutions.
If you doubt it, remember the COVID-19 applications. I had the opportunity to study every country that was implementing technological measures to help tackle the pandemic, and not a single one designed an effective analog alternative for groups at risk of exclusion and discrimination. Everything was focused on technology.
e. Principle of Equality and Inclusiveness in access.
“Access to the digital environment cannot be conditioned either by technological complexity or by options that force people to become customers or users of specific companies.”
Here, two things: a) Governments and Public Administrations signing contracts to use tech giants’ products. It is a shame how in Europe we have handed over the privacy of our students in the field of Education to Google Workspace and Microsoft Teams. This turns ALL STUDENTS into forced clients of the tech giants.
Another example is the different Public Administration departments using Zoom to stay in contact with citizens during the pandemic and afterwards. And there are many more examples like these.
In my post I talk about the Open Source tools the Comunidad Valenciana was using in the field of Education. Unfortunately, they signed with Microsoft Teams.
As an example, in Spain we have the case of applying for the Social Voucher public assistance, an extremely complicated process. So much so that Civio, a Spanish NGO, designed a tool that helped more than 300,000 people easily apply for the Social Voucher.
Later in this post, I will talk about Civio’s complaint asking for the release of the source code of the algorithm (BOSCO) used by the Administration to designate who receives the Social Voucher.
“Digital technologies must promote equality and try to eliminate unjustified discrimination. All people must be guaranteed equal opportunities.
Information and Communication Technologies (ICTs) cannot become a tool through which, by means of their different techniques, discrimination or exclusion are increased.”
f. Principle of abundance (vs. artificial scarcity)
“Where it is possible to universalise access, any system that fosters abundance should be favoured, avoiding any system that artificially imposes scarcity to obtain some kind of particular benefit (whether to avoid modifying an administrative procedure, to generate extra income or profit without adding services, among others), as long as content creators are adequately compensated, and diversity in the market is fostered.”
Because this principle can be interpreted in many ways (for example, as implying that there is no right to intellectual property) and could therefore lead to inequality, it explicitly mentions remunerating the work of creators, fostering diversity, and combating inequality.
g. Beneficence and Non-Maleficence Principle.
“Digital technologies should promote the well-being of people, and should not harm them either intentionally, or through the imposition of risks of individual, collective and/or systemic harm. This makes it necessary to develop standards on security when individuals interact with digital systems, establishing a clear procedure for measuring, testing and certifying the functionality of digital systems.
Many digital technologies affect people’s right to privacy which, in turn, implies vulnerability with respect to a myriad of harms (discrimination, extortion, and numerous external pressures) which puts our autonomy at risk.”
I think this principle makes it very clear what it defends, which is a great deal, and what it proposes so that harm to people does not occur. I do want to emphasise one thing: this principle alludes to the application of technology, and takes into account the CONTEXT of its application.
h. Principle of Autonomy.
“Digital technologies must respect the right of individuals to retain the power to decide what decisions to make, to exercise freedom of choice when necessary, and to cede it in cases where there are justified reasons to renounce this decision-making in a limited and temporary manner. Any delegation should also remain, in principle, voidable.
Public investments in software and knowledge should have maximum public utility, and as far as possible respect the freedom of individuals, by releasing the source code of the Public Administration’s algorithms, or simply facilitating its reuse.”
i. Principle of Auditability.
“Where there is a conflict that affects rights or magnifies existing risks, prejudices and harms in society, digital technologies should be audited by independent third parties and made clearly available to society.
This allows the source code, the objectives for which the algorithm was designed and the functional logic underlying the decisions that may be taken by such processes to be independently inspected so that they can be questioned or subjected to administrative or jurisdictional review.
In turn, this principle makes it possible to identify the person or persons responsible in the event of a negative result that causes harm to persons and that requires, on the one hand, a greater understanding of how such a result was produced and, on the other hand, an explanation of its operation by any person in the administration who is familiar with the system and the specific procedure.
For the sake of the principles of equality and fairness, this principle allows the citizen to know the criteria that have been applied to her or him in a decision making that affects her or him and, with that information, if the decision is fair and in accordance with the law, to take the measures tending to make the decision favourable to her or him, and if it is not, to challenge it judicially or extra-judicially.”
If this principle were applicable, what happened to Civio when it went to court asking the Public Administration to release the source code of BOSCO would not have happened: “the software that determines who has access to these grants and who does not, does not work. And we have taken to court the resolution of the Transparency Council that denies us the right to scrutinize its source code.”
You can read the entire article here:
What reasons did the Court give? Quoting Civio’s original tweet, the reasons are:
1. Danger to public safety
2. Danger for national security
3. Intellectual property
4. Unjustified abuse of the Transparency Law.
And I wonder: if it is software paid for with public money, shouldn’t it be Open Source? Yes, I know it is a rhetorical question, but opacity is our daily bread. This is just one example. I would like you to read this article about the SyRI ruling to see that things can have a different outcome. That happened in the Netherlands, not in Spain.
j. Principle of Access to Reliable Information.
“The principle of access to reliable information involves access to the source code of the platforms and making sure that different groups of people are separated and mixed (not favouring echo chambers) …”
I am very proud of this principle because it was one of my contributions in its entirety, and one that I still think is fundamental and is slowly becoming a reality: the auditability of large internet platforms.
Ensuring that different groups of people are not kept apart and that the creation of echo chambers is not favoured is nothing other than the elimination of personal profiling by political, religious or philosophical ideology, race, sexual preference, etc. This is Article 9.1 GDPR.
For what purpose? Let’s go to the second part of this principle:
“… so that information is discussed and contrasted allowing people to have sufficient knowledge to make sovereign decisions, achieving greater consistency in information, avoiding polarisation and fragmentation of society.”
In other words, that we can talk and debate as we did in the past. That bots are eradicated, that there is no manipulation through hyper-segmentation. In short, that our democracy is not fragmented and that people’s Fundamental Rights are respected.
k. Principle of Digital Sovereignty.
“The public authorities shall take care to avoid the risk of excessive dependence on transnational suppliers of digital services, especially to the public administration, to the extent that in particular circumstances national sovereignty may be affected by interruption or inappropriate use of the services.
As far as possible, the development of proprietary technologies shall be encouraged through research, development and innovation policies aimed at a more adequate compliance with the principles expressed in this Charter.”
There is little to add to this principle, which is perfectly formulated.
This is it. What do you think of these ethical principles? Is there any other you would add?