AI has generated enthusiasm but also fear. As a result, a narrative has developed that sees in the development of AI more the potential for a machine-ruled dystopian future than the possibility of solving long-standing problems afflicting humanity.
It goes without saying that some of the technologies deriving from AI do hold the potential for disruptive transformation of our society, and such potential must be kept in check if serious problems are to be avoided. It is recent news that the successful Brexit campaign may have been unlawfully influenced by the Leave campaign's ability to target political messages on social media at a restricted demographic of voters liable to be convinced, and whose propensity to accept the desired political message had been determined through an AI recommendation system. Another increasingly concerning example is deepfakes for video and textual content; in particular, the sometimes uncanny generative ability of advanced language models such as GPT-3 holds the potential for ethically questionable outcomes. Early proposals for use cases of GPT-3 prompted intense soul searching at OpenAI, resulting in more than a year of restricted availability and a long list of restrictions on permitted applications when the technology was finally made available to a larger set of developers1. And because AI is a new technology, its problems and limitations are sometimes difficult to understand and evaluate; this is illustrated by the frequent identification of training bias or of vulnerabilities such as typographic attacks, which would be concerning if found in systems that are mission-critical or used to make sensitive decisions.
It is therefore only natural that governments in several countries have placed the trustworthiness and reliability of AI systems at the heart of their governance models. This has been particularly relevant in Europe, the USA, and China. Autonomy from intelligent systems, prevention of harm, fairness, and explainability of algorithms are some of the pillars of the debate.
With its “White Paper on Artificial Intelligence”, the European Commission has developed a framework that reflects the spirit and content of its ethical guidelines. The direction appears to be a risk-based approach to the development of intelligent systems that places human beings, and respect for their dignity, at its centre.
It is worth asking how this ethical framework can be “translated” into investment choices aimed at growing the relevant economic sector and generating benefits for the data-fuelled economy while respecting such fundamental principles. The directives that are gradually taking shape underline the need to protect privacy, algorithmic transparency, workers’ rights, and social and gender inclusion, to ensure interoperability between AI systems, and to assess the reliability of AI techniques.
With reference to interventions of a public nature, the guidelines identify the best strategy for innovating responsibly in the field of AI, given the centrality of the human being and the role of literacy in this area. The recommendation is to substantially increase the funds dedicated to implementing this strategy and to support these policies.
The European model is characterised both by the centrality of fundamental rights and by the possibility of regulatory interventions in the presence of concrete risks for European citizens. Not surprisingly, the European Commission stressed that “international cooperation on AI issues must be based on an approach that promotes respect for fundamental rights, including human dignity, pluralism, inclusion, non-discrimination and the protection of privacy and personal data”, and that it will strive to export its values to the world.
As technology develops, this approach will force European institutions to constantly assess the risks of emerging AI technologies, even when it is necessary to use technological infrastructures located under foreign jurisdictions, and to decide on their use. It remains to be seen whether such institutional screening will be timely enough not to undermine the public trust that industry and citizens need in order to reap the opportunities offered by AI.
When compared with other governance models, such as those developed by the US and China, the European framework has clearly identifiable peculiarities. In the US model, the regulation of new technologies is traditionally entrusted to private forms of self-regulation. Government intervention is therefore mild and limited to the enforcement of existing regulations, especially those that protect competition. The US strategy outlined by  is no exception. It prioritises AI research and mentions reliability and technical security requirements. At the same time, it limits the scope of regulatory interventions solely to the protection of civil liberties, privacy, security, and the economic interests of the country.
By contrast, the Chinese model is focused on the exploitation, by the state, of the potential of AI. Although both an ethical framework and some privacy protection standards can be found, the non-binding nature of these precepts leaves room for legislative drifts that legitimise the prevalence of the public interest over the rights of the individual. An example of this complex intertwining is the Social Credit System, a mechanism for attributing an “individual social score” based on facial recognition technologies and automated data processing. The collaboration of the giant Alibaba in the realisation of the complementary Sesame Credit project also highlights how public-private collaboration in the diffusion of AI systems is oriented towards the pursuit of the interests of the State.
The presence of at least three models (European Union, United States, and China) nonetheless highlights a plurality of approaches to the governance of AI and its development. The role that MPAI intends to play in giving users fact-based indications of the Performance of AI systems must therefore be positioned at an appropriate level.