
The future of Trustworthy AI

Insights on Trustworthy AI from the AI4EU workshop "The Culture of Trustworthy AI. Public debate, education, practical learning"

The widespread use of AI technologies across many business sectors has raised questions about their ethical implications. How can transparency and privacy in the use of data be ensured? How can bias, discrimination, social polarization, fake news, and political manipulation be avoided?

In recent years, much of the discourse on Artificial Intelligence has pointed to the need to design services that ensure compliance with ethical guidelines in the use of AI tools. The DIH4AI project addresses this need and follows the path traced by the AI DIH Network Project, proposing a broad set of services that AI DIHs could in the future offer to their networks to tackle the legal and ethical issues connected with the development and adoption of AI.

Within the project, several experiments are underway. Among others, Fortiss in the Munich region (Germany) is testing the PIANAI platform, which enables accountable-by-design development of AI solutions. The platform will soon be tested in a regional experiment, after which its integration into TNO's AI Manufacturing Testbed in the South Netherlands will be trialled. Another experiment in the field involves an "ethical AI assessment", which will apply research outcomes from the ETAPAS project.

This experiment was also discussed at the workshop "The Culture of Trustworthy AI. Public debate, education, practical learning", held in Venice on 2 September and organized by the AI4EU project. There, Sara Mancini, Responsible AI Lead at PwC New Ventures Italy, seconded to Intellera Consulting and responsible for the "Trustworthy AI" experimentations within the DIH4AI Consortium, presented the ETAPAS Risk Framework for Disruptive Technologies adoption: a comprehensive framework that gives an overview of the ethical, social, and legal risks a public sector organisation may face when adopting a disruptive technology such as AI, Robotics, or Big Data. As Mancini explained, the ETAPAS project will move forward towards its research objectives and deliver an indicator framework, accompanied by a prototype platform and a governance model, giving public sector organisations practical tools to assess ethical, social, and legal compliance in the adoption of AI, Big Data, and Robotics technologies, among others.
