AI: toward which theory of alignment? (Accoto, 2024)

In my latest essay, “Il Pianeta Latente”, I called into question the naive and widely shared solution of so-called “alignment”, provocatively urging a possible and more fruitful “philosophy of misalignment”. Framed as the resolution of the crude human-machine antagonism, the question of alignment in fact opens up, on closer inspection, a whole series of theoretical, conceptual, and philosophical complexities. Existential ones, ultimately. And economic ones too, I would say, as the essay “Artificial Intelligence. Economic Perspectives and Models” (2024) clearly shows. So, which theory of alignment are we talking about when we talk about alignment? (Accoto, postscript to Il Pianeta Latente, 2024)

“… The challenge facing AI scientists is to create intelligent, autonomous agents that can make rational decisions. This challenge has confronted them with two questions (Oesterheld, 2021, p. 2): “What decision theory do we want an AI to follow?” and “How can we implement such a decision theory in an AI?” This chapter provides a critical overview of how the economic theory of decision-making has helped to answer these two questions and how it can benefit from the practical solutions that AI scientists are working on. The main contribution is to identify how economists can contribute to the AI “alignment problem,” and moreover provide a fresh perspective on the alignment problem. AI systems are said to be aligned when they do what they are supposed to do, and do no harm. They are said to be value aligned when they share human values. The alignment problem has so far largely attracted computer scientists, programmers, and philosophers. Economists have until now contributed little (Gans, 2018) …” (Artificial Intelligence. Economic Perspectives and Models, p. 60)

Published by

Cosimo Accoto

Research Affiliate at MIT | Author "Il Mondo Ex Machina" (Egea) | Philosopher-in-Residence | Business Innovation Advisor | www.cosimoaccoto.com