Di Chiara, G., Alongi, B.C. (2026). Decisioni assistite. Tessitura della motivazione e garanzie costituzionali nella Generative AI Era. La Legislazione Penale, 1-39.
Decisioni assistite. Tessitura della motivazione e garanzie costituzionali nella Generative AI Era
Di Chiara, Giuseppe; Alongi, Bianca Claudia
2026-03-25
Abstract
The paper examines the resilience of judicial reasoning in the face of the algorithmic outsourcing of segments of the judge’s cognitive and evaluative activity. Starting from the observation that the functional lifecycle of the legal norm – from the construction of the regula iuris to its application to the facts – is built upon unavoidable processes of selection and rarefaction of reality, the essay explores the consequences of the further loss of information brought about by the interposition of algorithmic systems within decision-making domains, and specifically its impact on the nexus of “belonging” between the decision and the judge as a person. Within this framework, the constitutional underpinning of the duty to state reasons is reconstructed as an identity safeguard of the jurisdictional function. From the standpoint of European law, the study analyses the regulatory landscape governing automated decisions – from Reg. (EU) 2016/679 (GDPR) and Dir. (EU) 2016/680 to Reg. (EU) 2024/1689 (AI Act) – in light of the most recent rulings of the Court of Justice of the European Union and the Bundesverfassungsgericht, from which the right to explanation emerges as a right to the traceability of human intervention throughout the decision-making chain. The analysis then turns to the notions of “meaningful human control” and “effective human oversight”, examining their articulations in the AI Act, in Italian Law No. 132/2025 and in the 2025 Recommendations of the CSM (Consiglio Superiore della Magistratura), while highlighting their risks of programmatic vagueness and operational unenforceability. On a propositional note, a standard for a “minimum level of reasoning” in “supported” decisions is put forward, grounded in a duty of disclosure regarding the use of algorithmic systems.
The paper ultimately warns that the surreptitious outsourcing of decision-making domains may affect the validity of the judicial measure itself, inasmuch as its reasoning must be capable of standing on its own once the “supported” portions are excised.
File: DECISIONI-ASSISTITE.pdf (publisher's version, open access, Adobe PDF, 482.77 kB)


