Deepak Ravichandran

Generative AI and Business Model Innovation in Banking

27/08/2025

AI and machine learning in finance; Banking; FinTech

15:00 - 15:30

Most human decisions are taken intuitively, with a mix of reflection and emotion that is often impossible to disentangle. Economists model decisions explicitly as a mixture of objective and subjective elements: economic agents objectively (mathematically) optimize a subjective value function. Using this model, one can create situations in which choice is independent of subjective value and demonstrate that humans often fail at the objective part of decision making. Algorithmic advisers can thus help humans, as they never fail at objective optimization. However, since decision optimality depends both on correct optimization and on knowledge of the right subjective value function, machines that disregard the tastes or “preferences” of the human on whose behalf they act will make poor decisions. The performance of algorithmic advisers is therefore crucially affected by the machine’s ability to learn a particular human’s preferences. But will a human do better at communicating their preferences to a machine than at making the decisions themselves? We know humans fail at common tasks such as deciding what to consume or invest in, but will they be less faulty at the even less natural task of communicating their preferences?

In the controlled environment of the economic laboratory – taken online via a recruitment platform (Prolific) to reach a diverse set of participants – we induce a specific type of risk preferences and ask participants to create investment portfolios of a risky and a risk-free asset that maximize this induced preference, either directly or through a robotic adviser. To induce preferences, participant payoff is a fixed transformation of the probability distribution of risky-asset payoffs, the payoff of the risk-free asset, and the participant’s chosen holdings of these two assets. Thus, participants do not face true risk: their payoff depends on the entire distribution of payoffs, not only on the realized payoff. By controlling participants’ “risk preferences”, we can assess whether the human-algorithm interaction leads to a correct treatment of the subjective part of decision making.

In all experimental treatments, we vary the induced risk preferences over time, so as to see whether participants react to and attempt to communicate these changes. We have one treatment in which participants choose portfolios on their own and three treatments in which participants are advised by algorithms that elicit their human boss’s risk preference via a test (a lottery choice). We ask whether portfolio choices made with or without the algorithmic adviser are better for the preferences we induce. To refine this question, we vary the frequency at which the algorithm elicits risk preferences from humans. This gives three treatments with an algorithmic adviser, depending on whether the elicitation frequency is equal to, higher than, or lower than the frequency at which we change participants’ risk preferences. We ask whether frequent communication allows better fine-tuning of communicated preferences or instead adds noise due to, for example, the human’s biased perception of past algorithm outcomes. The experiment, coded in oTree, will be preregistered on the platform AsPredicted and approved by the Institutional Review Board (IRB) of the University of Utah.
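To make the payoff mechanism concrete: the abstract only states that the reward is a "fixed transformation" of the risky-asset payoff distribution, the risk-free payoff, and the chosen holdings, so the mean-variance form, function name, and parameter values in the sketch below are purely illustrative assumptions, not the authors' actual design. Since oTree experiments are written in Python, a minimal Python sketch shows how a payoff can depend on the whole distribution rather than on a single realized draw.

```python
# Illustrative sketch of an "induced" risk-preference payoff.
# Assumptions (not from the abstract): a mean-variance transformation,
# the function name induced_payoff, and the example parameter values.

import numpy as np


def induced_payoff(weight_risky, risky_payoffs, probs, risk_free_payoff, risk_aversion):
    """Deterministic reward for a portfolio of one risky and one risk-free asset.

    weight_risky     : fraction of the endowment placed in the risky asset
    risky_payoffs    : possible payoffs of the risky asset (per unit invested)
    probs            : probabilities of those payoffs (must sum to 1)
    risk_free_payoff : payoff per unit of the risk-free asset
    risk_aversion    : induced preference parameter (varied across rounds)
    """
    risky_payoffs = np.asarray(risky_payoffs, dtype=float)
    probs = np.asarray(probs, dtype=float)

    # Portfolio payoff in each state of the world.
    portfolio = weight_risky * risky_payoffs + (1 - weight_risky) * risk_free_payoff

    # Moments of the full payoff distribution: no draw is ever realized,
    # so the participant faces no true risk.
    mean = float(np.dot(probs, portfolio))
    var = float(np.dot(probs, (portfolio - mean) ** 2))

    # Illustrative mean-variance transformation of the distribution.
    return mean - risk_aversion * var


# Example: 60% in a risky asset paying 0.5 or 2.0 with equal probability,
# a risk-free payoff of 1.1, and an induced risk-aversion parameter of 0.5.
print(induced_payoff(0.6, [0.5, 2.0], [0.5, 0.5], 1.1, 0.5))
```

Varying `risk_aversion` across rounds would correspond to changing the induced preference over time, which is what the treatments manipulate.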

Antonio Gargano
Keynote: "Lessons from fintech-academic collaborations"
25-27 August 2025

25/08/2025

Keynote
Emilia Bunea
Keynote: "Leadership for finance professionals: A CEO-turned-leadership-scholar perspective"
25-27 August 2025

25/08/2025

Keynote
Allan Mendelowitz
Keynote: "The promise of digital finance: Greater transparency, enhanced efficiency, and more effective and less burdensome regulation"
25-27 August 2025

26/08/2025

Keynote
Albert Menkveld
Keynote: "What we can learn today about the markets of tomorrow: Crypto, crashes and credible research"
25-27 August 2025

27/08/2025

Keynote