|POSITION TITLE/AFFILIATIONS||Postdoctoral fellow, 2008-2010, Division of Humanities and Social Sciences, California Institute of Technology|
|EDUCATION/TRAINING||Ph.D., 2008, Experimental psychology, New York University|
|RESEARCH EXPERTISE||Neural mechanisms and computations of decision-making|
I am interested in how humans and animals make decisions and in the neural mechanisms that guide this process. We develop mathematical models that make quantitative predictions about choice behavior and about brain activity related to valuation and choice. We study decision making in humans using functional magnetic resonance imaging (fMRI). Recently, we have also begun to investigate decision making in non-human primates.
See below for ongoing research projects.
Decision under risk and uncertainty
We have been investigating how people make motor decisions in the presence of risk and uncertainty (Wu et al., 2006; Wu et al., 2009b), how humans trade off speed against accuracy in movements (Dean, Wu, & Maloney, 2007), and how motor decisions under risk differ from economic decisions under risk (Wu et al., 2009a, 2011). A brief summary of this line of work can be found in Wu et al. (2015, Brain Mapping: An Encyclopedic Reference).
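The core computation in this line of work can be illustrated with a minimal sketch. In a typical motor-decision task, the endpoint of a rapid movement is perturbed by Gaussian motor noise, and the decision maker effectively chooses an aim point that maximizes expected gain given reward and penalty regions. The function names, one-dimensional setup, and payoff values below are illustrative assumptions, not the actual task parameters used in our studies.

```python
import math

def norm_cdf(x, mu, sd):
    # Cumulative distribution of a Gaussian via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

def expected_gain(aim, sd, reward_zone, penalty_zone, reward=100, penalty=-500):
    # Probability that the movement endpoint, Gaussian-distributed around the
    # aim point with motor noise sd, lands in the reward or penalty interval
    p_reward = norm_cdf(reward_zone[1], aim, sd) - norm_cdf(reward_zone[0], aim, sd)
    p_penalty = norm_cdf(penalty_zone[1], aim, sd) - norm_cdf(penalty_zone[0], aim, sd)
    return reward * p_reward + penalty * p_penalty

# Sweep candidate aim points: with a penalty region abutting the reward
# region on the left, the optimal aim point shifts rightward, away from
# the penalty, rather than sitting at the reward-region center
aims = [i / 10.0 for i in range(-20, 41)]
best = max(aims, key=lambda a: expected_gain(a, sd=1.0,
                                             reward_zone=(-1.0, 1.0),
                                             penalty_zone=(-3.0, -1.0)))
```

The qualitative prediction, borne out in this kind of analysis, is that optimal aim points depend jointly on the decision maker's own motor variability and the payoff landscape.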
Recently, we have also become interested in perceptual decision making and, in particular, in how willing humans are to gamble on their own visual performance in order to receive rewards (Wu et al., 2015). The critical question we seek to address is whether risky choices involving metacognition – knowledge about one's own ability or performance – differ from those that do not, such as choices in economic lottery tasks. At the neural level, we ask how the representations of reward value and reward probability might differ across these tasks.
Integration of prior knowledge and current information in probabilistic inference
Uncertainty is a central feature in many decisions we face. To make good decisions under uncertainty, one needs to have access to information about the probabilities of occurrence associated with the events of interest. It is often the case that probability information is not explicitly provided to the decision maker and hence needs to be inferred or estimated.
This fundamentally important computation is referred to as probabilistic inference. Our goal is to understand, at the neural and algorithmic levels, how the brain performs probabilistic inference. In particular, we focus on reward probability and investigate the neural mechanisms for integrating prior knowledge and current information while human subjects are performing probabilistic inference on reward probability.
In a recent paper (Ting et al., 2015), we found that as the uncertainty in the current information increased, subjects assigned greater relative weight to the prior information, and vice versa. Using fMRI, we found that the medial prefrontal cortex (mPFC) represents both prior and likelihood information, and that the neurometrically estimated relative weights in mPFC are statistically indistinguishable from the behavioral estimates. This indicates that mPFC not only represents different sources of reward-related information but is also involved in integrating them during probabilistic inference.
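The weighting behavior described above is exactly what ideal Bayesian integration predicts. As a minimal sketch (the Beta-binomial parameterization here is an illustrative assumption, not our experimental design): with a Beta prior over reward probability and a sample of observed outcomes, the posterior mean is a precision-weighted blend of the prior mean and the sample mean, and the weight on the prior grows as the current sample becomes less informative.

```python
def posterior_mean(prior_a, prior_b, successes, n):
    # Beta(a, b) prior over reward probability combined with a binomial
    # sample of n outcomes; conjugacy gives the posterior in closed form
    post_a = prior_a + successes
    post_b = prior_b + (n - successes)
    return post_a / (post_a + post_b)

def prior_weight(prior_a, prior_b, n):
    # Relative weight on the prior: prior pseudo-counts over total counts.
    # A smaller (more uncertain) sample pushes the estimate toward the prior.
    return (prior_a + prior_b) / (prior_a + prior_b + n)
```

For example, with a Beta(4, 4) prior, a 2-observation sample leaves most of the weight on the prior, while a 20-observation sample shifts it toward the data, mirroring the shift in relative weighting we observed behaviorally and neurometrically.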
Reference-dependent value coding
Characterizing how values are coded in the brain is a fundamental problem in neuroeconomics. Empirically, both neurophysiological and neuroimaging studies have shown that values are coded in a relative fashion (Tremblay & Schultz, 1999; Elliot et al., 2008). At the theoretical and algorithmic levels, different models have been proposed to account for relative-value coding in the parietal cortex (Louie et al., 2011) and in the orbitofrontal cortex (Padoa-Schioppa, 2011).
My interest in value coding centers on reference dependence and status-quo effects. Reference dependence has long been shown to be a key feature of valuation (Kahneman & Tversky, 1979). A recent theoretical reformulation of the reference point has revived interest among economists in testing what constitutes a reference point (Koszegi & Rabin, 2006, 2007).
My goal is to investigate, at the neural and algorithmic levels, how a reference point is formed and how it affects value coding in the brain. Our current aim is to test two hypotheses – the status-quo-as-reference-point hypothesis and the expectation-as-reference-point hypothesis. We are currently piloting behavioral experiments in humans, and in the near future we plan to implement the same experiments in non-human primates.
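The two hypotheses make divergent predictions that can be illustrated with a standard Kahneman-Tversky style value function. The parameter values and the specific outcome used below are illustrative assumptions for exposition only, not fitted values from our experiments.

```python
def pt_value(outcome, reference, alpha=0.88, lam=2.25):
    # Reference-dependent value: concave for gains, convex and steeper
    # for losses (loss aversion, lam) relative to the reference point
    x = outcome - reference
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# The same objective outcome of 50 is coded as a gain against a
# status-quo reference of 0, but as a loss against an expectation-based
# reference of 100 -- the contrast the two hypotheses turn on
gain_coded = pt_value(50, reference=0)
loss_coded = pt_value(50, reference=100)
```

Because the loss branch is amplified by loss aversion, the expectation-based coding of the identical outcome is not merely negative but disproportionately so, which is what makes the two hypotheses behaviorally and neurally separable.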