The goal of my research is to understand the computational and neurobiological mechanisms underlying effective learning and decision making. My approach combines computational modeling of human behavior with cutting-edge cognitive neuroscience techniques, including high-resolution functional magnetic resonance imaging (fMRI), multivariate pattern analysis, machine learning, and functional connectivity methods. This computational cognitive neuroscience approach affords a unique window into the learning brain, allowing for a principled investigation of the attention-knowledge interactions that promote successful learning. Click the links below for brief overviews of several lines of my research program.
Speed of categorization | Linking models and brain | Memory-guided decision making | Tuning knowledge with attention
How quickly we can make categorization decisions at different levels of abstraction has long been recognized as a window into the organization of conceptual knowledge. Seminal research demonstrated a behavioral advantage for classifying at the basic level; for example, categorizing an image as a bird is faster than categorizing it as an Indigo Bunting. More recently, however, conflicting results have shown speeded categorization at the superordinate level of abstraction: we are fastest at identifying animals. To reconcile these findings, I used behavioral experimentation and computational modeling to investigate the time course of perceptual encoding (Mack et al., 2008; 2009; Mack & Palmeri, 2010b) and the organization of category knowledge (Mack et al., 2007; Mack & Palmeri, 2010a; Shen, Mack, & Palmeri, 2014), showing that although category knowledge is organized around basic-level categories (e.g., bird), coarsely encoded perceptual information may preferentially access higher levels of the knowledge hierarchy (e.g., animal). I proposed a novel framework (Mack & Palmeri, 2011; 2015) in which visual information is encoded into a perceptual representation that is used to query category knowledge. Coarse visual features encoded early access more abstract categories, but as more fine-grained visual information is encoded over time, the dominantly represented basic-level categories become relatively more accessible. By grounding the temporal dynamics of categorization in a computational modeling framework, I showed that the disparate findings about the speed of categorization decisions are reconcilable under a single theoretical account. My work also provided a testable mechanistic account of how visual perception interfaces with conceptual knowledge.
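The coarse-to-fine idea can be illustrated with a small toy simulation. This is my own sketch for illustration only: the feature coding, the category templates, and the cosine-similarity match rule are invented assumptions here, not the published models.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between an encoded percept and a category template."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Features 0-1 are coarse (e.g., overall shape and animacy); features 2-3 are
# fine (e.g., beak, plumage). The stimulus is a bird, so all four are present.
stimulus = np.array([1.0, 1.0, 1.0, 1.0])
animal = np.array([1.0, 1.0, 0.0, 0.0])   # superordinate: coarse features only
bird = np.array([1.0, 1.0, 1.0, 1.0])     # basic level: coarse + fine features

for label, n_encoded in [("early (coarse only)", 2), ("late (coarse + fine)", 4)]:
    encoded = stimulus.copy()
    encoded[n_encoded:] = 0.0              # fine features not yet encoded
    print(f"{label}: animal={cosine(encoded, animal):.2f}, "
          f"bird={cosine(encoded, bird):.2f}")
```

With only coarse features encoded, the percept best matches the superordinate template; once fine features arrive, the basic-level template wins, mirroring the early superordinate advantage giving way to basic-level access.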
Formal computational models make testable predictions about the unobservable algorithms and representations that underlie mental activity. Similarly, functional neuroimaging reaches beyond behavior to characterize cognition at the level of neural mechanisms. Yet a critical barrier in cognitive neuroscience is the lack of robust methods for linking models and brain. My recent work has focused on developing new approaches that combine computational modeling with advanced fMRI measures. In particular, I have developed a novel technique for selecting among competing models of category learning using multivariate pattern analysis (Mack, Preston, & Love, 2013). The key idea underlying this approach is that if a formal cognitive model captures the true nature of learned knowledge, continuous measures of that model's state during learning should be reflected in trial-by-trial brain states. Comparing the degree of consistency between brain and model states across a set of competing models makes it possible to pinpoint which theory is most consistent with both behavioral and neural responses. Applying this method, I discovered that multivariate brain activation patterns during category decisions consistently decoded signatures of concrete memory traces from previous experiences, as predicted by exemplar theory, rather than abstractions of those experiences, as predicted by prototype theory. This work represents a significant advance in the growing field of computational cognitive neuroscience: my technique can isolate the algorithms the brain uses to solve different tasks by looking for the implementation of those algorithms in patterns of brain activity. This research highlights the two-way street that is model-based fMRI: computational models can be leveraged to reveal the neural correlates of cognitive mechanisms, and neural measures can adjudicate between competing cognitive models.
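The model-selection logic can be sketched in a few lines. This is a toy with simulated data: the simple correlation-based consistency measure, the variable names, and all numbers are illustrative assumptions, not the actual analysis from Mack, Preston, & Love (2013).

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200

# Simulated trial-by-trial neural index (e.g., a pattern-decoding score).
neural_index = rng.normal(size=n_trials)

# By construction, Model A's latent state tracks the neural index and
# Model B's does not, standing in for a correct vs. incorrect theory.
model_a_state = neural_index + rng.normal(scale=0.5, size=n_trials)
model_b_state = rng.normal(size=n_trials)

def brain_model_consistency(model_state, neural):
    """Correlate a model's trial-by-trial state with the neural measure."""
    return np.corrcoef(model_state, neural)[0, 1]

r_a = brain_model_consistency(model_a_state, neural_index)
r_b = brain_model_consistency(model_b_state, neural_index)
preferred = "Model A" if r_a > r_b else "Model B"
print(f"Model A r={r_a:.2f}, Model B r={r_b:.2f} -> prefer {preferred}")
```

The model whose internal states co-vary more strongly with the trial-by-trial neural measure is preferred, which is the core of the consistency comparison described above.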
Most decision making research has focused on perceptual decisions that depend on information currently or recently available to the sensory system. However, decisions in everyday life often rely on information recalled from past experience. Indeed, a common thread across memory theories is that when we make a decision, relevant memories are reinstated in the hippocampus to influence how we act. Despite this theoretical prominence, the neural computations that support how we use memories to guide decision making remain poorly understood. In a recent study (Mack & Preston, in press; SfN 2014), I combined fMRI pattern analyses with computational modeling to evaluate how internally generated mnemonic content influences decisions. After learning associative pairs (e.g., bananas and the Taj Mahal), participants performed a delayed match-to-memory test in which they were shown a retrieval cue and then, after a delay, a probe that either had (match) or had not (mismatch) been paired with the cue during the preceding learning phase. Using neural pattern information analyses, I showed that activation in the hippocampus and perirhinal cortex reflected retrieval of items from memory (e.g., the Taj Mahal when cued with bananas). Further, using functional connectivity methods, I found that the hippocampus was selectively coupled with face- and scene-selective visual regions during face and scene retrieval, respectively. I also linked hippocampal reinstatement signatures to behavioral outcomes using a computational drift diffusion model (DDM) of decision making: the fidelity of item reinstatement predicted how quickly evidence accumulated in service of a decision. In other words, more neural evidence for hippocampal retrieval of the Taj Mahal was related to faster accumulation of evidence toward an ultimate match decision. By measuring item-specific neural representations, my findings provide a novel fMRI demonstration that the hippocampus reinstates detailed memories.
Additionally, by uniquely combining the DDM with a neural measure of memory reinstatement, I have not only shown that what we remember influences how we act, but also provided a testable mechanistic account of how memory guides decision making.
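A minimal DDM simulation makes the proposed linkage concrete. This is an illustrative toy in which a simulated "reinstatement fidelity" score simply scales the drift rate; the parameters are invented and are not those fitted in the study.

```python
import numpy as np

def simulate_ddm(drift, bound=1.0, noise=1.0, dt=0.001, max_t=5.0, rng=None):
    """One diffusion trial: evidence x drifts and diffuses until it crosses
    a bound. Returns (choice, rt); choice is +1 at the upper ('match')
    bound, -1 otherwise (lower bound, or a rare timeout at max_t)."""
    rng = np.random.default_rng() if rng is None else rng
    step_sd = noise * np.sqrt(dt)
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + step_sd * rng.normal()
        t += dt
    return (1 if x >= bound else -1), t

rng = np.random.default_rng(1)
mean_rt = {}
for fidelity in (0.2, 1.0, 2.0):   # treat reinstatement fidelity as the drift rate
    rts = [simulate_ddm(drift=fidelity, rng=rng)[1] for _ in range(300)]
    mean_rt[fidelity] = float(np.mean(rts))
    print(f"fidelity={fidelity:.1f}  mean RT={mean_rt[fidelity]:.2f}s")
```

Higher-fidelity reinstatement yields faster bound crossings, the qualitative pattern described above in which stronger hippocampal retrieval evidence predicts faster evidence accumulation.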
Attention plays an important role in shaping what we learn and remember by favoring relevant features of our environment while discounting irrelevant information. My previous work targeted the post-learning influence of selective attention, showing that neural representations in lateral PFC and lateral occipital cortex reflect attentional biasing (Mack et al., 2013). A critical remaining question is how these attention-weighted knowledge representations develop during learning. Further, how are knowledge structures updated when learning goals change? In a recent study (Mack, Love, & Preston, submitted; SfN 2015), I targeted the role of the hippocampus in learning with model-based predictions of knowledge formation. In doing so, I demonstrated that knowledge representations are dynamically shaped in memory through hippocampal interactions with frontoparietal selective attention mechanisms. These findings provide direct evidence that selective attention shapes the information that is stored in key memory structures during learning. Attention exerts a pervasive influence not only on what we gather from our immediate sensory experience, but also on what we remember and learn from those experiences.
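The notion of attention-weighted knowledge representations can be sketched with a generic exemplar-style similarity function, in the spirit of classic attention-weighted category learning models. This is a textbook-style toy with invented numbers, not the specific model used in the study.

```python
import numpy as np

def similarity(x, exemplar, attention, c=2.0):
    """Attention-weighted exponential similarity: attention stretches
    psychological space along relevant dimensions, so mismatches on
    attended features hurt similarity more than mismatches on ignored ones."""
    dist = np.sum(attention * np.abs(x - exemplar))
    return float(np.exp(-c * dist))

item = np.array([0.9, 0.1])
exemplar = np.array([0.9, 0.9])     # matches on dim 0, mismatches on dim 1

attend_dim0 = np.array([0.9, 0.1])  # current goal makes dimension 0 relevant
attend_dim1 = np.array([0.1, 0.9])  # a changed goal makes dimension 1 relevant

print(similarity(item, exemplar, attend_dim0))  # high: the mismatch is discounted
print(similarity(item, exemplar, attend_dim1))  # low: the mismatch is attended
```

Shifting the attention weights re-tunes which stored experiences an item resembles, which is one way to think about knowledge structures being updated when learning goals change.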