Effects of experiencing the COVID-19 pandemic on optimistically biased belief updating

  1. Paris Brain Institute, UMR 7225, U1127, Institut National de la Santé et de la Recherche Médicale/Centre National de la Recherche Scientifique/Sorbonne Universités, Hôpital Pitié-Salpêtrière, Paris, France
  2. Département de Psychiatrie Adulte, Hôpital Pitié-Salpêtrière, Assistance Publique-Hôpitaux de Paris (AP-HP), Paris, France

Peer review process

Not revised: This Reviewed Preprint includes the authors’ original preprint (without revision), an eLife assessment, public reviews, and a provisional response from the authors.

Editors

  • Reviewing Editor
    Nils Kolling
    Stem-cell and Brain Research Institute (SBRI), U1208 Inserm, Bron, France
  • Senior Editor
    Ma-Li Wong
    State University of New York Upstate Medical University, Syracuse, United States of America

Reviewer #1 (Public review):

Summary:

This manuscript uses a well-validated behavioral estimation task to investigate the degree to which optimistic belief updating was attenuated during the 2020 global pandemic. Online participants recruited during and outside of the pandemic estimated how likely different negative life events were to happen to them in the future and were then given statistics about these events. Belief updating (measured as the degree to which estimations changed after viewing the statistics) was less optimistically biased during the pandemic (compared to outside of it). This resulted from reduced updating from "good news" (better-than-expected information). Computational models were used to unpack how the statistics were integrated and used to revise beliefs. Two families of models were compared: an RL set of models where "estimation errors" (analogous to prediction errors in classic RL models) predict belief change, and a Bayesian set of models where an implied likelihood ratio was calculated (derived from participants' estimations of their own risk and of the base rate risk) and used to predict belief change. The authors found evidence that the former set of models accounted for updating better outside of the pandemic, whereas the latter accounted for updating during the pandemic. In addition, the RL model provided evidence that learning was asymmetrically positively biased outside of the pandemic but symmetric during it (as a result of reduced learning rates for good-news estimation errors).

Strengths:

Understanding whether biases in learning are fixed modes of information processing or flexible and adapt in response to environmental shocks (like a global pandemic or economic recession) is an important area of research relevant to a wide range of fields, including cognitive psychology, behavioral economics, and computational psychiatry. The study uses a well-validated task, and the authors conduct a power analysis to show that the sample sizes are appropriate. Furthermore, the authors test that their results hold in both a between-group analysis (the focus of the main paper) and a within-group analysis (mainly in the supplemental).

The finding that optimistic biases are reduced in response to acute stress, perceived threat, and depression has been shown before using this task in the lab (social stress manipulation), in the real world (firefighters on duty), and in clinical groups (patients with depression). However, the work extends these findings in important ways:

(1) Examining the effect of a new real-world adverse event (the pandemic).
(2) The reduction in optimistic updating here arises due to reduced updating from positive information (previously, in the case of environmental threat, this reduction mainly arose from increased sensitivity to negative information).
(3) Leveraging new RL-inspired computational approaches, demonstrating that the bias - and its attenuation - can be captured using trial-by-trial computational modeling with separate learning rates for positive and negative estimation errors.

Weaknesses:

Some interpretation and analysis (the computational modeling in particular) could be improved.

On the interpretation side, while the pandemic was an adverse and stressful experience for many people (including myself), the absence of any measures of stress/threat levels limits the conclusions one can draw. Past work that used this task to examine belief updating in response to adverse environmental events took physiological (e.g., SCR, cortisol) and/or self-report (questionnaire) measures of mood. In SI Table 1, the authors appear to have some questionnaire measures along these lines, but these may only have been collected from the participants tested during the pandemic.

On the analysis side, it was unclear what the motivation was for the different sets of models tested. Both families of models test asymmetric versus symmetric learning (which is the main question here) and have similar parameters (scaling and asymmetry parameters) to quantify these aspects of the learning process. Conceptually, the different behavioral patterns one could expect from the two families of models need to be clarified. Do the "winning" models produce the main behavioral patterns in Figure 1, and are they in some way uniquely able to do so, for instance? How would updating look different for an optimistic RL learner versus an optimistic Bayesian learner? Would the asymmetry parameter in the former be correlated with the asymmetry parameter in the latter? Crucially, would one be able to reliably distinguish the models from one another under the model estimation and selection criteria the authors have used here (presenting robust model recovery could help to show this)?

Reviewer #2 (Public review):

The authors investigated how experiencing the COVID-19 pandemic affected optimism bias in updating beliefs about the future. They ran a between-subjects design, testing participants on cognitive tasks before, during, and after the lifting of the pandemic-related state of sanitary emergency. The authors show that the optimism bias varied depending on the context in which it was tested: it disappeared during COVID-19 and re-emerged once the sanitary emergency measures were lifted. Through advanced computational modeling, they thoroughly characterize the nature of these alterations, pinpointing specific mechanisms underlying the lack of an optimistic bias during the pandemic.

Strengths pertain to the comprehensive assessment of the results via computational modeling and, from a theoretical point of view, to the notion that environmental factors can affect cognition. However, the relatively small sample size of each group is a limitation. A major impediment to interpreting the findings is the lack of additional measures. While information on, for example, risk perception and the need for social interaction was collected from participants during the pandemic, the fact that these measures could not be included in the analysis hinders the interpretation of the findings, which is now largely based on data collected during the pandemic, for example, reports of increased stress. While the authors suggest an interpretation in terms of the uncertainty of real-life conditions, it is currently difficult to know whether that factor drove the effect; many concurrent elements might have accounted for the findings. This limits understanding of the mechanisms underlying changes in the optimism bias.

Author response:

To reviewer #1:

We appreciate your advice on providing more conceptual motivation for comparing Bayesian and RL-like belief updating models. In short, both model families are complementary in capturing asymmetrical and symmetrical updating. Both consider the magnitude of updating to be weighted by two separate learning rates, one for positive and one for negative belief-disconfirming evidence. If these two learning rates differ, updating is asymmetrical; if they are equal, updating is symmetrical.
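
As a minimal sketch of this shared structure (our notation here; the manuscript's exact parameterization may differ), the trial-wise update can be written as

$$
\mathrm{update} \;=\;
\begin{cases}
\alpha^{+}\,\lvert \mathrm{EE} \rvert, & \text{good news (base rate better than the initial estimate)}\\
\alpha^{-}\,\lvert \mathrm{EE} \rvert, & \text{bad news (base rate worse than the initial estimate)}
\end{cases}
$$

where EE is the estimation error, i.e., the difference between the presented base rate and the initial belief. Updating is optimistically asymmetrical when $\alpha^{+} > \alpha^{-}$, pessimistically asymmetrical when $\alpha^{+} < \alpha^{-}$, and symmetrical when $\alpha^{+} = \alpha^{-}$.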

However, the model families' assumptions about the underlying updating process differ. In the RL-like belief updating model family, updating is assumed to be driven by the difference between base rates and initial beliefs, also known as the prediction error (PE), weighted by the learning rates. In contrast, the Bayesian updating model assumes that updating (i.e., the posterior belief) is driven by combining the base rate (i.e., the prior evidence) with how strongly the initial belief is represented in the estimated base rate (i.e., the likelihood ratio over all alternative hypotheses, or beliefs). Moreover, the two components of the posterior belief can differ in their respective contributions (i.e., precision or confidence), which might be more adaptive to real-life conditions characterized by high uncertainty about the future.
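
In log-odds form, this Bayesian combination can be sketched as (again our notation; the precision weights are illustrative assumptions)

$$
\log\frac{P(\text{event}\mid E)}{P(\neg\,\text{event}\mid E)}
\;=\;
w_{\text{prior}}\,\log\frac{P(\text{event})}{P(\neg\,\text{event})}
\;+\;
w_{\text{LR}}\,\log \mathrm{LR},
$$

where the prior odds are derived from the base rate, LR is the likelihood ratio implied by the initial belief, and $w_{\text{prior}}$ and $w_{\text{LR}}$ capture the respective precision or confidence of each component. Allowing these weights to differ for good versus bad news yields the asymmetric variants, paralleling the two learning rates of the RL-like family.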

For the revised manuscript, we will elaborate on the conceptual and psychological meaning of these two proposed belief updating processes. That said, it is important to note that we do not have direct proof that humans reason in an RL-like or Bayesian way when updating their beliefs about the future. We therefore focus on the complementarity of both models in capturing latent processes and variables in belief updating, which can be leveraged to understand the sources of inter-individual differences and the impact of external contexts, such as experiencing an actual adverse life event, on human psychology.

To reviewer #2:

Thank you for recommending the exploration of potential differences between optimism biases in initial belief estimations (self versus other) during and outside the pandemic. We will also provide more details on the belief updating task and design.

To both reviewers:

We agree on the limitations arising from the lack of physiological and self-reported measures of stress. We collected some self-reports on risk perception, adoption of protective measures, need for social interactions, and mood, but only in participants tested during the pandemic-related lockdowns (reported in SI Table 1). For the revised manuscript, we propose exploring the correlational links between belief-updating biases and these self-reports in this sample. Such correlational analyses may identify variables to target with interventions in future studies of human belief updating in real-world contexts. We will also add a section to the discussion elaborating on the limitation that hinders inferring plausible psychological causes of the differences observed in belief updating during and outside the pandemic.

Importantly, we will follow your recommendations to improve the computational modeling analyses. We will (1) add the confusion matrices from model recovery analyses to assess specificity, (2) provide evidence that the best-fitting model reproduces the observed behavior shown in Figure 1, and (3) conduct model comparisons on the combined groups to justify the focus on the RL-like updating model. In a few weeks, we plan to submit a revised manuscript alongside a point-by-point response to your concerns and recommendations.
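
To illustrate the logic of point (1), below is a minimal, self-contained Python sketch of a model recovery analysis. It uses toy stand-ins for the two model families (the update equations, parameter grids, and noise level are illustrative assumptions, not the manuscript's actual specification): data are simulated from each model, fit by both via BIC, and the proportion of correct identifications forms the confusion matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
lo = lambda p: np.log(p / (1 - p))  # log-odds transform

def simulate(model, n_trials=60):
    """Simulate second estimates under one of two toy updating models.
    'rl': update proportional to the estimation error, with a larger
    learning rate for good news; 'bayes': combine initial belief and
    base rate in log-odds space with equal precision weights."""
    first = rng.uniform(0.05, 0.95, n_trials)  # initial risk estimates
    base = rng.uniform(0.05, 0.95, n_trials)   # presented base rates
    if model == "rl":
        alpha = np.where(base < first, 0.6, 0.3)  # good news: risk lower than feared
        second = first + alpha * (base - first)
    else:
        second = 1 / (1 + np.exp(-(0.5 * lo(first) + 0.5 * lo(base))))
    return first, base, np.clip(second + rng.normal(0, 0.02, n_trials), 0.01, 0.99)

def bic(model, first, base, second):
    """Grid-search least-squares fit of a two-parameter model; lower BIC is better."""
    grid = np.linspace(0.05, 0.95, 19)
    best = np.inf
    for a in grid:
        for b in grid:
            if model == "rl":
                pred = first + np.where(base < first, a, b) * (base - first)
            else:
                pred = 1 / (1 + np.exp(-(a * lo(first) + b * lo(base))))
            best = min(best, np.sum((second - pred) ** 2))
    n = len(first)
    return n * np.log(best / n) + 2 * np.log(n)

# Confusion matrix: rows = generating model, columns = best-fitting model.
models = ["rl", "bayes"]
confusion = np.zeros((2, 2))
for i, gen in enumerate(models):
    for _ in range(50):
        data = simulate(gen)
        confusion[i, int(np.argmin([bic(m, *data) for m in models]))] += 1
print(confusion / confusion.sum(axis=1, keepdims=True))  # rows near identity = good recovery
```

A confusion matrix close to the identity indicates that the model selection procedure can reliably distinguish the two families; off-diagonal mass would signal the identifiability concern raised by Reviewer #1.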
