Wellbeing Workshop · March 16, 2026 · 9 responses
Nine participants responded to the post-workshop survey (some only partially). This page reports the aggregate results, selected quotes, and our synthesis of what they imply for future workshops. Two respondents asked that their contributions be shared anonymously and are quoted without attribution.
This page is intended both as a transparency report for grantmakers and as a design input for the upcoming CM workshop and future workshops in the Pivotal Questions series. The survey form is also linked from CM and PBA workshop sites for reference.
Participants rated the workshop relative to other academic events they had attended. Higher = better. Ratings ranged from 32 to 93, with a mean of 71 and a median of 75 — comfortably above how this group rated comparable events they had previously attended.
"The material was much more rigorous and useful than other settings, and the conversation higher-quality, but that was slightly hamstrung in the end by a sort of failure to arrive at key decision-relevant considerations."
"I was familiar with some of the publicly available versions of the arguments for resolving / reconciling funders' frameworks; the value was the negative result of seeing them being tried and mostly failing. The rating is as much about my view of the decision value to major decision-makers of other workshops / conferences I've attended (which don't even try) as it is about this one."
"I loved having the focal example and the decision-focus it was linked to. That made clear what kinds of comments were helpful."
"This workshop felt really good and collaborative. It was really useful for surfacing what people are thinking about this issue and what's needed next. It's inspired us, in HLI, to do some work on wellbeing weights that we now see is really useful, but had not been a priority."
"I think it was too focused on technicalities of wellbeing measurement for the needs of my current role. It did help me understand what our real cruxes are, but I think it wasn't that focused on them."
"At this point in my career and life (with young children), I attend very few workshops/conferences and aim to attend only those of highest quality. Among that select group, this workshop ranks well. For me, it was valuable to hear the priorities of the effective altruism funders."
"It was a revelation to see that what we are working on is just as relevant for effective altruism judgments."
Participants rated sessions they attended on a 0–3 scale (0 = little/no value; 3 = high value). Averages below are computed only over respondents who rated that session. Blank = didn't attend or didn't rate.
n varies by session (range 6–8 raters). Lower ratings for some sessions may reflect their later timing and partial attendance, not necessarily lower quality.
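The averaging rule described above — compute each session's average only over the respondents who actually rated it, skipping blanks — can be sketched as follows. This is an illustrative snippet with made-up ratings, not the actual survey data:

```python
def session_average(ratings):
    """Average over respondents who rated the session.

    Blanks (None) mean 'didn't attend or didn't rate' and are excluded
    from both the numerator and the denominator.
    """
    rated = [r for r in ratings if r is not None]
    return sum(rated) / len(rated) if rated else None

# Example: eight respondents on a 0-3 scale, two of whom left it blank.
# The average is taken over the six who rated (n = 6), not all eight.
print(round(session_average([3, 2, None, 1, 3, None, 2, 2]), 2))  # 2.17
```

This is why n varies by session in the table: the denominator is the number of raters for that session, not the number of survey respondents.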
"Best practices for adjusting due to scale use heterogeneity [became clearer] … Useful to know that Founders Pledge has this in their moral weights."
"There is a mismatch between where academic contention focuses on (scale-use heterogeneity, comparability) and what would shift funding allocation most substantively (linearity, neutral point) — the latter got less airtime seemingly because research was thinner."
"I hadn't appreciated how much effective altruism funders are focused on saving lives and how they view quality of life as a relatively minor consideration."
Participants selected areas where they felt they understood more after the workshop. The table shows, for each area, how many respondents reported improved understanding, on a two-point scale (1 = improved, 2 = strongly improved). Qualitative belief changes are shown below.
"I came in thinking 10 WELLBYs = 1 DALY but realized that was probably too low and so 4–7 is a better starting point. This was a good takeaway. Useful to know that Founders Pledge has this in their moral weights. Next is that DALY disability weights vs WELLBY weights are very different and WELLBY weights are plausibly better."
"I revised upwards my assessment of the importance of measuring stated preference marginal rates of substitution among different things people care about, even relative to the measurement of levels of various aspects of well-being … the main issue we should have focused on, in my view, is how, both philosophically and technically, to get conversion factors between disparate aspects of well-being, such as mental health and physical health."
"It made me much more pessimistic about whether effective altruism funders would be interested in the kind of work my collaborators and I are doing."
"It certainly made me develop my attitude towards WELLBY as a useable measurement, and I'll introduce the ideas discussed to the various people I know who are also involved in socioeconomic development research/practice/policy."
"[The workshop] inspired us, in HLI, to do some work on wellbeing weights that we now see is really useful, but had not been a priority. And we're now in discussion to collaborate with Dean Jamison and colleagues who are working on a Lancet report on updating the DALY to better capture non-fatal outcomes."
"I personally recalibrated downwards a bit on the value of workshops and dialogue vs new empirical work, but I'm cautious to over-update since my impression is the Unjournal plans to run more such workshops going forward."
Respondents indicated whether they wanted more or less of specific format elements: ++ (much more), + (more), − (less), −− (much less). Net score = sum of responses (++ = +2, + = +1, − = −1, −− = −2). The maximum possible score is shown for reference.
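The net-score formula above is a simple weighted sum. A minimal sketch, using hypothetical responses rather than the actual survey data:

```python
# Weights from the scoring rule: ++ = +2, + = +1, - = -1, -- = -2.
WEIGHTS = {"++": 2, "+": 1, "-": -1, "--": -2}

def net_score(responses):
    """Sum of weighted responses; blanks (None) are ignored."""
    return sum(WEIGHTS[r] for r in responses if r)

def max_possible(responses):
    """Reference maximum: every non-blank respondent answering '++'."""
    return 2 * sum(1 for r in responses if r)

# Example: five respondents rate one format element, one leaves it blank.
responses = ["++", "+", None, "-", "+"]
print(net_score(responses))     # 3
print(max_possible(responses))  # 8
```

A positive net score means respondents on balance wanted more of that element; the maximum-possible column shows how far short of unanimous enthusiasm each score falls.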
"Something felt a bit off — I think structuring in a more sequential way with the aim to answering specific questions as a group might be better."
"What worked was the modular drop-in workshop structure, seeing the publicly available versions of the arguments for resolving / reconciling funder frameworks being tried and failed, and letting productive discussions overrun schedule. What didn't work, at least to my satisfaction, was discussion not focusing on what I thought would shift funding allocation most substantively (linearity, neutral point) and instead on topics where a lot of academic work had been done."
"Worked well to have a mix of practitioners and academics, focused on well-defined problems."
"Honestly, I think David slightly overengineered the documentation and structure around the workshop. Academic workshops all work effectively the same way for a reason. It would have been easier if there was just one single document that said, briefly: here's why we're here; here's who's going to speak, in what order, and what they're going to speak about."
"It felt a bit disorganized. I wasn't sure what was happening when. More structure is better. More enforced breaks. Hey David! … if there was more clarity on what would be in each session I might have dropped in for just a few, but it was hard to know which they would be in advance."
"I liked the ability to make comments both in the chat and on the documents. In retrospect, I realize I didn't know where was the best place to put in comments."
The belief-elicitation session was rated lowest of all sessions (average 1.0/3). Several respondents offered specific suggestions for improving it.
"Pre-workshop belief elicitation might have mitigated the issue that the session was too rushed, and asking participants (especially funders) for their preferred action-informing refinements of the questions (which I thought were vague as stated) would have been better."
"I think you could have made this way simpler! … I don't love predicting what other people will do. People and organisations have agency to make their own choices, so I don't know how useful second-guessing is there."
"It was long and I didn't finish it. It felt hard to parse all the info."
"I think it's just an impractical format."
Rated 0–3 (not useful to very useful). Only respondents who used a tool rated it; many left tool ratings blank. Google Docs and Hypothes.is received the strongest endorsements. Breakout rooms and NotebookLM received uniformly low scores.
n = 2–5 per tool. Several respondents did not interact with some tools.
"AI produced briefings were cool; I referred [to them] before, during, and after. I was a little worried that they might be wrong though and wasn't sure how much it was vetted…"
"Maybe I'm old school, but I don't think lots of extra tools is that helpful."
"It's a bit overwhelming to have links to so many novel tools."
Participants identified actionable next steps, open questions, and collaboration interests.
"Joel McGuire's interest in piloting calibration questions in charity M&E seems most concretely actionable."
Research priorities cited by respondents:
1. Whether the intervention itself changes scale use
2. More linearity evidence from preference-based methods in LMIC populations
"Good to connect with Matt from FP and see their interest in this. I don't think this is a priority for CG now but if we wanted to look further I know some of the people involved now (HLI, Benjamin et al group, Kaiser)."
"I'd love to explore our team (Benjamin, Cooper, Heffetz, Kimball) working with Effective Altruism organisations on researching the issues raised by this conference to help answer the practical questions Effective Altruism assessments face."
[HLI is] "now in discussion to collaborate with Dean Jamison and colleagues who are working on a Lancet report on updating the DALY to better capture non-fatal outcomes."
"I'd like grantmakers to more proactively supply new pivotal questions to the Unjournal, or refine existing ones, very roughly prioritised by what would most substantively shift funding allocation."
Selected responses to: "Is there anything else you'd like to tell a grantmaker about the value of this workshop or The Unjournal's work in general?"
"I loved getting down to brass tacks in a way anchored by the practical questions."
"I'd like grantmakers to more proactively supply new pivotal questions to the Unjournal, or refine existing ones, very roughly prioritised by what would most substantively shift funding allocation."
Aggregating the quantitative preferences and qualitative feedback, we offer the following recommendations for the CM workshop (May 8, 2026) and subsequent workshops in the series.
About the survey instrument
The post-workshop survey form used for this workshop is at /survey.html. A similar form will be linked from the CM workshop site and the PBA workshop site after each event. Results from future workshops will appear on equivalent pages there.