The Unjournal · Pivotal Questions Initiative

Post-Workshop Survey Results

Wellbeing Workshop · March 16, 2026 · 9 responses

Nine participants completed the post-workshop survey (some partially). This page reports the aggregate results, selected quotes, and our synthesis of what they imply for future workshops. Two respondents requested that their contributions be shared anonymously and are quoted without attribution.

This page is intended both as a transparency report for grantmakers and as a design input for the upcoming CM workshop and future workshops in the Pivotal Questions series. The survey form is also linked from CM and PBA workshop sites for reference.

- Mean percentile rating vs. other events attended: 71
- Survey responses: 9 (of ~27 participants)
- Median percentile rating: 75

1. Overall Workshop Value

Participants rated the workshop relative to other academic events they had attended. Higher = better. Ratings ranged from 32 to 93, with a mean of 71 and median of 75 — comfortably above the median for this group's prior experience.

[Chart: individual ratings, percentile vs. other events attended; scale: 0 = least valuable ever, 50 = median, 100 = best ever attended]
Note on attribution: most respondents consented to full public sharing of their comments. Two requested anonymisation of written contributions; their quotes appear below without a name.

What drove the ratings

"The material was much more rigorous and useful than other settings, and the conversation higher-quality, but that was slightly hamstrung in the end by a sort of failure to arrive at key decision-relevant considerations."

Anonymous participant — 60th percentile

"I was familiar with some of the publicly available versions of the arguments for resolving / reconciling funders' frameworks; the value was the negative result of seeing them being tried and mostly failing. The rating is as much about my view of the decision value to major decision-makers of other workshops / conferences I've attended (which don't even try) as it is about this one."

Anonymous participant — 85th percentile

"I loved having the focal example and the decision-focus it was linked to. That made clear what kinds of comments were helpful."

Miles Kimball — 91st percentile

"This workshop felt really good and collaborative. It was really useful for surfacing what people are thinking about this issue and what's needed next. It's inspired us, in HLI, to do some work on wellbeing weights that we now see is really useful, but had not been a priority."

Michael Plant — 93rd percentile

"I think it was too focused on technicalities of wellbeing measurement for the needs of my current role. It did help me understand what our real cruxes are, but I think it wasn't that focused on them."

Peter Hickman — 32nd percentile

"At this point in my career and life (with young children), I attend very few workshops/conferences and aim to attend only those of highest quality. Among that select group, this workshop ranks well. For me, it was valuable to hear the priorities of the effective altruism funders."

Dan Benjamin — 60th percentile

"It was a revelation to see that what we are working on is just as relevant for effective altruism judgments."

Miles Kimball

2. Session Ratings

Participants rated sessions they attended on a 0–3 scale (0 = little/no value; 3 = high value). Averages below are computed only over respondents who rated that session. Blank = didn't attend or didn't rate.

- Practitioner panel: 2.3/3
- Evaluator responses & discussion: 2.1/3
- Stakeholder problem statement: 2.0/3
- Benjamin et al. presentation: 2.0/3
- WELLBY reliability discussion: 1.9/3
- DALY/QALY–WELLBY conversion: 1.7/3
- Beliefs elicitation session: 1.0/3

n varies by session (range 6–8 raters). Lower averages for the later sessions partly reflect timing and partial attendance, not necessarily quality.
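For transparency, each session average is a simple mean over the respondents who rated that session, with blanks excluded. A minimal sketch of that calculation (the ratings below are hypothetical, since individual responses are not published here):

```python
def session_average(ratings):
    """Mean of 0-3 session ratings, excluding blanks (None = didn't attend or rate)."""
    rated = [r for r in ratings if r is not None]
    return round(sum(rated) / len(rated), 1) if rated else None

# Hypothetical ratings from 9 survey respondents; 6 rated, 3 left blank.
example = [3, 2, None, 2, 3, None, 2, 2, None]
print(session_average(example))  # 14/6 -> 2.3
```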

Session quotes

"Best practices for adjusting due to scale use heterogeneity [became clearer] … Useful to know that Founders Pledge has this in their moral weights."

Peter Hickman (on the WELLBY session)

"There is a mismatch between where academic contention focuses on (scale-use heterogeneity, comparability) and what would shift funding allocation most substantively (linearity, neutral point) — the latter got less airtime seemingly because research was thinner."

Anonymous participant (on the WELLBY session)

"I hadn't appreciated how much effective altruism funders are focused on saving lives and how they view quality of life as a relatively minor consideration."

Dan Benjamin (on the practitioner panel)

3. Belief Changes & New Understanding

Participants selected areas where they felt they understood more after the workshop (options: improved or strongly improved). The list shows how many of the nine respondents reported improved understanding in each area; qualitative belief changes follow below.

- Practitioner priorities & approaches: 7/9
- Academic research usefulness: 6/9
- Calibration methods (Benjamin et al.): 5/9
- WELLBY reliability: 5/9
- WELLBY potential for prioritization: 4/9
- DALY-WELLBY conversion: 4/9
- Experimenter demand & response shift: 3/9
- Neutral point problem: 3/9
- Which measures funders should prioritize: 2/9

Qualitative belief changes

"I came in thinking 10 WELLBYs = 1 DALY but realized that was probably too low and so 4–7 is a better starting point. This was a good takeaway. Useful to know that Founders Pledge has this in their moral weights. Next is that DALY disability weights vs WELLBY weights are very different and WELLBY weights are plausibly better."

Peter Hickman

"I revised upwards my assessment of the importance of measuring stated preference marginal rates of substitution among different things people care about, even relative to the measurement of levels of various aspects of well-being … the main issue we should have focused on, in my view, is how, both philosophically and technically, to get conversion factors between disparate aspects of well-being, such as mental health and physical health."

Miles Kimball

"It made me much more pessimistic about whether effective altruism funders would be interested in the kind of work my collaborators and I are doing."

Dan Benjamin

"It certainly made me develop my attitude towards WELLBY as a useable measurement, and I'll introduce the ideas discussed to the various people I know who are also involved in socioeconomic development research/practice/policy."

Anthony Rowett

"[The workshop] inspired us, in HLI, to do some work on wellbeing weights that we now see is really useful, but had not been a priority. And we're now in discussion to collaborate with Dean Jamison and colleagues who are working on a Lancet report on updating the DALY to better capture non-fatal outcomes."

Michael Plant

"I personally recalibrated downwards a bit on the value of workshops and dialogue vs new empirical work, but I'm cautious to over-update since my impression is the Unjournal plans to run more such workshops going forward."

Anonymous participant

4. Format Preferences

Respondents indicated whether they wanted more or less of specific format elements: ++ (much more), + (more), − (less), −− (much less). Each element's net score is the sum of its responses (++ = +2, + = +1, − = −1, −− = −2); with nine respondents, the theoretical maximum is +18.

- More structure — one issue at a time: +7
- More applied intervention context: +9
- Fewer topics, deeper dive: +3
- Shorter / more condensed: +3
- Follow-up series (shorter, regular): −1
- Clearer academic/practitioner divide: −2
- More Zoom-native discussion tools: −1
- Larger async component: −3
- More technical discussion: −3
- More pre-reading / preparation: −4
- In-person rather than virtual: −4
- Longer workshop (full day / multi-day): −4
- Breakouts into smaller groups: −8

Format quotes

"Something felt a bit off — I think structuring in a more sequential way with the aim to answering specific questions as a group might be better."

Anonymous participant

"What worked was the modular drop-in workshop structure, seeing the publicly available versions of the arguments for resolving / reconciling funder frameworks being tried and failed, and letting productive discussions overrun schedule. What didn't work, at least to my satisfaction, was discussion not focusing on what I thought would shift funding allocation most substantively (linearity, neutral point) and instead on topics where a lot of academic work had been done."

Anonymous participant

"Worked well to have a mix of practitioners and academics, focused on well-defined problems."

Participant (attended partial session)

"Honestly, I think David slightly overengineered the documentation and structure around the workshop. Academic workshops all work effectively the same way for a reason. It would have been easier if there was just one single document that said, briefly: here's why we're here; here's who's going to speak, in what order, and what they're going to speak about."

Michael Plant

"It felt a bit disorganized. I wasn't sure what was happening when. More structure is better. More enforced breaks. Hey David! … if there was more clarity on what would be in each session I might have dropped in for just a few, but it was hard to know which they would be in advance."

Peter Hickman

"I liked the ability to make comments both in the chat and on the documents. In retrospect, I realize I didn't know where was the best place to put in comments."

Miles Kimball

5. Beliefs Elicitation Feedback

The beliefs elicitation session was rated lowest of all sessions (average 1.0/3). Several respondents offered specific suggestions for improving it.

"Pre-workshop belief elicitation might have mitigated the issue that the session was too rushed, and asking participants (especially funders) for their preferred action-informing refinements of the questions (which I thought were vague as stated) would have been better."

Anonymous participant

"I think you could have made this way simpler! … I don't love predicting what other people will do. People and organisations have agency to make their own choices, so I don't know how useful second-guessing is there."

Michael Plant

"It was long and I didn't finish it. It felt hard to parse all the info."

Peter Hickman

"I think it's just an impractical format."

Anonymous participant

6. Tools & Resources

Tools were rated 0–3 (not useful to very useful). Only respondents who used a tool rated it; many left tool ratings blank. Google Docs and Hypothes.is received the strongest endorsements; breakout rooms and NotebookLM received uniformly low scores.

- Google Doc (collaborative notes): 2.25
- Hypothes.is annotations: 2.0
- Zoom Chat: 1.8
- AI briefing reports: 1.67
- Beliefs elicitation website: 1.33
- Video/transcript (upcoming): 1.33
- Zoom AI Companion: 1.0
- NotebookLM: 0.33
- Breakout rooms: 0.0

n = 2–5 per tool. Several respondents did not interact with some tools.

"AI produced briefings were cool; I referred [to them] before, during, and after. I was a little worried that they might be wrong though and wasn't sure how much it was vetted…"

Peter Hickman

"Maybe I'm old school, but I don't think lots of extra tools is that helpful."

Michael Plant

"It's a bit overwhelming to have links to so many novel tools."

Dan Benjamin

7. Research Directions & Future Collaborations

Participants identified actionable next steps, open questions, and collaboration interests.

"Joel McGuire's interest in piloting calibration questions in charity M&E seems most concretely actionable."

Anonymous participant

Research priorities cited:
1. Whether the intervention itself changes scale use
2. More linearity evidence from preference-based methods in LMIC populations

Anonymous participant

"Good to connect with Matt from FP and see their interest in this. I don't think this is a priority for CG now but if we wanted to look further I know some of the people involved now (HLI, Benjamin et al group, Kaiser)."

Peter Hickman

"I'd love to explore our team (Benjamin, Cooper, Heffetz, Kimball) working with Effective Altruism organisations on researching the issues raised by this conference to help answer the practical questions Effective Altruism assessments face."

Miles Kimball

[HLI is] "now in discussion to collaborate with Dean Jamison and colleagues who are working on a Lancet report on updating the DALY to better capture non-fatal outcomes."

Michael Plant

"I'd like grantmakers to more proactively supply new pivotal questions to the Unjournal, or refine existing ones, very roughly prioritised by what would most substantively shift funding allocation."

Anonymous participant

8. Notes for Grantmakers

Selected responses to: "Is there anything else you'd like to tell a grantmaker about the value of this workshop or The Unjournal's work in general?"

"I loved getting down to brass tacks in a way anchored by the practical questions."

Miles Kimball

"I'd like grantmakers to more proactively supply new pivotal questions to the Unjournal, or refine existing ones, very roughly prioritised by what would most substantively shift funding allocation."

Anonymous participant

9. Synthesis: Implications for Future Workshops

Aggregating the quantitative preferences and qualitative feedback, the following recommendations apply to the CM workshop (May 8, 2026) and subsequent workshops in the series.

On the broader model: Several respondents noted a core tension: workshops and dialogue are valuable for connecting people and surfacing cruxes, but they are not a substitute for new empirical work. The highest-impact next step for this agenda — per multiple participants — is more linearity/neutral-point evidence from preference-based methods in LMIC contexts, not more workshops on existing evidence.

About the survey instrument

The post-workshop survey form used for this workshop is at /survey.html. A similar form will be linked from the CM workshop site and the PBA workshop site after each event. Results from future workshops will appear on equivalent pages there.