The 2014 Survey of Effective Altruists: Results and Analysis
By Peter Wildeford @ 2015-03-17T00:29 (+30)
It's my great pleasure to announce that, after seven months of hard work and planning fallacy, the EA Survey is finally out.
It's a long document, however, so we've put it together in an external PDF.
Introduction
In May 2014, a team from .impact and Charity Science released a survey of the effective altruist community. Until now, much of what we knew about the community was anecdotal; the survey offers data to supplement and clarify those anecdotes, with the aim of better understanding the community and how to promote EA.
In addition, it enabled a number of other valuable projects – the initial seeding of EA Profiles, the new EA Donation Registry, and the Map of EAs. It also let us put many people in touch with local groups they didn’t know about, and establish presences in over 40 new cities and countries so far.
Summary of Important Findings
- The survey was taken by 2,408 people, 1,146 (47.6%) of whom provided enough data to be considered, and 813 of whom (70.9%) considered themselves members of the EA movement and were included in the full analysis.
- The top three sources from which people in our sample first heard about EA were LessWrong, friends, and Giving What We Can. LessWrong, GiveWell, and personal contact were cited as the top three reasons people continued to get more involved in EA. (Keep in mind that the EAs in our sample might not represent all EAs overall… more on this later.)
- 66.9% of the EAs in our sample are from the United States, the United Kingdom, and Australia, but we have EAs in many countries. You can see the public location responses visualized on a map!
- The Bay Area had the most EAs in our sample, followed by London and then Oxford. New York and Washington DC have surprisingly many EAs and may have flown under the radar.
- The EAs in our sample donated over $5.23 million in total in 2013. The median 2013 donation was $450.
- 238 EAs in our sample donated 1% of their income or more, and 84 gave 10% of their income. You can see the past and planned donations that people have chosen to make public on the EA Donation Registry.
- The top three charities donated to by EAs in our sample were GiveWell's three picks for 2013 -- AMF, SCI, and GiveDirectly. MIRI was the fourth largest donation target, followed by unrestricted donations to GiveWell.
- Poverty was the most popular cause among EAs in our sample, followed by metacharity and then rationality.
- 33.1% of EAs in our sample are either vegan or vegetarian.
- 34.1% of EAs in our sample who indicated a career indicated that they were aiming to earn to give.
The Full Document
You can read the rest at the linked PDF!
A Note on Methodology
One concern worth putting in the forefront is that we used a convenience sample: we tried to reach as many EAs as we could in the places we knew to find them. But we didn't get everyone.
It’s easy to survey, say, all Americans in a reliable way, because we know where Americans live and we know how to send surveys to a random sample of them. Sure, there may be difficulties with subpopulations who are too busy or subpopulations who don’t have landlines (though surveys now call cell phones).
Contrast this with trying to survey effective altruists. It’s hard to know who is an EA without asking them first, but we can’t exactly send surveys to random people all across the world and hope for the best. Instead, we have to do our best to figure out where EAs can be found, and try to get the survey to them.
We did our best, but some groups may have been oversampled (a higher share of survey respondents from that group than in the true population of all EAs) or undersampled (too few respondents from that subpopulation to be representative). This is a limitation that we can’t fully resolve, though we’ll strive to improve next year. The methodological appendix at the bottom of the PDF contains much more than you’d ever want on this limitation and on why we think our survey results are still useful.
In sum, this is probably the most exhaustive study of the effective altruism movement in existence. It certainly exhausted us!
I'm really excited about the results and look forward to how they will be able to inform our movement.
undefined @ 2015-03-17T16:10 (+13)
Thank you for doing this survey and analysis. I regret that the feedback from me was primarily critical, and that this reply will follow in a similar vein. But I don’t believe the data from this survey is interpretable in most cases, and I think that the main value of this work is as a cautionary example.
A biased analogy
Suppose you wanted to survey the population of Christians at Oxford: maybe you wanted to know their demographics, the mix of denominations, their beliefs on ‘hot button’ bioethical topics, and things like that.
Suppose you did it by going around the local churches and asking the priests to spread the word to their congregants. The local Catholic church is very excited, and the priest promises to mention it at the end of his sermon; you can’t get through to the Anglican vicar, but the secretary promises she’ll mention it in the next newsletter; the evangelical pastor politely declines.
You get the results, and you find that Christians in Oxford are overwhelmingly Catholic, that they are primarily White and Hispanic, that they tend conservative on most bioethical issues, and that they are particularly opposed to abortion and many forms of contraception.
Surveys and Sampling
Of course, you shouldn’t conclude any of that, because this sort of survey is shot through with sampling bias. You’d expect Catholics to be far more likely to respond to the survey than evangelicals, so instead of a balanced picture of the ‘Christians in Oxford’ population, you get a picture of ‘primarily Catholics in Oxford, with some others’ – and predictably the ethnicity data and the bioethical beliefs are skewed.
I hope EA is non-denominational (or, failing that, ecumenical), but there is a substructure to the EA population – folks who hang around LessWrong tend to differ from those who hang around Giving What We Can, for example. Further, they likely differ in ways the survey is interested in: their gender, their giving, what causes they support, and so on. To survey ‘The Effective Altruism Movement’, the EAs who cluster in each group need to be represented proportionately (ditto all the other subgroups).
The original plan (as I understand it) was to obviate the sampling concerns by sampling the entire population. This was highly over-confident (when has a voluntary survey captured 90%+ of a target population?), and the consequences of its failure to become a de facto ‘EA census’ were significant. The blanket advertising of the survey was taken up by some sources more than others: LessWrong put it on their main page, whilst Giving What We Can didn’t email it around, for example. Analogous to the Catholics and the evangelicals, you would anticipate LWers being significantly over-sampled versus folks in GWWC (or, indeed, versus many other groups, as I’d guess LW’s ‘reach’ to its membership via its main page is much better than most groups’). Consequently, results like the proportion of EAs who care about AI/x-risk, where most EAs live, or what got them involved in EA, you would predict to be slanted towards what LWers care about, where LWers live (the Bay Area), and how LWers got involved in EA (LW!).
If the subgroups didn’t differ, we could breathe a sigh of relief. Alas, not so: the subgroups identified by URL differ significantly across a variety of demographic measures, and their absolute size (often 10-20%) makes the difference practically as well as statistically significant – I’d guess that if you compared ‘where you heard about EA’ against URL, you’d see an even bigger difference. This may understate the case: if one moved from three groups (LW, EA FB, contacts) to two (LW, non-LW), one might see more differences, and the missing-variable issues and smaller subgroup sizes mean the point estimates for (e.g.) what proportion of LWers care about x-risk are not that reliable.
Convenience sampling is always dicey: unlike probabilistic sampling, error in a parameter estimate due to bias is not expected to diminish as you increase the sample size. However, the sampling strategy in this case is particularly undesirable because the likely bias runs pretty much parallel to the things you are interested in. You might hope that (for example) the population of the EA Facebook group is not too slanted in terms of cause selection compared to the ‘real’ EA population – but you could not hope the same of a group like GWWC, LW, CFAR, etc.
What makes it particularly problematic is that it is very hard to estimate the ‘size’ of this bias: I wouldn’t be surprised if this survey oversampled LWers by only 5-10%, but I wouldn’t be that surprised if it oversampled them by a factor of 3 either. The problem is that any ‘surprise’ I get from the survey mostly goes to adjusting my estimate of how biased it is. Suppose I think ‘EA’ is 50% male and I expect the survey to overestimate the percentage male by 15 points. Suppose the survey says EA is 90% male. I am going to be much more uncertain about the degree of over-representation than I am about the ‘true EA male fraction’, so the update will be to something like 52% male, with the survey overestimating by 28 points. To the extent I am not an ideal epistemic agent, feeding me difficult-to-interpret data might make my estimates worse, not better.
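The updating argument above can be sketched with a simple conjugate-normal model: the observed survey fraction is the sum of the true fraction and an uncertain bias, and each component absorbs surprise in proportion to its prior variance. The prior widths below are illustrative assumptions, not numbers from the comment.

```python
# Sketch, assuming observed fraction = true fraction + bias, both normal.
# Priors (illustrative): true male fraction p ~ N(0.50, 0.05^2);
# survey bias b ~ N(0.15, 0.15^2). Observed survey fraction s = 0.90.

def posterior_means(prior_p, var_p, prior_b, var_b, observed):
    residual = observed - (prior_p + prior_b)   # the 'surprise'
    total_var = var_p + var_b
    # Each component moves in proportion to its share of the prior variance.
    post_p = prior_p + (var_p / total_var) * residual
    post_b = prior_b + (var_b / total_var) * residual
    return post_p, post_b

post_p, post_b = posterior_means(0.50, 0.05**2, 0.15, 0.15**2, 0.90)
# With the much wider bias prior, most of the 25-point surprise is attributed
# to bias: post_p comes out near 0.52, post_b near 0.38.
```

Because the bias prior is far wider than the prior on the true fraction, almost all of the surprise lands on the bias estimate, which is exactly the point: a surprising result from a convenience sample mostly updates your view of the sample, not of the population.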
To find fault is easy; to plan well, difficult
Science rewards caution and planning; many problems found in analysis could only have been fixed in design, and post-hoc cleaning of data is seldom feasible and seldomer still easy. Further planning could have made the results more interpretable. Survey design has a vocabulary for this – ‘population definition’, ‘sampling frame’, and so on – and more careful discussion of what the target population was and how it was to be reached could have flagged the sampling-bias worry sooner, likewise how likely a ‘saturation’ strategy was to succeed. As it was, most of the discussion seemed focused on grabbing as many people as possible.
Similarly, ‘baking in’ the intended analysis plan with the survey itself would have helped to make sure the data could be analysed in the manner intended (my understanding – correct me if I’m wrong! – is that the planning of exactly what analysis would be done happened after the survey was in the wild). In view of the sampling worries, the analysis was planned to avoid aggregate measures sensitive to sampling bias, and instead to explore relationships between groups via regression (e.g. what factors predict the amount given to charity). However, my understanding is that this pre-registered plan had to be abandoned because the data was not amenable. Losing the pre-registered plan for a new one which shares no common elements is regrettable (especially as the new results are very vulnerable to sampling bias), and a bit of a red flag.
On getting better data, and on using data better
Given the above, I think the survey offers extremely unreliable data. I’m not sure I agree with the authors that it is ‘better than nothing’, or better than our intuitions – given that most of us are imperfect cognizers, it might lead us further astray from the ‘true nature’ of the EA community. I am pretty confident it is not worth the collective time and energy it has taken: it probably took a couple of hundred hours of the EA community’s time to fill in the surveys, let alone the significant work from the team on design, analysis, etc.
Although some things could not have been helped, I think many things could have, and there were better approaches ex ante:
1) It is always hard to calibrate one’s lack of knowledge about something. But googling things like ‘survey design’ and ‘sampling’ is fruitful – if nothing else, it suggests that ‘doing a survey’ is not always straightforward and easy, and puts one on guard for hidden pitfalls. This sort of screening should be particularly encouraged if one isn’t a domain expert: many things in medicine concord with common sense, but some do not; likewise statistics and analysis, and no doubt many other matters I know even less about.
2) Clever and sensible as the EA community generally is, it may not always be sufficient to ask for feedback on a survey idea and then interpret the lack of response as a tacit green light. Sometimes ‘we need expertise and will not start until we have engaged some’, although more cautious, is also better. I’d anticipate this concern growing in significance as EAs tackle things further afield from their background and training.
3) You did get a relative domain expert raising the sampling concerns within a few hours of going live. Laudable though it was that you were responsive to this criticism and (for example) tracked URL data to get a better handle on sampling concerns, invited your critics to review prior drafts and analysis, and mentioned the methodological concerns prominently, it took a little too long to get there. There also seemed a fair amount of over-confidence and defensiveness – not only from some members of the survey team, but from others who thought that, although they hadn’t considered X before and didn’t know a huge amount about it, on the basis of summary reflection X wasn’t such a big deal. Calling a pause very early may have been feasible, and may have salvaged the survey from the problems above.
This all comes across as disheartening. I was disheartened too: effective altruism intends to put a strong emphasis on being quantitative, getting robust data, and so forth. Yet when we try to practice what we preach, our efforts leave much to be desired (this survey is not the only – or the worst – example). In the same way good outcomes are not guaranteed by good intentions, good information is not guaranteed by good will and hard work. In some ways we are trailblazers in looking hard at the first problem, but for the second we have the benefit of the bitter experience of the scientists and statisticians who have gone before us. Let us avoid recapitulating their mistakes.
undefined @ 2015-03-17T20:30 (+3)
Thanks for sharing such detailed thoughts on this Greg. It is so useful to have people with significant domain expertise in the community who take the time to carefully explain their concerns.
undefined @ 2015-03-18T01:06 (+11)
It's worth noting there was also significant domain expertise on the survey team.
undefined @ 2015-03-21T19:33 (+2)
Why isn't the survey at least useful count data? It allows me to considerably sharpen my lower bounds on things like total donations and the number of Less Wrong EAs.
I think count data is the much more useful kind to take away even ignoring sampling-bias issues, because the data in the survey is over a year old – i.e. even if it were a representative snapshot of EA in early 2014, that snapshot would be of limited use, whereas most counts can safely be assumed to be going up.
undefined @ 2015-03-18T00:48 (+2)
Very thoughtful post.
Are there any types of analysis you think could be usefully performed on the data?
undefined @ 2015-03-22T01:55 (+1)
One could compare between clusters (or, indeed, see where there are clusters), and these sorts of analyses would be more robust to sampling problems: even if LWers are oversampled compared to animal-rights people, one can still see how they differ. Similarly, factor analysis, PCA, etc. could be useful for seeing whether certain things trend together, especially where folks could pick multiple options.
Given that a regression-style analysis was abandoned, I assume actually performing this sort of work on the data is much easier said than done. If I ever get some spare time I might look at it myself, but I have quite a lot of other things to do...
undefined @ 2015-03-19T00:52 (+1)
What makes it particularly problematic is that it is very hard to estimate the ‘size’ of this bias
One approach would be to identify a representative sample of the EA population and circulate among folks in that sample a short survey with a few questions randomly sampled from the original survey. By measuring response discrepancies between surveys (beyond what one would expect if both surveys were representative), one could estimate the size of the sampling bias in the original survey.
ETA: I now see that a proposal along these lines is discussed in the subsection 'Comparison of the EA Facebook Group to a Random Sample' of the Appendix. In a follow-up study, the authors of the survey randomly sampled members of the EA Facebook group and compared their responses to those of members of that group in the original survey. However, if one regards the EA Facebook group as a representative sample of the EA population (which seems reasonable to me), one could also compare the responses in the follow-up survey to all responses in the original survey. Although the authors of the survey don't make this comparison, it could be made easily using the data already collected (though given the small sample size, practically significant differences may not turn out to be statistically significant).
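The comparison proposed in this comment can be sketched as follows, for a single yes/no question asked of both the convenience sample and the random sample. All counts are made up for illustration; the point is that the bias estimate is just the difference in proportions, judged against its sampling noise.

```python
# Sketch of the proposed bias estimate (hypothetical counts, not survey data).
def bias_estimate(conv_yes, conv_n, rand_yes, rand_n):
    p_conv = conv_yes / conv_n   # proportion answering 'yes' in the big convenience sample
    p_rand = rand_yes / rand_n   # proportion answering 'yes' in the small random sample
    bias = p_conv - p_rand       # estimated sampling bias on this question
    # Standard error of the difference, to judge whether the discrepancy
    # exceeds ordinary sampling noise.
    se = (p_conv * (1 - p_conv) / conv_n + p_rand * (1 - p_rand) / rand_n) ** 0.5
    return bias, se

# e.g. 600/800 'yes' in the original survey vs 30/60 in the random follow-up
bias, se = bias_estimate(600, 800, 30, 60)
```

As the parenthetical in the comment anticipates, the standard error is dominated by the small random sample, so only fairly large biases would be statistically detectable this way.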
David_Moss @ 2015-03-19T09:27 (+5)
I think it's right to say that the survey was premised on the idea that there is no way to know the true nature of the EA population and no known-to-be-representative sampling frame. If there were such a sampling frame or a known-to-be-representative population, we'd definitely have used that. Beforehand, and a little less so now, I would have strongly expected the EA Facebook group to not be representative. For that reason I think randomly sampling the EA FB group is largely uninformative- and I think that this is now Greg's view too, though I could be wrong.
undefined @ 2015-03-17T16:28 (+11)
Thanks for running the survey, writing it up, and posting the data. I think this is chiefly valuable for giving people an approximate overview of what we know about the movement, so it's great to have the summary document which does that.
I would have preferred fewer attempts to look for statistical significance, as I'm not sure they ever helped much and think they have led you to at least one misleading conclusion. In particular:
Reading “The Four Focus Areas of Effective Altruism”, one would expect a roughly even split between (1) poverty, (2) metacharity, (3) far future / x-risk / AI, and (4) nonhuman animals. Above, instead of equal splits, poverty emerges as a clear leader [footnote: statistically significant with a t-test, p < 0.0001]
On the contrary, I think the main message from the data is that in the sample collected, they are roughly evenly split. The biggest of the four beats the smallest by less than a factor of two -- this is a relatively small difference when there are no mechanisms I can see which should equalise their size (I would not have been shocked if you'd found an order of magnitude difference between some two of them).
Doing a test here for statistical significance is basically checking the hypothesis that survey participants were drawn from a distribution with exactly equal proportions supporting the four causes. But that's obviously false -- we don't need a big survey to tell us that. The test does tell us that poverty is the biggest (where we might not have been confident which was), but statistical significance is misleading as to what that means -- the raw ratios are more informative.
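A minimal illustration of the point above: with a sample this size, a chi-square goodness-of-fit test rejects an exact four-way tie even for quite modest imbalances. The counts below are hypothetical, chosen only to roughly match the "less than a factor of two" spread described in the comment.

```python
# Hypothetical supporter counts for four causes (largest/smallest < 2).
counts = [580, 500, 450, 350]
n = sum(counts)
expected = n / len(counts)  # the null hypothesis: exactly equal proportions

# Pearson chi-square goodness-of-fit statistic against the equal split.
chi2 = sum((c - expected) ** 2 / expected for c in counts)

# The critical value for df=3 at p=0.001 is about 16.27, so even this modest
# imbalance is "highly significant". The test merely rejects an exact
# four-way tie, which nobody believed anyway; the raw ratios carry the
# real information.
```

This is why rejecting equal proportions at p < 0.0001 says little beyond "the sample is large": the interesting question is the size of the differences, not whether they are exactly zero.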
Jacy @ 2015-03-17T19:25 (+2)
Thanks for the feedback. I agree that particular test/conclusion was unnecessary/misleading. I think we'll be more careful to avoid tests like that in future survey analyses :)
undefined @ 2015-03-17T22:50 (+3)
It's hard to say. Others have told me that they greatly preferred backing up these kinds of statements with statistical testing. I guess I can't make everyone happy. :)
undefined @ 2015-03-18T09:16 (+1)
OK, I guess I inferred the causality as being that you did the test and then wrote the statement. If you were going to use the same language anyway, I agree that the test doesn't hurt -- but I think the statement might have been better left out or weakened.
undefined @ 2015-03-19T00:29 (+1)
I agree with the spirit of this criticism, though it seems that the problem is not significance testing as such, but a failure to define the null hypothesis adequately.
undefined @ 2015-03-20T01:51 (+8)
Thank you to the survey team for completing what is an easy-to-underestimate volume of work. Thank you also to the many who completed this survey, helping us to both understand different EA communities better and to improve this process of learning about ourselves as a wider group in future years.
I have designed and analysed several consumer surveys professionally as part of my job as a strategy consultant.
There is already a discussion of sample bias so I will leave those issues alone in this post and focus on three simple suggestions to make the process easier and more reliable for when this valuable exercise is repeated next year.
Firstly, we should use commercial software to operate the survey rather than trying to build something ourselves. These are both less effort and more reliable. For example, SurveyMonkey could have done everything this survey does for about £300. I'm happy to pay that myself next year to avoid some of the data quality issues.
Secondly, we should use live data validation to improve data collection, data integrity, and ease of analysis. SurveyMonkey or other tools can help respondents fill in their age in the right box, and could refuse to believe a 7-year-old, suggesting that they have another go at entering their age. It could also be valuable to do some respondent validation by asking people to answer a question with a given answer, removing any random clickers or poor-quality respondents who are speeding through (e.g. "Please enter the number '2' in letters into the textbox to prove you are not a robot. For example, the number '1' in letters is 'one'").
Thirdly, we should do more testing by trying out draft versions with respondents who have not written the survey. It is very, very hard to estimate how people are going to read a particular question, or which options should be included in multiple choice questions. Within my firm, it is typical for an entire project team to run through a survey several times before sending it out to the public. Part of the value here is that most team members were not closely involved in writing the survey, and so won't necessarily be reading it in the way the author expected them to read it. I would suggest you want to try any version of the survey out with a large group (at least twenty) of different people who might answer it, to catch the interpretations of questions which different groups might have. Does the EA affiliation filter work as hoped for? Are there important charities which we should include in the prompt list? It does not seem unreasonable to pilot and redraft a few times with a diverse group of willing volunteers before releasing generally.
The analysis throws up several interesting conclusions, and I have learned a lot by reading through it. The main shocks are: the relatively low levels of donations in $ terms by many self-identified EAs, the relatively low proportion of EAs identifying chapters/local groups as a reason for joining or identifying with the community and, (for me) the encouragingly high proportion of respondents who are vegetarian or vegan.
I'm going to set aside some time in May to go through the data in a 'consulting' sort of way to see if that approach throws up anything interesting or different to others and will circulate with the survey team before publishing here.
David_Moss @ 2015-03-20T07:27 (+2)
Thanks Chris, all very useful info.
(On the zero-donors question: I've written about this elsewhere in the comments – a sizeable majority of these respondents were full-time students, or on low incomes, or had made significant past donations, or had pledged a portion (often a large one) of their future income. Once all these people are taken into account, the number of zero donors was pretty low. There was a similar, if not stronger, trend for people donating <$500.)
undefined @ 2015-03-20T07:18 (+1)
Thanks Chris, this is useful feedback and we'll go through it. For example, I think trying out draft versions would be valuable. I may ask you some more questions, e.g. about SurveyMonkey's features.
undefined @ 2015-03-21T19:37 (+1)
Happy to answer these any time, and happy to help out next year (ideally in low time commitment ways, given other constraints).
undefined @ 2015-03-17T10:16 (+5)
Thanks for this, and thanks for putting the full data on github. I'll have a sift through it tonight and see how far I get towards processing it all (perhaps I'll decide it's too messy and I'll just be grateful for the results in the report!).
I have one specific comment so far: on page 12 of the PDF you have rationality as the third-highest-ranking cause. This was surprisingly high to me. The table in imdata.csv has it as "Improving rationality or science", which is grouping together two very different things. (I am strongly in favour of improving science, such as with open data, a culture of sharing lab secrets and code, etc.; I'm pretty indifferent to CFAR-style rationality.)
undefined @ 2015-03-17T16:07 (+1)
Good point. Yes, this is my bad, I forgot that part. Definitely a mistake.
Definitely will break that apart next time.
undefined @ 2015-03-17T00:54 (+5)
"238 EAs in our sample donated 1% of their income or more, and 84 EAs in our sample give 10% of their income."
I was surprised by this. In particular, 22% (127/588) of people identifying as EAs do not donate. (Of course they may have good reasons for not donating, e.g. if they are employed by an EA charity or if they are currently investing in order to give more in the future). Do we know why so many people identify as EAs but do not presently donate?
undefined @ 2015-03-17T01:24 (+7)
Probably because the average age is so low (~25) - lots of students and people just starting out their careers.
undefined @ 2015-03-17T01:53 (+6)
Because self-identifying as EA is a lot easier than being self-sacrificing and donating. I saw the numbers with students removed and they did not improve as much as you would think.
David_Moss @ 2015-03-17T08:40 (+3)
The raw data seems to show that a lot of people who have donated zero have nevertheless pledged to donate a significant amount (e.g. everything above living expenses etc.).
undefined @ 2015-03-17T02:09 (+3)
The survey question only asked if people thought they could be described 'however loosely' as an effective altruist. I suspect this question did not perform as intended - we know it included people who said they had not heard of the term.
undefined @ 2015-03-17T20:01 (+1)
we know it included people who said they had not heard of the term.
People will say anything on surveys. Many respondents go through clicking randomly. You can write a question that says, "Please answer C," and >10% of respondents will still click something other than C.
undefined @ 2015-03-17T20:41 (+2)
This year it might be worth including a mandatory question saying something like "Check C to promise not to go through clicking randomly", both as a test and a reminder.
undefined @ 2015-03-20T01:09 (+2)
I regularly do this when designing consumer surveys as part of my professional work - the concern in those instances is that respondents are mainly completing the survey for a small monetary reward and so are incentivised to click through as fast as possible. To help my own survey-development skills, I participate in several online panels and can confirm that, whilst not exactly standard practice, a non-negligible proportion of online consumer surveys include questions like this to screen out respondents who are not paying attention.
This is less of a concern for the EA survey, but it is almost costless to include such a screening question, so it seems like an easy way to help validate any resulting analysis or conclusions.
undefined @ 2015-03-23T08:19 (+4)
I've made a bar chart plotter thing with the survey data: link.
agent18 @ 2020-02-15T11:55 (+3)
PDF link doesn't exist anymore. @Peter_Hurford
Peter_Hurford @ 2020-02-15T17:18 (+3)
Thanks. We're working on relocating it and will fix the link when we do.
UPDATE: fixed
Peter_Hurford @ 2020-02-15T21:14 (+2)
The link is fixed now.
undefined @ 2015-03-17T10:41 (+3)
The first 17 entries in imdata.csv have some mixed-up columns, starting (at latest) from
Have you volunteered or worked for any of the following organisations? [Machine Intelligence Research Institute]
until (at least)
Over 2013, which charities did you donate to? [Against Malaria Foundation].
Some of this I can work out (volunteering at "6-10 friends" should obviously be in the friends column), but the blank cells under the AMF donations have me puzzled.
undefined @ 2015-03-17T16:12 (+1)
Yes, it looks like the first 17 entries are corrupted for some reason. I'll look into it.
Peter Wildeford @ 2022-06-04T03:36 (+2)
The previous link to the survey results died, so I edited to update the link.
undefined @ 2015-03-24T14:41 (+1)
Do you have any sense of the extent to which people who put down 'friend' as how they got into effective altruism might have learned about EA via their friend taking them to a meet-up group, or might have ended up getting properly committed because they made friends with people through a meet-up group? I was just thinking about how I would classify myself as having got into EA through friends, but that you might think it was more accurate to describe it as a meet-up group in Oxford getting me involved.
undefined @ 2015-03-24T15:50 (+1)
So, there were 2 questions, one speaking to how people "learned about EA" and one to how they "ended up getting properly committed":
"How did you first hear about 'Effective Altruism'?" (single response)
"Which factors were important in 'getting you into' Effective Altruism, or altering your actions in its direction?" (multiple response)
The second question covers the intersection you describe; I think you can see the overlap by selecting it as both the primary and secondary categories at http://pappubahry.com/misc/ea_survey/ . Beyond that, we don't have any data (except comments at the end of the survey). I don't know of course, but I'd guess that in your case talking to friends was important in getting you into it and attending local group events regularly maybe wasn't - even if those friends attended local group events.
undefined @ 2015-03-24T16:32 (+1)
Thanks, that's interesting. The reason I was thinking that it might be more accurately attributed to a local group was that it seems unlikely I would have really formed friendships with any of the people around if they hadn't been setting up Giving What We Can.
undefined @ 2015-03-17T09:45 (+1)
I'm trying to make sense of all the missing data. It seems very strange to have such a high non response rate (nearly 20%) to simple demographic questions such as gender and student status, and this suggests a problem with the data.
You say here that a 'survey response' was generated each time somebody opened the survey, even if they answered no questions. Does that mean there wasn't a 'complete and submit' step? Was every partially completed survey considered a separate 'person'? If so, was there any way to determine if individuals were opening multiple times?
If each (however incomplete) opening of the survey was counted as an entry (i.e. whatever data was entered has been counted as a person), that would suggest that individuals making several attempts to complete the survey are being counted multiple times. That would be supported if non-response rates are generally higher later in the survey, which I can't tell from this report.
If multiple attempts can't be excluded, these numbers are unlikely to be valid. Missing data is a difficult problem, but my first thought is that a safer approach would be to include only complete responses.
undefined @ 2015-03-17T16:08 (+7)
When ACE and HRC talked to statisticians and survey researchers as part of developing our Survey Guidelines for animal charities beginning to evaluate their own work, they consistently said that demographic questions should go at the end of the survey because they have high non-response rates and some people don't proceed past questions they aren't answering. So while it's intuitively surprising that people don't answer these simple questions, it's not obviously different from what (at least some) experts would expect. I don't know, however, whether 20% is an especially high non-response rate even taking that into account.
undefined @ 2015-03-17T20:49 (+2)
That's interesting to know, thank you for sharing it! Looking at this study (comparing mail and web surveys), they cite non-response rates for demographic items of 2-5%. However, I don't know how similar the target population here is to the 'general population' in these behaviours. http://surveypractice.org/index.php/SurveyPractice/article/view/47/html
undefined @ 2015-03-17T16:35 (+2)
Yes, these questions were right at the end. You can see the order of the questions in the spreadsheet that Peter linked to - they correspond to the order of the columns.
undefined @ 2015-03-17T20:55 (+1)
Thanks Tom. I'm limited in my spreadsheet wrangling at the moment, I'm afraid, but looking at the non-response rates cited in the document above and comparing them to the order of questions, non-responses seem to be low (30-50) until the question on income and donation specifics, after which they are much higher (150-220). A question that requires financial specifics seems likely to make someone stop and seek documents, so could well cause them to abandon the survey at least temporarily. If somebody abandoned the survey at that point, would the information they had entered so far be submitted? Or would they have to get to the end and click submit for any of their data to be included?
undefined @ 2015-03-17T21:27 (+2)
That's a good point, could well have happened, and is something we should consider changing.
The questions were split into a few pages, and people's answers got saved when they clicked the 'Continue' button at the bottom of each page - so if they only submitted 2 pages, only those pages would be saved. We searched for retakes and saw a small number which we deleted.
undefined @ 2015-03-17T02:38 (+1)
Thanks for the survey! An interesting read. One question, two comments:
1 How do I read the graph on p10?
2
"Reading “The Four Focus Areas of Effective Altruism”, one would expect a roughly even split between (1) poverty, (2) metacharity, (3) far future / x-risk / AI, and (4) nonhuman animals. Above, instead of equal splits, poverty emerges as a clear leader [footnote: statistically significant with a t-test, p < 0.0001]." Though, maybe this isn’t fair. If we redefine metacharity to also include rationality and cause prioritization, it takes the top slot (with 616 people advocating for at least one of the three). And if you take far future, x-risk, and AI as one cluster, it comes in third with 441 people advocating for at least one of the three. (Poverty, at 579, claims second place.)
It cannot be correct to say that poverty clearly leads the four, because in the next sentence you say that metacharity was arguably more popular than it.
3
I don’t know of any public information on Giving What We Can members beyond the membership count and donation totals. They recently added a donation breakdown to their front page.
Thanks for all the work put into this, and it'll be good to see what thinking unfolds from it.