AMA: Ed Mathieu, Head of Data & Research at Our World in Data
By EdMathieu @ 2023-06-16T12:37 (+86)
Hi, EAs! I'm Ed Mathieu, manager of a team of data scientists and researchers at Our World in Data (OWID), an online publication founded by Max Roser and based out of the University of Oxford.
We aim to make the data and research on the world's largest problems accessible and understandable. You can learn more about our mission on our site.
You’re welcome to ask me anything! I’ll start answering questions on Friday, 23 June.
- Feel free to ask anything you may want to know about our mission, work, articles, charts, or more meta-aspects like our team structure, the history of OWID, etc.
- Please post your questions as comments on this post. The earlier you share your questions, the higher the chances they'll reach the top!
- Please upvote questions you'd most like answered.
- I'll answer questions on Friday, 23 June. Questions posted after that are less likely to get answers.
- (This is an “AMA” — you can explore others here.)
I joined OWID in 2020 and spent the first couple of years leading our work on the COVID-19 pandemic. Since then, my role has expanded to coordinating all the research & data work on our site.
I previously worked as a data scientist at the University of Oxford in the departments of Population Health and Primary Care Health Sciences; and as a data science consultant in the private sector.
For a (3.5-hour!) overview of my background, and the work of our team at OWID, you can listen to my interview with Fin Moorhouse and Luca Righetti on Hear This Idea. I also gave a talk at EA Global: London 22.
Clifford @ 2023-06-16T18:31 (+10)
What report / data set that OWID has produced do you think has been most impactful in retrospect?
Lizka @ 2023-06-20T12:54 (+6)
Also relatedly, do you have a guess for what pathway most of your impact flows through?
E.g. is it stuff like "voters are more informed, which means we get better policies"? Or something more like: "Policymakers can use OWID resources to make informed decisions"? Or not policy-related: "OWID resources inform people who want to start or contribute to impactful projects, improving their prioritization of problems and the quality of their work"?
EdMathieu @ 2023-06-23T10:17 (+5)
Thanks for the question!
It depends significantly on how we measure impact, which has always been tricky. As Lizka guessed below, there are multiple ways we can do this, as our impact can consist of influencing the general public (for some of our most viral pieces), "influencers" (journalists, book writers, or anyone with a significant social media presence), teachers, policymakers, etc. These can be very different paths to impact.
Some are pretty easy to measure (the general public can be roughly measured by raw pageviews). In contrast, others are much harder; influence on policymakers can be somewhat measured through mentions in things like government reports, but a lot of it happens behind closed doors (thankfully, we sometimes hear about this too, e.g., someone on our team getting a text message from a friend who works in government, saying our charts were shown in a critical meeting).
If we measure impact purely in terms of media mentions, paper citations, significant re-use, views of our charts, etc., nothing comes even close to our work on COVID-19. Between our own site and the many national media outlets that used our underlying data on theirs, the number of eyeballs on this data was enormous, and the rest of our content isn't within the same order of magnitude.
A second way to answer the question would be to examine which of our articles or charts keep popping up in books, learning materials, online conversations, etc. In that regard, I think that Hannah Ritchie's articles "You want to reduce the carbon footprint of your food? Focus on what you eat, not whether your food is local" and "What are the safest and cleanest sources of energy?" are probably the articles that have the highest cumulative impact over time.
If we zoom out, a third way of measuring impact is to ask which of our pieces seem to have shaped other people's worldviews. In that regard, Max Roser's broader essays such as "The world is awful. The world is much better. The world can be much better." and "The short history of global living conditions and why it matters that we know it" are strong foundations of our content, and a significant fraction of the people who read us have probably come across them.
But overall, it's hard to pinpoint precisely what has had the most impact. We have a long tail of 3,500 charts, so if one was ever shown to a head of state who made a different decision because of it, that could count as some of our highest direct impact ever – but we might not even be aware of that!
Sharang Phadke @ 2023-06-17T05:33 (+4)
Relatedly, how does OWID prioritize what to focus on next in a way that prioritizes impactful research?
OllieBase @ 2023-06-20T08:48 (+9)
[More a question for you, Ed, than OWID, and based on a bit of information I already know that I think other people might be interested in]
- Which other communities have you engaged with outside of EA?
- What lessons should we take from those communities?
- What is better or worse about EA compared to those communities?
EdMathieu @ 2023-06-24T16:46 (+5)
Hey Ollie – thanks for the question!
I've engaged with a few activist and political communities in the past, primarily around environmental issues and Green politics. My overall take is that I would find it hard to be part of those communities today, compared to the ones that interest me now. From what I remember, epistemic practices tended to be very bad, with lots of motivated reasoning, cherry-picking, various biases, etc. It doesn't necessarily mean the people I met were wrong, but how they made up their minds about issues seems very flawed in retrospect. Compared to this, the epistemic quality of Effective Altruism appears to be its main competitive advantage over other communities I've encountered. Many people in the community are genuinely cause-neutral and truly adopt (or at least try to adopt) a scout mindset.
If anything seems better about these communities, it's the fact that their direct engagement with politics, the media, etc., makes them much more aware of the importance of public relations and not being perceived as bad actors. My perception – reinforced by everything that happened in EA in late 2022 – is that many EAs see public relations as unnecessary (sometimes even bad, when "PR" is used as a derogatory term). I've met quite a few people who seem to think that the way non-EA people perceive EA doesn't matter at all, as long as EA people are saying things that are evidence-based and smart. I believe this is deeply wrong; a community of smart and "very-right" people won't have much impact if it has such a bad image that no one dares involve it in public discussions.
Interestingly, in the case of EA, this dismissive attitude toward image sometimes applies to individuals as well. Both online and at EAG, I've met more people than I expected who seemed to disregard the benefits of social norms, politeness, kindness, etc., and who behaved in a way that seemed to say "I'm too smart to be slowed down by these stupid things". (To be clear, I don't think the majority of EAs are like this at all; but the prevalence of this behavior seems much higher than in the general population.)
Another thing that comes to mind, valued by people outside EA but shrugged off by people inside EA, is institutional stability. From having worked or collaborated with quite a few different companies, political parties, research organizations, NGOs, etc., I think there is genuine value in building institutions on solid foundations. For EA organizations, this relates to many questions people have raised since the FTX debacle: who should run EA organizations? What should their boards look like? What share of board members should be EAs? What share of board members can overlap between very close EA organizations? I think many EAs have shrugged off these questions as boring, but the long-term stability of the overall EA community depends on them.
Funding runway also falls under that category: many EAs reason about funding stability as if every skilled person were happy to work at an organization that could run out of money in less than a year. Again, I don't think this is a good way of planning things out for the long term. This recent post, which described NTI as "too rich" for holding more than 1.5 years’ expenditure, is one example of this bad habit.
OllieBase @ 2023-06-26T12:04 (+2)
Thanks for responding!
vmasarik @ 2023-06-27T16:37 (+1)
Could you expand on your last point? I'm not sure I understood it properly.
many EAs reason about funding stability as if every skilled person was happy to work at an organization that could run out of money in less than a year
I would agree that having charities with long-term funding and stability is great. At the same time, I feel that if a charity is provably effective, it will keep existing even with less than a year of funding, because it shouldn't have issues with asking for more.
Therefore, if you keep the funding under a year, the charities that work will continue working, and those that are not as promising will dissolve. What would be the solution then? If you provide 3 years of funding to the effective charities, I assume nothing would change, because those charities wouldn't have issues with getting the funding anyway. If you give 3 years of funding to an ineffective charity, do they just have 3 years to waste, or do they return the money?
EdMathieu @ 2023-06-28T10:30 (+3)
I agree that things could work like this in theory, but I see two significant issues with how you describe it.
First, the process isn't as simple as "charities are created; the ones proven effective easily and regularly get money; the ineffective ones run out of money and disappear". That resembles the perfect competition model in economics: handy for reasoning about the world, but it simplifies reality to the point of hiding many complexities. In reality, many ineffective charities survive for decades, while promising ones sometimes struggle to find the funding they need. These imperfections are one of the very reasons effective altruism was first conceptualized.
Second, even if this ideal model was true, equally-skilled people still respond differently to risk. For example, in practice, there's a significant difference between being able to say to a potential hire:
- "Right now, we only have money to pay our staff for less than a year, but our charity is provably effective, so there's nothing to worry about."
- "We have 2-3 years of financial runway. Beyond that, we're confident we'll find more money, though we can't have 100% uncertainty."
It's a recurrent bias within EA to not see much difference between these two statements. EA people tend to be more tolerant of risk in their career decisions, and okay with making big bets that don't always pay off. They also tend to be relatively young and without kids.
But once an organization grows in size, impact, and ambition, it can't rely forever on risk-tolerant twenty-something EAs. It needs more experienced and senior people to join. And with more experience often come various financial commitments (e.g., mortgage, kids); that's where financial stability can make a big difference.
Vasco Grilo @ 2023-06-18T08:38 (+8)
Hi Ed,
Thanks for all your work, and doing this AMA! I liked your appearance on Hear This Idea.
Should Our World in Data discuss wild animal welfare in the context of nature conservation?
I really like Our World in Data (OWID), and often check/use its data. However, it seems to me that many articles from OWID implicitly argue that nature conservation is good. I think this may well be the case, but more nuance is needed, as it is unclear whether wild animals have good/bad lives (and the same arguably applies to non-animal beings like plants).
I believe wild animal welfare is an important area. I guess its scale is 50 million and 5 million times as large as that of humans and farmed animals, respectively.
Should Our World in Data discuss wild animal welfare in the context of nature conservation? For reference, there are no instances of animal “welfare” or “wellbeing” in the following OWID articles on biodiversity (there are more, but I did not check them):
- To protect the world’s wildlife we must improve crop yields – especially across Africa.
- Living Planet Index: what does an average decline of 69% really mean?.
- FAQs on the Living Planet Index.
- Wild mammals are making a comeback in Europe thanks to conservation efforts.
- Wild mammals have declined by 85% since the rise of humans, but there is a possible future where they flourish (only 1 instance of “human wellbeing”).
I also searched for “wild animal welfare” on OWID’s website, but only got 2 results, both for farmed “animal welfare”. Even if data about wild animal welfare is scarce, I think it would still be good to at least briefly mention it in some articles discussing wildlife.
As I commented there, I think arguing for conservation can be good:
- Under large uncertainty, it is better to keep options open.
- Although one does not know whether wild animals have good/bad lives, wiping out nature is easier than building it.
- Advocating against conservation would lead to some wiping out of nature, and make it difficult to increase the number of wild animals if they turn out to have good lives.
However, I think arguing for conservation on the basis that i) it is valuable to humans or that ii) the beings there are having a good time could also be dangerous. i) would make it harder to change nature for the sake of improving the lives of wild animals even if human lives are not improved, and ii) would make it difficult to wipe out nature (which I think might be good if wild animals turn out to have super bad lives).
In my mind, one should argue for conservation mostly on the grounds of option value, and discussing the importance of wild animal welfare, without making strong assumptions about whether the lives of wild animals are good/bad. It would be nice if OWID included something about these points in their articles.
EdMathieu @ 2023-06-23T10:37 (+5)
Thanks for the question, Vasco!
Animal welfare is an important topic that we want to cover better on OWID. The first step will be to publish more and better content on it. We plan to make significant steps toward this over the summer (stay tuned!).
However, this new content will likely focus on factory farming and related questions. I see the question of wild animal welfare as one on the edge of research, even by EA standards. In other words, more and more people are interested in it, but there's no consensus that it constitutes one of the world's largest problems. In many ways, from an outside (non-EA) perspective, it's not so different from longtermism or digital sentience: something most people have never even considered as an issue, but that could become one over the next few years or decades.
Because of that, I could imagine us one day writing about the general idea of wild animal welfare, the philosophical arguments behind it, why some researchers study it, and what the numbers are. This would allow us to introduce more people to it as an "interesting angle" to add to their worldview. This could look like Max Roser's article "The future is vast – what does this mean for our own life?".
Vasco Grilo @ 2023-06-23T16:10 (+6)
Thanks for the reply, Ed!
We plan to make significant steps toward this over the summer (stay tuned!).
Nice to know! OWID has been one of my go-to sources on factory farming. Some suggestions (just in the unlikely case you have not considered them):
- Conditions of animals (e.g. fraction of animals of each species being factory-farmed, and fraction of factory-farmed animals which are in cages).
- Success of welfare reforms (e.g. cage-free campaigns).
- Opinion polls on factory-farming.
- More animal species (e.g. shrimp and crustaceans), namely the ones covered here.
- Data about time in various types of pain from the Welfare Footprint Project.
In other words, more and more people are interested in it [wild animal welfare], but there's no consensus that it constitutes one of the world's largest problems.
With "largest", are you referring to importance, or pressingness (i.e. importance, tractability and neglectedness)? I agree it is unclear whether it is super pressing (although I would say the same for global health and development, and farmed animal welfare), but think there is consensus within EA that the scale is huge.
Because of that, I could imagine us one day writing about the general idea of wild animal welfare, the philosophical arguments behind it, why some researchers study it, and what the numbers are. This would allow us to introduce more people to it as an "interesting angle" to add to their worldview. This could look like Max Roser's article "The future is vast – what does this mean for our own life?".
That would be great! Then you could potentially link to it in your articles about conservation to introduce some nuance.
Kei @ 2023-06-18T12:40 (+7)
How do you decide what data/research to prioritize?
EdMathieu @ 2023-06-23T21:13 (+3)
Thanks for the question, Kei!
When choosing the topics we would ideally cover on OWID, we aim to be quite broad in our approach. Our tagline is that we publish "research and data to make progress against the world’s largest problems", and we deliberately apply a broad definition of the "world's largest problems". We don't try to follow a specific framework or list of questions (compared to how 80,000 Hours defines the highest-priority problems).
But of course, even though we wish we could cover hundreds of important topics, we only have limited resources and must make choices regarding marginal prioritization. Our principles broadly follow EA's ITN framework, although with a slightly adapted version of each concept.
- Importance: is the topic a big problem for the world? Does it kill people, generate suffering (physical or mental), or cause societal instability? Or, on the positive side, does it unlock potential progress for the world, or preserve something valuable?
- Tractability: is there enough quality data on this topic for us to cover it? Given that OWID's mission consists of relying first and foremost on data to explain important issues, we need reliable, accurate, up-to-date data on a topic if we're going to cover it.
- Neglectedness: is the topic accurately covered by other media, publications, or institutions? Do we often spot confusion or misconceptions about it online? Is there good data on a topic ready to be used somewhere, but it's been ignored or misunderstood for lack of good visualizations and presentation?
In deciding how to prioritize our work, I'd say that importance and tractability are filters that make a topic "OWID material" or not. Neglectedness will typically lead us to prioritize something over the rest of our (very long) wishlist.
Sharang Phadke @ 2023-06-17T15:38 (+7)
What data infrastructure, broadly speaking, would make OWID's work much easier and help your team investigate interesting and new data categories? For instance, what data have you found really hard to get a hold of in the past? What data categories are particularly important but poorly organized out in the wild?
EdMathieu @ 2023-06-24T17:12 (+2)
Better data publishing practices are probably the number 1 answer. My team spends heaps of time importing data that is hard to access and process, poorly documented, or contains obvious mistakes. This applies to virtually every type of data publisher, whether government, big international organizations, NGOs, companies, research teams…
Better data harmonization between governments would also be tremendously helpful. Across many topics, national agencies tend to record and analyze things differently, making the resulting figures hard to compare. Organizations like the UN, WHO, World Bank, and OECD work hard to bridge the gap between national methodologies. Still, a world where governments would stop reinventing the wheel whenever they need to measure something would be great!
There are categories of data that are indeed still relatively inaccessible. One example is satellite data, which is "gatekept" by technological difficulty, and the existing commercial data is costly. High-quality open-domain satellite data would be an excellent opportunity to measure trends like land use, economic activity, pollution, etc.
Global energy data has also been in a strange situation for the last few years, with the data locked behind a paywall by the International Energy Agency. We've been campaigning publicly for this to change, and there have been encouraging signs from the IEA, but nothing concrete has happened yet.
James Özden @ 2023-06-24T16:16 (+6)
To what degree is the content on OWID decided by OWID vs influenced by donors?
For example, I vaguely remember seeing that Longview had donated to OWID, and then also noticed OWID’s newer work on longtermism. Was there any relation between these, and generally how do you try to maintain editorial independence when soliciting donations from foundations/donors who have specific objectives?
EdMathieu @ 2023-06-25T11:47 (+1)
Hey James – great question, thanks!
100% of the content we publish is planned, decided, and created by our team, without direct input from funders or donors.
Generally, we work hard to convince funders to give us unrestricted grants. But some grants we receive are restricted, which means they are tied to a list of deliverables. When we've accepted restricted grants:
- They've only ever been tied to general, non-specific outputs such as "expanding our work on COVID-19", "producing a Global Health Explorer", "maintaining the content in our SDG Tracker", or "improving our content on democracy". This means funders never tell us how to produce this content, what the data should show, what insights users should learn, what they should think about an issue after reading it, etc.
- Funders never get to review or influence the deliverables at any point. Grant reports are typically sent once a year, in which we tell funders, "This year, we produced these things as part of the deliverables for this grant", and link to the content live on our site.
The Longview grant was an unrestricted grant allocated to OWID in 2020, which we used for product development across the site (see our 2020 annual report, page 9). Our article on longtermism was published around two years later, and was entirely disconnected from this donation.
(As a slightly pedantic point: in a very vague and indirect way, there's of course a link there: Longview sees OWID as a charity that cares about the long-term flourishing of humanity, and so they gave us money. And because OWID is a charity that cares about the long-term flourishing of humanity, we thought it'd be great to introduce our audience to longtermism. So these things are not entirely disconnected from a sociological point of view. But in terms of money, deliverables, and editorial freedom, we always make sure they're wholly disconnected.)
Lizka @ 2023-06-20T12:51 (+6)
What's your experience engaging with policymakers (in different places/areas)? Are they aware of your work? Are they excited to use it, or other quantitative/data-based approaches/reports? What value do they tend to get from it?
Relatedly, I've heard a bit about the ways some forecasting projects have been used by policymakers, and also some indications that things like legibility/credibility, question-writing, and writing/explanations are bottlenecks for forecasting being used more (and more usefully) in policy. Are there similar bottlenecks for data/evidence-based policy?
EdMathieu @ 2023-06-26T16:00 (+4)
Hi Lizka – thank you for your thoughtful question!
Our direct engagement with policymakers is somewhat limited, but we do have occasional opportunities to present our work to large international organizations like the UN and WHO. And we know from testimonies and occasional public reports that OWID is also considered very helpful by policymakers at the national level. We know that policymakers, or their aides, value the clarity and conciseness of our work. OWID's approach allows them to comprehend the broader picture quickly, which we believe is mainly due to what we now label as "key insights". This overview provides an immediate understanding of a topic without diving into specifics.
When a more detailed analysis is necessary, our platform allows policymakers to drill down into the data, explore specific time series, and interpret detailed data points. This functionality is beneficial when policymakers want to understand what the data implies, or perhaps bring charts to a meeting, without necessarily jumping to conclusions.
As for bottlenecks in evidence-based policy similar to those in forecasting, we've identified "technical text" as a significant challenge. By technical text, we mean all the information that needs to be presented alongside a chart to make sense, be accurately understood, and be placed into a broader context. This could mean explaining key terms, linking to in-depth articles, discussing the data source, the data's age, and its limitations, etc. We strongly believe that many of our charts could be misunderstood or even misleading without this accompanying text. It's in this space that we feel we bring added value, in contrast to chart-catalog websites like Statista or, to some extent, Wikipedia, which provide the raw data but often lack in-depth explanations.
So, while data is indeed powerful, it's the contextual, nuanced information that often determines the effectiveness of data-based approaches in policymaking.
Angelina Li @ 2023-06-20T22:21 (+4)
I remember people being excited about 'OWID for forecasting', especially for the far-future, last year.
Have you explored the idea of developing forecasting / estimation expertise internally, so that you are able to report on more speculative questions? (I might be wrong, but my impression is that you don't do that much reporting on forecasts or more speculative estimates, except maybe for quite legible / lower-CI stuff like this.)
I think of Epoch AI as doing something like an 'OWID for AI forecasting' model, but would be excited to see more folks do this kind of data reporting in other domains!
EdMathieu @ 2023-06-26T16:16 (+6)
Within the OWID team, there's a mix of enthusiasm and skepticism about forecasting. Many of us see it as a promising tool for a more evidence-based understanding of the world, while others express reservations. Much of this skepticism stems from the fact that, often, forecasts lack clear justifications. While the raw forecast is presented, many sites and projects fail to thoroughly explain the reasoning behind these projections. To make forecasting more valuable and accessible, we believe this aspect needs significant improvement.
For now, we're not planning to start publishing forecasts ourselves. It's quite a significant and potentially risky step, not to mention it being quite outside our core expertise. It might even be considered off-brand: people primarily come to OWID for our ability to synthesize the state of knowledge around many issues, not necessarily for us to put forth our own speculative hypotheses about future events.
That said, we've recently collaborated with Metaculus and Good Judgment on projects aimed at forecasting OWID charts. These have been really fascinating projects and served as good first experiments for us in forecasting. We're open-minded about further incorporating forecasting in the future without straying too far from our mission and core competencies.
Angelina Li @ 2023-06-26T16:27 (+1)
Super interesting, thanks for explaining your reasoning, Ed! (Strong upvoted for your explanation)
+1, I'd be excited for more rigor and norms around reasoning transparency in forecasting as well.
Wow, thanks for linking to the Metaculus and Good Judgment collaborations. Super cool!
Angelina Li @ 2023-06-20T22:11 (+4)
I'm curious how engagement with OWID's longtermism and AI posts has been 1+ years out. Are there any impact stories that have come out of those, and in general how did your readers receive them?
EdMathieu @ 2023-06-23T13:23 (+4)
Thanks for the question, Angelina!
The article on longtermism and our content on AI were published in 2022. They've had great success (6-figure page views in both cases). I was particularly happy that we had no negative reaction to either topic, given that both could have seemed outside of our usual coverage for traditional OWID readers.
On longtermism, the reception was very positive. Max Roser's hourglass chart had a Wait-but-Why vibe that made it particularly popular on social media. My (unsubstantiated) impression is that many people remembered that part of the article more than the broader presentation of longtermism. But if we want existential risks to be taken more seriously, getting more people to adopt a broader perspective of humanity's past and future is probably an essential first step, so I'd say the article was very beneficial overall. Another nice aspect is that it was well-received in longtermist circles; no one seemed to think we had neglected or distorted any angle of the topic.
On AI, the impact has been more immediate. We published a new topic page, 5 articles, and 29 charts late last year. We were delighted that we could give a platform to the excellent data published by Epoch and that it was much more widely seen because of it (both on our site and in re-uses, e.g., in The Economist). Reactions to the 5 articles seemed very positive as well; "Technology over the long run" and "The brief history of artificial intelligence" were the most shared among them.
The most significant limitation is that this was all published just a few weeks before the ChatGPT/GPT-4 craze started. If anything, we're even more convinced now than at the time that AI is one of the world's largest problems, and we're working on an interim update of our content.
Lizka @ 2023-06-20T12:44 (+4)
What are some needs/niches that are close to Our World in Data that you think might need filling — i.e. what are things that might look like competition that you think are promising? E.g.:
- OWID but videos
- OWID-style books
- Something that looks more like a service or a tool
- ???
Do you have a sense for which of these, if any, are more promising? Have you explored options like these?
EdMathieu @ 2023-06-26T16:32 (+7)
Thanks, Lizka!
In an ideal world, we'd be all over these niches ourselves! We're grateful that our articles are well-received, but we know their format (heavy on text and charts) might deter some people.
Videos are a big one. Kurzgesagt and Vox are the ones that come closest in terms of style and quality. (In my dream world, each article we'd publish would have its own Kurzgesagt-style video.) This niche could comfortably accommodate several players if they're ready to meet the high production quality threshold needed.
Regarding books, I'm not so sure about a distinct "niche" as such. I think it comes down to the author's style. Some books (Factfulness is an obvious example, and my colleague Hannah Ritchie's upcoming book will likely fall into this category, too) are essentially doing this already.
For the "tool" niche, I think Tableau, Datawrapper, and the like have it reasonably well covered. There's room for more innovation, but the barrier to entry is higher if you want to offer something genuinely creative and competitive.
The biggest niche I can think of would be "news reporting in the style of OWID". I think there's a great need for this; something that combines OWID's editorial style, high standard of research, and reliance on evidence and data, but applied to current events (like the situation in Ukraine or the migrant crisis).
Vox's reporting style, or what the FT and Economist data teams do, comes somewhat close to this idea. FiveThirtyEight is another comparison, although their focus has primarily been on US politics and sports rather than global issues. (And unfortunately, given recent layoffs there, the future doesn't look too bright for them.) Still, I believe there's a niche for a more prominent, more focused player to emerge in this area.
Lizka @ 2023-06-20T12:59 (+3)
How do most people encounter or engage with OWID's work? Do you try to track this, and estimate how valuable different kinds of engagement are?
I've seen charts in lots of places — Twitter, news articles, blogs, etc. I think the first time I learned that OWID itself was a thing was when someone showed me an article (might have been this one: "The short history of global living conditions and why it matters that we know it") that I thought was really cool (probably around 2019). I had probably seen OWID charts before and hadn't realized it.
Lizka @ 2023-06-20T12:42 (+3)
What are some possible visions for Our World in Data over the next years? I'm interested in things like:
- What amazing success looks like (and maybe also what you expect will more-realistically happen)
- What are different ways OWID could develop (do you know if the organization will be trying to grow a lot? will it keep focusing on visual charts? Etc.)
Lizka @ 2023-06-20T12:35 (+3)
Do you know how much Our World in Data's work and resources are viewed as credible and unbiased? My sense is that it's got a very strong reputation, but I imagine there are still difficulties. If you track this or try to maintain/improve it, how do you do it?
Lizka @ 2023-06-20T12:33 (+3)
How much of your work is on-demand (i.e. someone gives you a grant or asks you to investigate some area)[1] vs. projects that you decide to focus on for some other reason? If you prefer one approach or the other, why is that?
In general, as Kei asks, I'm really curious to hear more about how you decide what to research, and also how you find donors and funders.
Lizka @ 2023-06-20T12:39 (+3)
I think I'm also curious about how you notice that something is a common misconception, or if that's something that you focus on at all. E.g. I think part of the success of this article, "The world is awful. The world is much better. The world can be much better.", is that it hits at a common blindspot/belief (that life for humans today is worse than it was in the past) and then shows evidence against it (same with "Global economic inequality: what matters most for your living conditions is not who you are, but where you are"). (Edited to add: I think a lot of your work on climate also follows this pattern.)
Angelina Li @ 2023-06-20T22:13 (+2)
I think of OWID as having a really great, trustworthy brand! Are there any lessons that you'd like to impart to others trying to occupy a similar niche in data journalism?
Nathan Young @ 2023-06-18T16:20 (+2)
How do you think about data in EA more generally? Is it easy for you to get the data you want or would want? Do you think there are ways that a different culture around data could have healthier outcomes?
nalthaus @ 2023-06-19T14:47 (+1)
Why is there no longitudinal data of global life satisfaction and global happiness here: https://ourworldindata.org/happiness-and-life-satisfaction (there is lots of country-specific longitudinal data)?
Could you add a feature that lets the user combine several countries into one graph (so you could compare, e.g., life satisfaction of Scandinavian countries over time with life satisfaction of South American countries over time)?
EdMathieu @ 2023-06-23T14:10 (+1)
Hi nalthaus, thanks for the question! Calculating a population-weighted global average for Self-reported life satisfaction is on our backlog of issues, so this will be tackled at some point! We'll most likely add continental averages as well.
Your second suggestion touches on a larger issue that we're often considering: how to give more freedom to users to (dis)aggregate data in a way that we don't want to pre-generate ourselves. "Life satisfaction across Scandinavian countries" is a great example of such a request. We have yet to come up with the right ideas (and resources to implement them!) to solve this problem, but it's on the long-term roadmap of our Product & Design team.
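For readers curious what this kind of aggregation involves, here is a minimal sketch of a population-weighted group average using pandas. It is purely illustrative and assumes made-up column names, values, and a hypothetical `weighted_group_average` helper; it is not OWID's actual data or pipeline.

```python
# Minimal sketch (illustrative only, not OWID's codebase) of a population-weighted
# average of self-reported life satisfaction for a user-defined group of countries.
# All column names and figures below are made-up assumptions.
import pandas as pd

data = pd.DataFrame({
    "country":           ["Denmark", "Norway", "Sweden"],
    "year":              [2022, 2022, 2022],
    "life_satisfaction": [7.6, 7.3, 7.4],         # Cantril ladder (0-10), illustrative values
    "population":        [5.9e6, 5.4e6, 10.5e6],  # illustrative populations
})

def weighted_group_average(df, countries):
    """Population-weighted mean life satisfaction per year for the given countries."""
    subset = df[df["country"].isin(countries)]
    weighted = subset.assign(weighted=subset["life_satisfaction"] * subset["population"])
    grouped = weighted.groupby("year")
    return grouped["weighted"].sum() / grouped["population"].sum()

# e.g. a custom "Scandinavia" aggregate, as suggested in the question above
print(weighted_group_average(data, ["Denmark", "Norway", "Sweden"]))
```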
nalthaus @ 2023-07-18T14:54 (+1)
Great, thx for your reply!