Concerns with Intentional Insights

By Jeff Kaufman 🔸 @ 2016-10-24T12:04 (+64)

A recent Facebook post by Jeff Kaufman raised concerns about the behavior of Intentional Insights (InIn), an EA-aligned organization headed by Gleb Tsipursky. In the discussion arising from this, a number of further concerns were raised.

This post summarizes the concerns found with InIn. It also notes some concerns which turned out to be mistaken or unfounded, and facts that arose which reflect well on InIn.

This post was contributed to by Jeff Kaufman, Gregory Lewis, Oliver Habryka, Carl Shulman, and Claire Zabel. They disclose relevant conflicts of interest below.

Outline

1 Exaggerated claims of affiliation or endorsement
1.1 Kerry Vaughan of CEA
1.2 Giving What We Can (GWWC)
1.3 Animal Charity Evaluators (ACE)
2 Astroturfing
2.1 The Intentional Insights blog
2.2 The Effective Altruism forum
2.3 LessWrong
2.4 Facebook
2.4.1 Soliciting upvotes and denying it
2.4.2 Not disclosing paid support
2.5 Amazon
3 Misleading figures
4 Dubious practices
4.1.1 Paid contractors' expected 'volunteering'
4.1.2 Further details regarding contractor 'volunteering'
4.2 "Best-selling author"
5 Inflated social media impact
5.1 Facebook
5.2 The Life You Can Save donations
5.3 Twitter
5.4 Pinterest
5.5 Presentations of media article traffic and reach
5.5.1 TIME article
5.5.2 Huffington Post
6 Mistaken/Unfair accusations
6.1 Supposed linearity of Twitter follower increase
6.2 Objections to Intentional Insights staff 'liking' Intentional Insights content
6.3 'Paid likes' from clickfarms
7 Positives
7.1 Jon Behar
7.2 Additional donations
7.3 Placement of articles in TIME and the Huffington Post
8 Policy responses from InIn
8.1 Post-criticism conflict-of-interest policy
8.2 Post-criticism Facebook boosting
9 Disclosures
10 Response comments from Gleb Tsipursky

1. Exaggerated claims of affiliation or endorsement

Intentional Insights claims 'active collaborations' with a number of Effective Altruist groups in its Theory of Change document which was on its "About" page (August 21, 2016).

In a number of cases InIn makes use of the name of an effective altruist organization without asking for that organization's consent, based on minor interactions such as the organization answering questions about web traffic. From the 'Effective Altruism impact of Intentional Insights' document (August 19, 2016):

As detailed below, we observe that after learning of such claims and use of their names, some of these groups had asked InIn to stop. Yet even in some of these cases InIn had not altered the mentions in its promotional materials months later. Tsipursky also does not appear to have adopted a practice of checking with organizations before using their names in InIn promotional materials.

1.1. Kerry Vaughan of CEA

Tsipursky previously posted notes from a Skype conversation with Kerry Vaughan without his consent, and suggested he had endorsed Intentional Insights where he had not:

Tsipursky later apologized, edited the post, and said he had updated. Yet he later engaged in similar behavior (see sections 1.2 and 1.3 below).

1.2. Giving What We Can (GWWC)

Gleb has taken the Giving What We Can pledge, and contributed an article on the Giving What We Can blog on December 23, 2015. He also mentioned and linked to GWWC in his articles elsewhere.

Michelle Hutchinson, Executive Director of Giving What We Can, wrote to Tsipursky in May 2016 asking him to cease "claiming to be supported by Giving What We Can." However, the use of Giving What We Can's name as an 'active collaboration' was not removed from Intentional Insights' website, and remained in both of the above InIn documents as of October 15, 2016.

1.3. Animal Charity Evaluators (ACE)

In the InIn impact document Tsipursky quotes Leah Edgerton of ACE:

Erika Alonso of ACE subsequently made the following statement:

2. Astroturfing

Astroturfing is giving the misleading impression of unaffiliated ("grassroots") support. In GiveWell's first year its cofounders engaged in astroturfing, and this was taken very seriously by its board. Among other responses, the GiveWell board demoted one of the co-founders and fined both $5,000 each. Tsipursky expressly claimed not to engage in astroturfing:

However, astroturfing is widespread across the Intentional Insights social media presence (documented in the sections below). Tsipursky did qualify his statement with "we are not asking people to do these sorts of activities in their paid time", but lack of payment isn't enough to prevent misleading people about the nature of the support. In any case, the distinction between contractors' paid and unpaid time is blurry (see section 4.1.1).

2.1. The Intentional Insights blog

Paid contractors for Intentional Insights leave complimentary remarks on the Intentional Insights blog, and the Intentional Insights account replies with gratitude, as if the comments were by strangers. At no stage do they disclose the financial relationship that exists between them. In the screenshot below (source), Candice, John, Beatrice, Jojo, and Shyam are all Intentional Insights contractors.

The most recent examples of this happened in late August 2016, after the initial post and discussion with Tsipursky on Jeff's Facebook wall, and during the drafting of this document.

2.2. The Effective Altruism forum

Tsipursky has done the same thing on the Effective Altruism forum. Here is one instance (note that "Nyor" also goes by "Jojo"):

Here is another example (note that "Anthonyemuobo" is a professional handle used by one of Tsipursky's acknowledged contractors, "Sargin"):

2.3. LessWrong

Tsipursky posted a link to some of his wife and InIn co-founder's writing in February 2016, without noting this connection:

This is a minor lapse, and one which Gleb claimed to have learned from and updated on. Yet similar behavior continued:

In March 2016, Intentional Insights' contractors created accounts and started posting non-specific praise on Tsipursky's LessWrong posts:

These are all people Tsipursky pays, but none of them acknowledged it in their comments or their posts in the welcome thread. Additionally, Tsipursky did not acknowledge this relationship when he thanked them for their remarks.

LessWrong user gjm pointed out that this was misleading, and Tsipursky acknowledged this was a problem and commented on Sargin's welcome post:

Tsipursky knew that both Beatrice Sargin and Alex Wenceslao had posted similar comments, since he had replied to them, yet he waited for these to be discovered and pointed out before acting:

This happened a third time, with JohnC2015:

2.4. Facebook

2.4.1. Soliciting upvotes and denying it

Tsipursky claimed "when I make a post on the EA Forum and LW I will let people who are involved with InIn know about it, for their consideration, and explicitly don't ask them to upvote":

In the comment Tsipursky denies soliciting upvotes, and demands that accusations that he did be substantiated or withdrawn. Six hours later someone responded with a screenshot of a post Tsipursky had made to the Intentional Insights Insiders group showing Tsipursky soliciting upvotes:

Tsipursky's response, a couple hours later in the same thread:

Tsipursky either genuinely believed posts like the above do not ask for upvotes, or he believed statements that are misleading on a common-sense interpretation are acceptable provided they are arguably 'true' on some tendentious reading. Neither is reassuring. [He subsequently conceded this was 'less than fully forthcoming'.]

2.4.2. Not disclosing paid support

Intentional Insights proposed producing EA T-Shirts, and received multiple criticisms. Tsipursky claimed he had run the design by multiple people. Again, Tsipursky did not disclose that at least five of them were people he pays:

2.5. Amazon

Tsipursky's contractor posted a 5-star review for his self-help book on Amazon without disclosing the affiliation:

Tsipursky emailed copies of his self-help book to Intentional Insights volunteers, including contractors, who responded by posting 5-star reviews on Amazon:

He later followed up with:

This is true but incomplete: the 8th review is by Asraful Islam, a volunteer affiliated with Intentional Insights.

Another Intentional Insights affiliate, Elle Acquino, unpaid at that time but now a paid virtual assistant, posted another 5-star review, not in the top 10. In that review, however, the connection to Tsipursky and his nonprofit institute was disclosed.

3. Misleading figures

In December 2015 and January 2016, Tsipursky repeatedly claimed that his articles were shared thousands of times as evidence of the effectiveness of his approach. In fact, he had been reporting Facebook 'likes' and all views on Stumbleupon as shares, greatly exaggerating the extent of social media engagement.

The initial point reflected a common issue with the interpretation of social media activity counters on websites. After this was explained to him, Tsipursky claimed to have updated on the correction. However, a June 2016 document on Intentional Insights' Effective Altruism impact again reported views as shares, exaggerating sharing many times over.

4. Dubious practices

4.1.1. Paid contractors' expected 'volunteering'

Tsipursky only takes on contractors who spend at least two hours "volunteering" for Intentional Insights for each paid hour:

In a follow-up discussion, Tsipursky suggested that contractors could temporarily reduce their volunteer hours in special circumstances, but he would not affirm that contractors would be allowed to simply say no to "volunteering":

Depending on the nature of the volunteer work, this requirement seems potentially unethical, effectively requiring that contractors do three times as much work for a fixed amount of money. We also suggest this relationship undermines the distinction Tsipursky offers between 'paid' and 'volunteer time' and the defence that the promotion his contractors undertake on his behalf is innocuous as it occurs in their 'volunteer time'.

4.1.2. Further details regarding contractor 'volunteering'

Subsequent to the preparation of the above section, Tsipursky provided additional information about how he came into contact with contractors, their donations, prior unpaid volunteering, wages, and other details as evidence of genuine support. These details do provide such evidence, but they also reinforce concerns about the linkage of paid and unpaid work and contractors' financial interests.

Tsipursky states the following regarding initial meetings and hiring:

Tsipursky stated the following regarding the length of unpaid volunteering prior to the first paid work:

He also notes donations by contractors, implemented by reducing their paid hours or paid hour wage rate, as evidence of genuine support:

I have pointed out many times that there is plenty of evidence showing that those folks who do contracting are passionate enthusiasts for InIn. Let's take the example of John Chavez, who the document brought up. He chose to respond to a fundraising email to our supporter listserve in June 2016 – long before Jeff Kaufman's original post – by donating $50 per month to InIn out of his $300 monthly salary:

This is bigger than a typical GWWC member, at over 15% of his annual income. Let me repeat – he voluntarily, out of his own volition in response to a fundraising that went out to all of our supporters, chose to make this donation. Just to be clear, we send out fundraising letters regularly, so it’s not like this was some special occasion. It was just that – as he said in the letter – it happened to fall on the 1-year anniversary of him joining InIn and he felt inspired and moved by the mission and work of the organization to give.

Before you go saying John is unique, here is another screenshot of a donation from another contractor who in October 2015, in response to a fundraising email, made a $10/month donation:

Again, voluntarily, out of her own volition, she chose to make this donation.

Tsipursky also indicates that paid and unpaid hours by contractors constitute only a minority of work hours at InIn, with most hours contributed by volunteers without financial compensation:


Regarding wages and requirement/expectations of unpaid volunteering, Tsipursky wrote the following:

The Upwork (formerly known as Odesk) freelancer marketplace on which contractors are hired has a minimum wage of $3.00 per hour. Combined with the expected unpaid volunteering the typical wage would be $1.00, 1/3rd of the minimum for the platform.

John is given as an example of a higher-paid contractor at $7.50 per hour. However, this is combined with 3 hours of unpaid volunteering for each paid hour, rather than 2, for a combined wage of $1.875 per hour, prior to his donation of 1/3 of that wage.

In effect, the expectation of volunteering systematically circumvents the Upwork minimum wage for contractors. However, it should be noted that the Upwork minimum wage is a corporate policy, and not a national or local labor law. Contractors in low-income countries may be earning substantially more than the local minimum wages or average incomes. For example, according to Wikipedia the hourly minimum wage in US dollars at nominal exchange rates is $0.54 in Nigeria. In the Philippines minimum wages vary by location and sector, but Wikipedia lists a range of roughly $0.60-$1.20 per hour for non-agricultural workers, with the upper end applying in the capital, Manila. So the wage per combined (paid+volunteer) hour of work would not appear to conflict with legal minimum wages in contractors' jurisdictions. Furthermore, in a number of these jurisdictions the minimum wage is closer to the median wage, and unemployment is high.
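To make the wage arithmetic above concrete, the effective rate per combined (paid + volunteer) hour can be computed as follows. This is a back-of-envelope sketch using the figures quoted in this section; `effective_hourly_wage` is our own illustrative helper, not anything InIn uses:

```python
def effective_hourly_wage(paid_rate, volunteer_hours_per_paid_hour, donation_fraction=0.0):
    """Wage per combined (paid + volunteer) hour, optionally net of donations back to InIn."""
    combined_rate = paid_rate / (1 + volunteer_hours_per_paid_hour)
    return combined_rate * (1 - donation_fraction)

# Typical contractor: Upwork's $3.00/hr minimum with 2 expected volunteer hours per paid hour.
typical = effective_hourly_wage(3.00, 2)        # $1.00 per combined hour

# John: $7.50/hr with 3 volunteer hours per paid hour, before donating 1/3 of the wage back.
john = effective_hourly_wage(7.50, 3)           # $1.875 per combined hour
john_net = effective_hourly_wage(7.50, 3, 1/3)  # roughly $1.25 after the donation
```

These figures match the ones in the text: $1.00 per combined hour for a typical contractor, and $1.875 (pre-donation) for John.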

Regarding the link between paid and unpaid hours, Tsipursky describes it as an informal understanding:

In aggregate the additional statements provide evidence of pre-existing support for InIn from new contractors. However, they also confirm a linkage of paid and unpaid labor, and contractor financial interests in promotional activity occurring during 'volunteer' hours.

4.2. "Best-selling author"

Tsipursky includes being a 'best-selling author' in his standard bio. For example, on his Patreon:

And:

And on his Amazon author page:

Normally, a reader would take "best-selling author" to mean hitting a major best-seller list like the New York Times, which indicates that very many people have decided to buy the book, and is a hard signal to fake. In Tsipursky's case, "best-selling author" means that his book was very briefly the top seller in a sub-sub-category of Amazon. Further, he reports offering his book for free and encouraging friends and contractors to download and review it. In its first two weeks the book sold 50 copies at $3 each. Cumulatively it has sold 500 copies at $3 each, and been downloaded 3500+ times free. In contrast, NYT bestseller status requires thousands of sales over the first week. Amazon bestseller status is calculated hourly by category: in small categories three purchases in an hour can win the #1 bestselling author label.

Many of those giving the book 5 star reviews are social contacts of Tsipursky, some of them paid or volunteer Intentional Insights staff, but do not disclose this association (see section 2.5).

As of August 22, 2016 the book is ranked as follows:

In light of this, calling oneself a 'bestselling author' on this sort of performance is potentially misleading.

We note that the practice of claiming bestselling author status using bestseller lists that involve very small actual sales may be widespread. This does not, however, prevent it from being misleading or controversial. For example, when Brent Underwood attained Amazon best-seller status by spending a few dollars, in less than an hour, on a book that was simply a picture of his foot, media coverage generally suggested that this highlighted a problematic practice.

5. Inflated social media impact

5.1. Facebook

Tsipursky has cited social media engagement as evidence of impact. However, in many cases it appears that this engagement is illusory. In the case of Facebook, it appears to have resulted from paid Facebook post boosting, which led to hundreds of likes on posts from clickfarms, in a process described by Veritasium: clickfarm accounts like enormous numbers of things they have not been directly paid to like in order to manipulate Facebook’s algorithms. Facebook boosting systematically attracts these clickfarm accounts, a risk which is exacerbated by boosting to regions where clickfarms are located (although clickfarms also have fake accounts purporting to be from all around the world).

In the case of InIn posts, InIn paid for that boosting. In February 2016, Tsipursky argued that this was resulting in genuine engagement and reach:

For a number of InIn blog posts with large numbers of likes (for example 318 for this recent one) these likes appear to be primarily the result of clickfarms. Accounts liking this post like vast numbers of disparate things. Here are some random selections from the middle of the list of that post:

There is further circumstantial evidence: the likes are often from accounts in low-income countries with substantial clickfarm operations. Tsipursky defended this as coincidental overlap caused by Intentional Insights' targeting of low-income countries; however, countries with similar demographics but without large clickfarm operations are not well represented.

In arguing for the impact of his writing, Tsipursky cited a post on the TLYCS blog that got 500 likes in its first day, while typical posts got 100-200 likes:

However, this appears to also be a case of Facebook ad boosting eliciting engagement from clickfarms, this time by a former TLYCS employee (subsequently asked to stop by TLYCS) rather than InIn, according to this statement from TLYCS' Jon Behar:

The profiles contributing the likes show no other engagement with TLYCS, or with EA ideas:

After Jeff Kaufman raised concerns about the pattern of Facebook likes in February 2016, Tsipursky does not appear to have looked into the issue further prior to the August 2016 discussion, when outside observers provided indisputable evidence and explained the role of boosting in generating clickfarm likes. While the boosting-clickfarm link is counterintuitive, the lack of any other engagement by the clickfarm accounts was apparent both before and after the concerns were raised in February. The failure to examine the ineffectiveness of these social media channels, even after concerns were raised, raises questions about InIn's practices as an outreach and content marketing organization.

5.2. The Life You Can Save donations

In his "Effective Altruism impact of Intentional Insights" document (archived copy), Tsipursky claims that content he has published with The Life You Can Save is able to "regularly reach an audience of over 5,000, at least 12% of whom make a donation", suggesting over 600 donations per article, based on a reference letter from a former TLYCS employee. However, these figures were incorrect: TLYCS estimates that the total number of visitors who landed on Tsipursky's blog posts at the TLYCS blog was ~3,000 (rather than tens of thousands), with donations directly from those pages likely totalling 2-3 (rather than hundreds).

While the reference letter Tsipursky cites could easily give that false impression, it is implausible in light of other information available to him about the impact of his pieces. For example, Tsipursky also cites an article in a major news outlet as producing two donations to GiveDirectly totalling $500:

Since two donations is far less than ~600, this "12% of 5,000 views" number was clearly not sanity checked before being used to argue the case for Intentional Insights to EAs and in a fundraising document aimed at EAs. It's possible that Tsipursky simply took a surprisingly good estimate from a partner organization at face value, but one might expect an expert in marketing to investigate why this channel was performing so much better than his other channels.
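The gap between the claimed and observed figures is easy to make explicit (a back-of-envelope calculation using the numbers above; the variable names are ours):

```python
claimed_audience = 5_000       # claimed readers per article
claimed_donation_rate = 0.12   # "at least 12% of whom make a donation"
implied_donations = claimed_audience * claimed_donation_rate   # 600 donations per article

observed_donations = 2.5       # TLYCS's estimate: likely 2-3 donations in total
overstatement_factor = implied_donations / observed_donations  # roughly 240x
```

Even treating the estimates loosely, the claimed figure exceeds the observed one by more than two orders of magnitude, which is why a simple sanity check would have caught it.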

5.3. Twitter

Tsipursky implied that his 10k Twitter followers represent organic interest:

The InIn account follows approximately as many accounts as follow it: 11.7k to 11.4k. Oliver observed that many of these accounts have "100% follow-back" in their descriptions. It seems they are offering an exchange: InIn follows these accounts, and they follow it back in return, or vice versa. This is not an indication of actual interest from fans, and these accounts have almost no organic engagement with InIn, such as retweets:

5.4. Pinterest

InIn follows over 20,000 people on Pinterest, far more people than follow it. As on the InIn Facebook page and Twitter, follower engagement is extremely low, and dominated by persons affiliated with InIn, suggesting the vast majority of followers are not genuine.

Examining the profiles of followers, there appears to be a very high rate of clickfarm/advertising accounts. Here are 10 randomly selected InIn Pinterest follower accounts. 10 out of 10 appear to be spam/advertising/clickfarm accounts:

5.5. Presentations of media article traffic and reach

5.5.1. TIME article

In the InIn EA impact document we see this:

The document does not make clear that the article did not appear in the print magazine, so print readers would not be exposed to it there. Online, we are left to anchor on a figure of 65 million views, without any reference to the actual views of the article (which were tremendously lower).

Somewhat later in the document we see this:

As another example, here are numbers in a spreadsheet we set up recently to track clicks to EA nonprofit websites from the Time piece we published.

However, while the article made the case for GiveWell recommended charities and EA charity evaluators, only 132 clicks reached those organizations through the article, 70 of which did not immediately bounce, according to InIn's traffic figures. Specifically, in the original InIn spreadsheet the 'signed up to newsletter or converted in other ways' column had a value of 13 for ACE, and 1 for 'clicked on donate button'.

The corrected spreadsheet shows a value of 2 rather than 13 for 'signed up to newsletter.'

Thus InIn knew that the product of traffic and click-through was very low, suggesting some combination of low traffic for a piece on Time's website and low click-through rates. However this negative information was removed from the main text of the document while the 65 million figure (for all articles on the TIME website, including dubious traffic) was made prominent.

5.5.2. Huffington Post

The InIn EA impact document also included this discussion of a Huffington Post article:

However, he provided no evidence of reaching new audiences via the placement in the Huffington Post. Instead, he provided an example of an already supportive Facebook friend, who apparently encountered the article via Tsipursky's Facebook page, not the Huffington Post.

6. Mistaken/Unfair accusations

6.1. Supposed linearity of Twitter follower increase

It was suggested that Tsipursky's Twitter page shows surprisingly linear increases in followers over time (e.g. +8 followers a day for 10 days in a row), which may be indicative of click-farming. This piece of evidence is likely mistaken, as the tool used (sharecounter) probably linearly interpolates over days where it does not record a user's Twitter followers, and thus the apparent linearity is an artifact.

6.2. Objections to Intentional Insights staff 'liking' Intentional Insights content

In the course of the original discussion of Jeff's post on Facebook, numerous people took exception to staff or volunteers 'liking' or supporting InIn content. This criticism is misguided: this is common practice both for nonprofits generally and within the EA community, where many EAs affiliated with a given group 'like' or share content without disclosing their affiliation. Although issues around appropriate disclosure can be subtle, acts like these on social media do not, on reflection, seem significant enough to the authors of this document to warrant disclosure of interests.

6.3. 'Paid likes' from clickfarms

In the February 2016 discussion it was suggested that Tsipursky might be directly paying for likes from clickfarms. However, as discussed in section 5.1, while the likes in question appear to have resulted from paid Facebook boosting, and to be from clickfarms, they were not directly paid for. Instead, the boosting attracted clickfarm likes through an accidental process explained well in the linked Veritasium video.

7. Positives

In the course of research into and discussion around InIn, some facts that reflect well on InIn were discovered. These are listed below. We don't think this comprises all evidence favourable to InIn: the impact document, Tsipursky's post on the EA forum, and the Intentional Insights website offer further evidence. (We have not looked at these closely enough to have a view on them.)

7.1. Jon Behar

One TLYCS employee who worked with Tsipursky on Giving Games says Tsipursky has made helpful introductions:

Behar is also quoted in the InIn EA impact doc as saying:

7.2. Additional donations

TLYCS has information indicating that Tsipursky's posts combined drove about two or three donations, and that the Huffington Post article resulted in two donations to GiveDirectly totaling $500. Tracking donations is hard, so this is definitely an underestimate.

7.3. Placement of articles in TIME and the Huffington Post

Tsipursky's articles in TIME and the Huffington Post got lots of exposure for EA ideas. Additionally, being able to get articles placed there is impressive.

8. Policy responses from InIn

During discussions with Tsipursky regarding drafts of this document he mentioned some InIn policy changes made in response to the criticisms. This section does not reflect any other changes InIn may have made, primarily because we haven't been able to put in the time to follow up on each practice and see whether it has continued. We also note that Tsipursky provided additional information regarding Amazon sales, contractor names, and payment practices upon request for this document.

8.1. Post-criticism conflict-of-interest policy

Following the discussion under Jeff Kaufman's post in August 2016, InIn created a conflicts of interest policy document:

8.2. Post-criticism Facebook boosting

Tsipursky now states:

Regarding InIn social media policy, we are making sure to avoid boosting any more posts to clickfarm countries. We're generally not boosting posts right now to anyone but fans of the page who live in the US and other rich countries. We found we couldn't ban identifiable clickfarm accounts from the FB page, unfortunately, so we're being really cautious about boosting posts.

9. Disclosures

Many people contributed to this document, some of them anonymously. Below are disclosures from people who contributed substantially and want to be clear about any potential conflicts of interest. None of the individuals below contributed on behalf of an employer or organization, and their contributions should not be taken to imply any stance on the part of any organization with which they are affiliated.

10. Response comments from Gleb Tsipursky

Tsipursky has responded in the comments below: part one, part two, part three.


undefined @ 2016-10-24T18:30 (+50)

My fellow contributors and I aimed in this document to have as little of an 'editorial line' as possible: we were not all in complete agreement on what this should be, so thought it better to discuss the appropriate interpretation of the data we provide in the comments. I offer mine below: in addition to the disclaimers and disclosures above, I stress I am speaking for myself, and not on behalf of any other contributor.

I believe InIn and Tsipursky are toxic to the EA community. I strongly recommend that EAs do not spend time or money on InIn going forward, nor any future projects Tsipursky may initiate. Insofar as there may be ways for EA organisations to insulate themselves from InIn, I urge them to avail themselves of these opportunities.

A key factor in this extremely adverse judgement is my extremely adverse view of InIn's product. InIn's material is woeful: a mess of misguided messaging (superdonor, the t-shirts, 'effective giving' versus 'effective altruism', etc. etc.), crowbarred-in aspirational pop-psychology 'insights', tacky design and graphics, and oleaginous self-promotion seeping through wherever it can (see, for example, the free sample of Gleb's erstwhile 'amazon bestseller'). Although mercifully little of InIn's content has anything to do with EA, whatever does reflects poorly on it (c.f. prior remarks about people collaborating with Tsipursky as a damage limitation exercise). I have yet to meet an EA with a view of InIn's content better than mediocre-to-poor.

Due to this, the fact that the social 'reach' of InIn is mostly illusory may be a blessing in disguise: I am genuinely uncertain whether low-quality promotion of sort-of EA is better than nothing given it may add noise to higher quality signal notwithstanding the (likely fairly scant) counterfactual donations it may elicit. In any case, that it is illusory is a black mark against InIn's instrumental competencies necessary for being an effective outreach organisation.

What I find especially shocking is that this meagre output is the result of gargantuan amounts of time spent. Tsipursky states across assistants, volunteers, and staff, about 1000 hours are spent on InIn each week: if so, InIn is likely the leader among all EA orgs for hours spent - yet, by any measure of outputs, it is comfortably among the worst.

Would that it just be a problem of InIn being ineffective. The document above illustrates not only a wide-ranging pattern of at-best-shady practices, but a meta-pattern of Tsipursky persisting with these practices despite either being told not to or saying himself he wasn't doing them or won't do them again. This record is challenging to reconcile with Tsipursky acting in good faith, although I can fathom the possibility given the breadth and depth of his incompetence. Regardless of intention, I am confident the pattern of dodgy behaviour will continue with at most cosmetic modification, and it will continue to prove recalcitrant to any attempts to explain or persuade Tsipursky of his errors.

These issues incur further costs to Effective Altruism. There are obvious risks that donors 'fall for' InIn's self-promotion and donate to it instead of something better. There are similar reputational risks of InIn's behaviour damaging the EA brand independent of any risks from its content. Internally, acts like this may act to burn important commons in how individuals and organisations interact in the EA community. Finally, although in part self-inflicted, monitoring and reporting these things sucks up time and energy from other activities: although my time is basically worthless, the same cannot be said for the other contributors.

In sum: InIn's message is at best a cargo cult version of EA with dubious value. Despite being an outreach organisation, it is incompetent at fundamental competencies for its mission. A shocking number of volunteer hours are being squandered. Tsipursky is incapable of conducting himself to commonsense standards of probity, let alone the higher ones that should apply to the leader of an EA organisation. This behaviour incurs further external and internal costs to the EA movement. I see essentially no prospect of these problems being substantially remediated such that InIn's benefit to the community outweighs its costs, still less that it would be competitive with other EA groups or initiatives. Stay away.

[Edit: I previously said '[InIn] is comfortably the worst [in terms of outputs]', it has been pointed out there may be other groups with similarly poor performance, so I've (much belatedly) changed the wording.]

undefined @ 2016-10-25T13:50 (+7)

I suspect the reason InIn's quality is low is because, given their reputation disadvantage, they cannot attract and motivate the best writers and volunteers. I strongly relate to your concerns about the damage that could be done if InIn does not improve. I have severely limited my own involvement with InIn because of the same things you describe. My largest time contribution by far has been in giving InIn feedback about reputation problems and general quality. A while back, I felt demoralized with the problems, myself, and decided to focus more on other things instead. That Gleb is getting so much attention for these problems right now has potential to be constructive.

Gleb can't improve InIn until he really understands the problem that's going on. I think this is why Intentional Insights has been resistant to change. I hope I provided enough insight in my comment about social status instincts for it to be possible for us all to overcome the inferential distance.

I'm glad to see that so many people have come together to give Gleb feedback on this. It's not just me trying to get through to him by myself anymore. I think it's possible for InIn to improve up to standards with enough feedback and a lot of work on Gleb's part. I mean, that is a lot of work for Gleb, but given what I've seen of his interest in self-improvement and his level of dedication to InIn, I believe Gleb is willing to go through all of that and do whatever it takes.

Really understanding what has gone wrong with Intentional Insights is hard, and it will probably take him months. After he understands the problems better, he will need a new plan for the organization. All of that is a lot of work. It will take a lot of time.

I think Gleb is probably willing to do it. This is a man who has a tattoo of Intentional Insights on his forearm. Because I believe Gleb would probably do just about anything to make it work, I would like to suggest an intervention.

In other words, perhaps we should ask him to take a break from promoting Intentional Insights for a while in order to do a bunch of self-improvement, make his major updates and plan out a major version upgrade for Intentional Insights.

Perhaps I didn't get the memo, but I don't think we've tried organizing in order to demand specific constructive actions first before talking about shutting down Intentional Insights and/or driving Gleb out of the EA movement.

The world does need an org that promotes rationality to a broader audience... and rationalists aren't exactly known for having super people skills... Since Gleb is so dedicated and is willing to work really hard, and since we've all finally organized in public to do something about this, maybe we ought to try using this new source of leverage to heave him onto the right track.

undefined @ 2016-10-30T08:05 (+9)

Hello Kathy,

I have read your replies on various comment threads on this post. If you'll forgive the summary, your view is that Tsipursky's behaviour may arise from some non-malicious shortcomings he has, and that, with some help, these can be mitigated, thus leading InIn to behave better and do more good. In medicalese, I'm uncertain of the diagnosis, strongly doubt the efficacy of the proposed management plan, and I anticipate a bleak prognosis. As I recommend generally, I think your time and laudable energy is better spent elsewhere.

A lot of the subsequent discussion has looked at whether Tsipursky's behaviour is malicious or not. I'd guess in large part it is not: deep incompetence combined with being self-serving and biased towards wanting one's org to succeed probably explain most of it - regrettably, Tsipursky's response to this post (e.g. trumped-up accusations against Jeff and Michelle, pre-emptive threats if his replies are downvoted, veiled hints at 'wouldn't it be bad if someone in my position started railing against EA', etc.) seems to fit well with malice.

Yet this is fairly irrelevant. Tsipursky is multiply incompetent: at creating good content, at generating interest in his org (i.e. almost all of its social media reach is illusory), at understanding the appropriate ambit for promotional efforts, at not making misleading statements, and at changing bad behaviour. I am confident that any EA I know in a similar position would not have performed as badly. I highly doubt this can all be traced back to a single easy-to-fix flaw. Furthermore, I understand multiple people approached Tsipursky multiple times about these issues; the post documents problems occurring over a number of months. The outside view is not favourable to yet further efforts.

In any case, InIn's trajectory in the EA community is probably fairly set at this point. As I write this, InIn is banned from the FB group, CEA has officially disavowed it, InIn seems to have lost donors and prospective donations from EAs, and my barometer of 'EA public opinion' is that almost all EAs who know of InIn and Tsipursky have very adverse attitudes towards both. Given the understandable reticence of EAs towards corporate action like this, one can anticipate these decisions have considerable inertia. A nigh-Damascene conversion of Tsipursky and InIn would be required for these things to begin to move favourably to InIn again.

In light of all this, attempting to 'reform InIn' now seems almost as ill-starred as trying to reform a mismanaged version of homeopaths without borders: so great a transformation is required that it would surely be better to start afresh. The opportunity cost is also substantial, as there are other better performing EA outreach orgs (i.e. all of them), which promise far greater returns on the margin for basically any return one might be interested in. Please help them out instead.

undefined @ 2016-10-30T16:01 (+9)

I'm not completely sure what's going on with Gleb, but I feel a great deal of concern for people with Asperger's, and I think it made me overly sympathetic in this case. Thank you for this.

undefined @ 2016-10-30T18:10 (+13)

One thing to consider is that too much charity for Gleb is actively harmful for people with ASDs in the community.

If I am at a party of a trusted friend and know they've only invited people they trust, and someone hurts my feelings, I'm likely to ascribe it to a misunderstanding and talk it out with them. If I'm at a party where lots of people have been jerks to me before, and someone hurts my feelings, I'm likely to assume this person is a jerk too and withdraw.

By saying "I'm updating" and then repeating the same problems, Gleb is lessening the value of those words. He is teaching people it's not worth correcting others, because they won't change. This is most harmful to the people who most need the most direct feedback and the longest lead time to incorporate it.

undefined @ 2016-11-02T04:20 (+3)

Wow. More excellent arguments. More updates on my side. You're on fire. I almost never meet people who can change my mind this much. I would like to add you as a friend.

undefined @ 2016-10-25T03:01 (+6)

[This was originally a comment calling for Gleb to leave the EA community with various supporting arguments, but I've decided I don't endorse online discussions as a mechanism for asking people to leave EA. See this comment of mine for more.]

CarlShulman @ 2016-10-25T03:34 (+9)

When I first talked to Gleb about EA, he offered an objection I don't remember (something about it being too cold or too demanding). He became interested a while after I mentioned that EAs might want to fund his rational thinking outreach work. I'm not sure Gleb has ever given money to an EA charity that wasn't his own.

He wrote that he is a 'monthly donor' to CFAR.

On the other hand a cynic might note that he has used his interactions with CFAR to promote himself and his organization, e.g. his linked favorable review of CFAR comes with a few plugs for Intentional Insights, and CFAR (or rather the erroneous acronym-unpacking 'Center for Advanced Rationality') appeared as a collaboration in InIn promotional documents. My understanding is that the impression that he was aligned with CFAR (and EA) had also made some CFAR donors more open to InIn fundraising pitches.

He has also taken the Giving What We Can pledge, but I don't know what that means. He has said he and his wife fund most of InIn's budget (which would presumably be more than 10% of his income) and claims that it is highly effective, so he might take that to satisfy his pledge.

[Disclosure: my wife is the executive director of CFAR, but I am speaking only for myself.]

undefined @ 2016-10-24T14:39 (+29)

Note: I am socially peripheral to EA-the-community and philosophically distant from EA-the-intellectual-movement; salt according to taste.

While I understand the motivation behind it, and applaud this sort of approach in general, I think this post and much of the public discussion I've seen around Gleb are charitable and systematic in excess of reasonable caution.

My first introduction to Gleb was Jeff's August post, read before there were any comments up, and it seemed very clear that he was acting in bad faith and trying to use community norms of particular communication styles, owning up to mistakes, openness to feedback, etc. to disarm those engaging honestly and enable the con to go on longer. I don't think I'm an especially untrusting person (quite the opposite, really), but even if that's the case nearly every subsequent revealed detail and interaction confirmed this. Gleb responds to criticism he can't successfully evade by addressing it in only the most literal and superficial manner, and continues on as before. It is to the point that if I were Gleb, and had somehow honestly stumbled this many times and fell into this pattern over and over, I would feel I had to withdraw on the grounds that no one external to my own thought processes could possibly reasonably take me seriously and that I clearly had a lot of self-improvement to do before engaging in a community like this in the future.

The responses to this behavior that I've seen are overwhelmingly of the form of taking Gleb seriously, giving him the benefit of the doubt where none should exist, providing feedback in good faith, and responding positively to the superficial signs Gleb gives of understanding. This is true even for people who I know have engaged with him before. I'm not completely confident of this, but the pattern looks like people are applying the standards of charity and forgiveness that would be appropriate for any one of these incidences in isolation, not taking into account that the overall pattern of behavior makes such charitable interpretations increasingly implausible. On top of that, some seem to have formed clear final opinions that Gleb is not acting in good faith, yet still use very cautious language and are hesitant to take a single step beyond what they can incontrovertibly demonstrate to third parties.

A few examples from this post, not trying to be comprehensive:

Moreover, the fully comprehensive nature of the post and the painstaking lengths it goes to separate out definitely valid issues from potentially invalid ones seems to be part of the same pattern. No one, not even Gleb, is claiming that these instances didn't happen or that he is being set up, yet this post seems to be taking on a standard appropriate for an adversarial court of law.

And this is a problem, because in addition to wasting people's time it causes people less aware of these issues to take Gleb more seriously, encourages him to continue behaving as he has been, and I suspect in some cases inclines even the more knowledgeable people involved to trust Gleb too much in the future, despite whatever private opinions they may have of his reliability. At some point there needs to be a way for people to say "no, this is enough, we are done with you" in the face of bad behavior; in this case if that is happening at all it is being communicated behind-the-scenes or by people silently failing to engage. That makes it much harder for the community as a whole to respond appropriately.

undefined @ 2016-10-24T16:28 (+33)

I take your point as "aren't we being too nice to this guy?" but I actually really like the approach taken here, which seems extremely fair-minded and diligent. My suspicion is this sort of stuff is long-term really valuable because it establishes good norms for something that will likely recur in future. I'd be much more inclined to act with honesty if I believed people would do an extremely thorough public investigation into everything I'd said, rather than just calling me names and walking away.

undefined @ 2016-10-25T03:10 (+3)

I'd be much more inclined to act with honesty if I believed people would do an extremely thorough public investigation into everything I'd said, rather than just calling me names and walking away.

I don't understand what you're claiming here. Are you saying you'd be honest in a community if you thought it would investigate you a lot to determine your honesty, but dishonest otherwise? Why not just be honest in all communities, and leave the ones you don't like?

undefined @ 2016-10-25T11:23 (+6)

I think he means that it is human behaviour to do that, not that he does it himself.

undefined @ 2016-10-25T23:34 (+1)

I literally still don't understand. I can understand the motivation to be an asshole in communities you think won't treat you fairly, but why be a lying asshole? I think the OP wrote "honesty" and meant something else.

undefined @ 2016-10-25T23:45 (+1)

I think the common point of intervention for people telling mis-truths is not holding themselves back when they don't really have enough evidence. A person might be about to dash off a quick reply, and in most communities, know that they're not going to be held accountable for any mischaracterisations of others' opinions, or for referring inaccurately to studies and data. In those communities, the comments are awful. In communities where you know that, if you do this over a sustained period, Carl Shulman, Jeff Kaufman, Oliver Habryka, Gregory Lewis and more are gonna write tens of thousands of words documenting your errors, you'll be more likely to note when you haven't quite substantiated the comment you're about to hit 'send' on.

undefined @ 2016-10-26T17:49 (+3)

There's an important difference between repeatedly making errors, jumping to conclusions, or being attached to a preconceived notion (all of which I've personally done in front of Carl plenty of times), and the sort of behavior described in the OP, which seems more like intentional misrepresentation for the sake of climbing a social status gradient.

undefined @ 2016-10-24T19:25 (+27)

I'd like to agree partially with MichaelPlant and Paul_Crowley, in so far as I'm glad that I'm part of a community that responds to problems in such a charitable and diligent manner. However, I feel they missed the most important point of shlevy's comment. Without arguing for a less fair-minded and thoughtful response, we can still ask the following: Gleb started InIn back in 2014; why did it take us two years to get to the point where we were able to call him out on his bad behaviour? This could've been called out much earlier.

I think the answer looks like this:

Firstly, Gleb has learned the in-group signals of communicating in good-faith (for example, at every criticism, he says he has "updated", and he says 'thank you' for criticism). This alone is not a problem - it would merely take a few people to realise this, call it out, and then he could be asked to leave the community.

There's a second part however, which is that once a person has learned (from experience) that Gleb is acting in bad faith, the next time that person comes to the discussion, everybody else sees the standard signals of good-faith communication, and as such the person may be hesitant to treat Gleb as they would treat someone else who was clearly acting in bad faith. This is because they would be seen as unnecessarily harsh by people without the background experiences - as was seen multiple times in the original Facebook thread, when people (who did not have the past experience with Gleb) were confused by the harshness of the criticism, and criticised the tone of the conversation. My guess for the fundamental reason that we are having this conversation now, is that Jeff Kaufman bravely made his beliefs about Gleb common knowledge - he made a blog post about InIn, after which everyone else realised "Oh, everyone else believes this too. I'm not worried any more that everyone will think negatively of me for acting as though Gleb is acting in bad faith. I will now let out the piled up problems I have with Gleb's behaviour."

To re-iterate, it's delightful to be part of a community that responds to this sort of situation by spending ~100s of hours (collectively) and ~100k words (I'm counting the original Facebook thread as well as the post here) analysing the situation and producing a considered, charitable yet damning report. However, it's important to realise that there are communities out there for whom Gleb would've been outed in months rather than years, and without the time of many top researchers in the community wasted.

I'm not sure what the correct norms to have are. I'd suggest that we should be more trusting that when someone in the community criticises someone else not in the community, they're doing it for good reasons. However, writing that out is almost self-refuting - that's what all insular communities are doing. Perhaps appointing a small group of moderators for the community whom we trust. That's how good online communities often work; perhaps the model can be extended to the EA community (which is significantly more than just an online community). I certainly want to sustain the excellent norms of charity, diligence and respect that we currently have, something necessary to any successful intellectual project.

undefined @ 2016-10-26T16:36 (+26)

I just want to highlight that I feel like part of this post is based on a false premise; you mention InIn was started in 2014. While that may be true, all of the incidents in EA (and Less Wrong) circles cited above date to November 2015 or later. Gleb's very first submission in the EA forum is in October 2015. By saying 'it took two years' and then talking about 'months rather than years' you give the impression that Gleb could have been excluded sometime back in 2015 and would have been elsewhere, which I think is pretty misleading (though presumably unintentionally so).

The truth is that it took a little over 9 months from Gleb's first post to Jeff's major public criticism. 9 months and a decent amount of time is not trivial. But let's not overstate the problem.

"There's a second part however, which is that once a person has learned (from experience) that Gleb is acting in bad faith, the next time that person comes to the discussion, everybody else sees the standard signals of good-faith communication, and as such the person may be hesitant to treat Gleb as they would treat someone else who was clearly acting in bad faith. This is because they would be seen as unnecessarily harsh by people without the background experiences - as was seen multiple times in the original Facebook thread, when people (who did not have the past experience with Gleb) were confused by the harshness of the criticism, and criticised the tone of the conversation."

I do strongly agree with this. I had some very frustrating conversations around that thread.

undefined @ 2016-10-25T02:48 (+7)

Pretty much agree with you and shlevy here, except that the wasting hundreds of collective hours carefully checking that Gleb is acting in bad faith seems more like a waste to me.

If the EA community were primarily a community that functioned in person, it would be easier and more natural to deal with bad actors like Gleb; people could privately (in small conversations, then bigger ones, none of which involve Gleb) discuss and come to a consensus about his badness, that consensus could spread in other private smallish then bigger conversations none of which involve Gleb, and people could either ignore Gleb until he goes away, or just not invite him to stuff, or explicitly kick him out in some way.

But in a community that primarily functions online, where by default conversations are public and involve everyone, including Gleb, the above dynamic is a lot harder to sustain, and instead the default approach to ostracism is public ostracism, which people interested in charitable conversational norms understandably want to avoid. But just not having ostracism at all isn't a workable alternative; sometimes bad actors creep into your community and you need an immune system capable of rejecting them. In many online communities this takes the form of a process for banning people; I don't know how workable this would be for the EA community, since my impression is that it's spread out across several platforms.

undefined @ 2016-10-26T17:51 (+10)

Seems worth establishing the fact that bad actors exist, will try to join our community, and engage in this pattern of almost plausibly deniable shamelessly bad behavior. I think EAs often have a mental block around admitting that in most of the world, lying is a cheap and effective strategy for personal gain; I think we make wrong judgments because we're missing this key fact about how the world works. I think we should generalize from this incident, and having a clear record is helpful for doing so.

undefined @ 2016-10-25T08:16 (+3)

Yes! But... you said your opening line as though it disagreed somehow? I said:

it's important to realise that there are communities out there for whom Gleb would've been outed in months rather than years, and without the time of many top researchers in the community wasted.

undefined @ 2016-10-25T23:35 (+4)

I may be misinterpreting you here; you wrote

To re-iterate, it's delightful to be part of a community that responds to this sort of situation by spending ~100s of hours (collectively) and ~100k words (I'm counting the original Facebook thread as well as the post here) analysing the situation and producing a considered, charitable yet damning report.

and while I think this behavior is in some sense admirable, I think it is on net not delightful, and the huge waste of time it represents is bad on net except to the extent that it leads to better community norms around policing bad actors.

undefined @ 2016-10-25T23:41 (+3)

Yup, we are in agreement.

(I was just noting how sweet it was that we do this much more kindly than most other communities. It's totally not optimal though.)

undefined @ 2016-10-25T12:06 (+1)

I'd suggest that we should be more trusting that when someone in the community criticises someone else not in the community, they're doing it for good reasons. However, writing that out is almost self-refuting - that's what all insular communities are doing.

Yes, insofar communities do that, but typically in emotive and highly biased ways. EA at least has more constructive norms for how these things are discussed. It's not perfect, and it's not fast, but here I see people taking pains to be as fair-minded as they can be. (We achieve that to different degrees, but the effort is expected.)

Perhaps appointing a small group of moderators for the community whom we trust.

My System 1 doesn't like this. Giving this power to a group of people and suggesting that we accept their guidance... that feels cultish, and not very compatible with a community of critical thinkers.

undefined @ 2016-10-25T12:17 (+14)

Scientific departments have ethics boards. Good online communities (e.g. Hacker News) have moderators. Society as a whole has a justice part of governance, and other groups that check on the decisions made by the courts. Suggesting that it feels cult-y to outsource some of our community norm-enforcement (so as to save the community as a whole significant time input, and make the process more efficient and effective) is... I'm just confused every time someone calls something totally normal 'cult-y'.

undefined @ 2016-10-26T11:15 (+2)

I deliberately said "My System 1 doesn't like this." and "that feels cultish" – on an intuitive level, I feel uncomfortable, and I'm trying to work out why. I do see value in having effective gatekeepers.

I'm not even sure what it means to be "banned" from a movement consisting of multiple organisations and many individuals. It may be that if the process is clearly defined, and we know who is making the decision, on whose behalf, I'd be more comfortable with it.

undefined @ 2016-10-28T15:41 (+3)

Thanks for clarifying!

Just in case you're interested: I think the word 'cultish' is massively overloaded (with negative connotations) and mis-used. I'd also point out that saying that a statement is one's gut feeling isn't equivalent to saying one doesn't endorse the feeling, and so I felt pretty defensive when you suggested my idea was cultish and not compatible with our community.

I wrote this because I thought you might prefer to know the impacts of your comments rather than not hearing negative feedback. My apologies in advance if that was a false assumption.

undefined @ 2016-10-31T00:02 (+3)

Thanks – helpful feedback (and from Owen also). In hindsight I would probably have kept the word "cultish" while being much more explicit about not completely endorsing the feeling.

undefined @ 2016-10-28T17:18 (+1)

Something went wrong with the communication channel if you ended up feeling defensive.

However, despite generally agreeing with you about problems with the word "cultish", I actually think this is a reasonable use-case. It has a lot of connotations, and it was being reported that the description was triggering some of those connotations in the reader. That's useful information that it may be worth some effort to avoid it being perceived that way if the idea is pursued (your stack of examples makes it pretty clear that it is avoidable).

undefined @ 2016-10-24T15:24 (+21)

I think being too nice is a failure mode worth worrying about, and your points are well taken. On the other hand, it seems plausible to me that it does a more effective job of convincing the reader that Gleb is bad news precisely by demonstrating that this is the picture you get when all reasonable charity is extended.

undefined @ 2016-10-28T06:03 (+9)

Shlevy, I think I might actually agree with everything you said here with the exception of the characterization of Intentional Insights as a "con".

I can see the behavior on the outside very clearly. On the outside Gleb has said a list full of incorrect things.

On the inside, the picture is not so clear. What's going on inside his head?

If this is a con, what in the world does he want? He can't seem to make money off of this. Con artists have a tendency to do very, very quick things, with a very, very low amount of effort, hoping to gain some disproportionate reward. Gleb is doing the opposite. He has invested an enormous amount of time (not to mention a permanent Intentional Insights tattoo!) and (as far as I know) has been concerned about finances the whole time. He's not making a disproportionate amount of money off of this... and spreading rationality doesn't even look like one of those things which a con artist could quickly do for a disproportionate reward... so I am confused.

If I thought Intentional Insights was a con, I'd be right with you trying to make that more obvious to everyone... but I launched my con detector and that test was negative.

Maybe you use a different con detector. Maybe, to you, it is irrelevant whether Gleb is intentionally malicious or merely incompetent. Perhaps you would use the word "con" either way just as people use the word "troll" either way.

For the same reasons that we should face the fact that there's a major problem with the inaccuracies Intentional Insights outputs, I think we ought to label the problem we're seeing with Intentional Insights as accurately as possible.

Whether Gleb is incompetent or malicious is really important to me. If Gleb is doing this because of a learning disorder, I would really like to see more mercy. According to Wikipedia's page on psychological trauma, there are a lot of things about this post which Gleb may be experiencing as traumatic events. For instance: humiliation, rejection, and major loss. (https://en.wikipedia.org/wiki/Psychological_trauma)

As some kind of weird hybrid between a bleeding heart and a shrewd person, I can't justify anything but minimizing the brutality of a traumatic event for someone with a learning disorder, no matter how destructive it is. At the same time, I agree that ousting destructive people is a necessity if they won't or can't change, but I think in the case of an incompetent person, there are a lot of ways in which the community has been too brutal. In the event of a malicious con, we've been too charitable, and I'm guilty of this as well. If Gleb really is a con artist, we should be removing him as fast as possible. I just don't see strong evidence that the problem he has is intentional, nor does it even seem to be clearly differentiated from terrible social skills and general ignorance about marketing.

Our response is too brutal for someone with a learning disorder or other form of incompetence, and it's too charitable for a con artist. In order to move forward, I think perhaps we ought to stop and solve this disagreement.

Here's what's at stake: currently, I intend to advocate for an intervention*. If you convince me that he is a con artist, I will abandon this intent and instead do what you are doing. I'll help people see the con.

*(By intervention, I mean: encouraging everyone to tell Gleb we require him to shape up or ship out, and to negotiate things like what we mean by shape up and how we would like him to minimize risk while he is improving. If he has a learning disorder, a bit of extra support could go a long way *if* the specific problems are identified so the support can target them accurately. I suspect that Gleb needs to see a professional for a learning disorder assessment, especially for Asperger's.)

I'm open to being convinced that Intentional Insights actually does qualify as some type of con or intends net negative destructive behavior. I don't see it, but I'd like to synchronize perspectives, whether I "win" or "lose" the disagreement.

undefined @ 2016-10-28T07:21 (+23)

I don't think incompetent and malicious are the only two options (I wouldn't bet on either as the primary driver of Gleb's behavior), and I don't think they're mutually exclusive or binary.

Also, the main job of the EA community is not to assess Gleb maximally accurately at all costs. Regardless of his motives, he seems less impactful and more destructive than the average EA, and he improves less per unit feedback than the average EA. Improving Gleb is low on tractability, low on neglectedness, and low on importance. Spending more of our resources on him unfairly privileges him and betrays the world and forsakes the good we can do in it.

Views my own, not my employer's.

undefined @ 2016-10-30T15:39 (+5)

That was a truly excellent argument. Thank you.

undefined @ 2016-10-30T22:58 (+2)

Thanks Kathy!

undefined @ 2016-10-25T21:06 (+16)

In the original facebook thread I was highly critical of Intentional Insights. I have not read all the followup here yet, but I would like to note that after that thread the next "thing" I saw from Intentional Insights was this post about EA marketing. I thought that was a highly competent and interesting contribution to the EA community. All of the ongoing concerns about II may stand - but there are clearly a few people associated with the org who have valuable contributions to make to the future of the community.

undefined @ 2016-10-25T03:30 (+16)

The most embarrassing aspect of the exclusionary, witch hunt, no-due-diligence point of view which some people are advocating in the comments here is that it probably would have merited the early and permanent exclusion of the Singularity Institute/MIRI from the EA community. Holden wrote a blog on LessWrong saying that he didn't like their organization and didn't think they were worth funding. Some assorted complaints have been floating around the web for a long time complaining about them associating with neoreactionaries and about LessWrong being cultists as well as complaints about the way they communicate and write. There's been a few odd 'incidents' (if you can call them that) over the years between MIRI, LessWrong, and the rationalist sphere. It would be easy to jumble all of that together into some kind of meta-post documenting concerns, and there is certainly no shortage of people who are willing and able to write long impassioned posts expressing their feelings and saying that they want nothing to do with SIAI/MIRI and recommending others to adhere to that. We could have done that, lots of people would come out of the woodwork to add their own complaints, the conversation would reach critical mass, and boom - all of a sudden, half the steam behind AI safety goes down the tubes.

It's easy to find online communities today where people are mind-numbingly dismissive of anything AI-related due to a poorly-argued, critical-mass groupthink against everything LessWrong. Good thing that we're not one of them.

undefined @ 2016-10-25T15:35 (+13)

I agree that it's important that EA stay open to weird things and not exclude people solely for being low status. I see several key distinctions between early SI/early MIRI and Intentional Insights:

  • SI was cause focused, II a fundraising org. Causes can be argued on their merits. For fundraising, "people dislike you for no reason" is in and of itself evidence you are bad at fundraising and should stop.
  • I think this is an important general lesson. Right now "fundraising org" seems to be the default thing for people to start, but it's actually one of the hardest things to do right and has the worst consequences if it goes poorly. With the exception of local groups, I'd like to see the community norms shift to discourage inexperienced people from starting fundraising groups.
  • AFAIK, SI wasn't trying to use the credibility of the EA movement to bolster itself. Gleb is, both explicitly (by repeatedly and persistently listing endorsements he did not receive) and implicitly. As long as he is doing that, the proportionate response is criticizing him/distancing him from EA enough to cancel out the benefits.
  • The effective altruism name wasn't worth as much when MIRI was getting started. There was no point in faking an endorsement because no one had heard of us. Now that EA has some cachet with people outside the movement there exists the possibility of trying to exploit that cachet, and it makes sense for us to raise the bar on who gets to claim endorsement.
undefined @ 2016-10-25T05:00 (+13)

I don't think this comparison holds water. Briefly, I think SI/MIRI would have mostly attracted criticism for being weird in various ways. As far as I can tell, Gleb is not acting weird; he is acting normal in the sense that he's making normal moves in a game (called Promote-Your-Organization-At-All-Costs) that other people in the community don't want him playing, especially not in a way that implicates other EA orgs by association.

Whatever you think of that object-level point, an independent meta-level point: it's also possible that the EA movement excluding SI/MIRI at some point would have been a reasonable move in expectation. Any policy for deciding who to kick out necessarily runs the risk of both false positives and false negatives, and pointing out that a particular policy would have caused some false positive or false negative in the past is not a strong argument against it in isolation.

undefined @ 2016-10-25T05:58 (+1)

Briefly, I think SI/MIRI would have mostly attracted criticism for being weird in various ways.

They've attracted criticism for more substantial reasons; many academics didn't and still don't take them seriously because they have an unusual point of view. And other people believe that they are horrible people who are in between neoreactionary racists and a Silicon Valley conspiracy to take people's money. It's easy to pick up on something being a little off-putting and then get carried down the spiral of looking for and finding other problems. The original and underlying reason people have been pissed about InIn this entire time is that they are aesthetically displeased by their content. "It comes across as spammy and promotional". An obvious typical mind fallacy. If you can fall for that then you can fall for "Eliezer's writing style is winding and confusing."

it's also possible that the EA movement excluding SI/MIRI at some point would have been a reasonable move in expectation.

Highly implausible.

AI safety is a large issue. MIRI has done great work and has itself benefited tremendously from its involvement. Besides that, there have been many benefits to EA for aligning with rationalists more generally.

Any policy for deciding who to kick out necessarily runs the risk of both false positives and false negatives, and pointing out that a particular policy would have caused some false positive or false negative in the past is not a strong argument against it in isolation.

Yes, but people are taking this case to be a true positive that proves the rule, which is no better.

undefined @ 2016-10-27T13:27 (+7)

Some of the criticisms I've read of MIRI are so nasty that I hesitate to rehash them all here for fear of changing the subject and side tracking the conversation. I'll just say this:

MIRI has been accused of much worse stuff than this post is accusing Gleb of right now. Compared to that weird MIRI stuff, Gleb looks like a normal guy who is fumbling his way through marketing a startup. The weird stuff MIRI / Eliezer did is really bizarre. For just one example, there are places in The Sequences where Eliezer presented his particular beliefs as The Correct Beliefs. In the context of a marketing piece, that would be bad (albeit in a mundane way that we see often), but in the context of a document on how to think rationally, that's more like... egregious blasphemy. It's a good thing the guy counter-balanced whatever that behavior was with articles like "Screening Off Authority" and "Guardians of the Truth".

Do some searches for web marketing advice sometime, and you'll see that Gleb might have actually been following some kind of instructions in some of the cases listed above. Not the best instructions, mind you... but somebody's serious attempt to persuade you that some pretty weird stuff is the right thing to do. This is not exactly a science... it's not even psychology. We're talking about marketing. For instance, paying Facebook to promote things can result in problems... yet this is recommended by a really big company, Facebook. :/

There are a few complaints against him that stand out as a WTF... (Then again, if you're really scouring for problems, you're probably going to find the sorts of super embarrassing mistakes people only make when they're really exhausted or whatever. I don't know what to make of every single one of these examples yet.)

Anyway, MIRI / Eliezer can't claim stuff like "I was following some marketing instructions I read on the Internet somewhere," which, IMO, would explain a lot of this stuff that Gleb did - which is not to say I think copying him is an effective or ethical way of promoting things! The Eliezer stuff was, like, self-contradictory enough that it was weird to the point of being original. It took me forever to figure that guy out. There were several years where I simply had no cogent opinion on him.

The stuff Gleb is doing is just so commonly bad. It's not an excuse. I still want to see InIn shape up or ship out. I think EA can and should have higher standards than this. I have read and experienced a lot in the area of promoting things, and I know there are ways of persuading through making people think that don't bias them or mislead them, but by getting them more in touch with reality. I think it takes a really well thought out person to accomplish that because seeing reality is only the first step... then, you need to know how to deal with it, and you need to encourage the person to do something constructive with the knowledge as well. Sometimes bare information can leave people feeling pretty cynical, and it's not like we were all taught how to be creative and resourceful and lead ourselves in situations that are unexpectedly different from what we believed.

I really believe there are better ways to be memorable than making claims about how much attention you're getting. Providing questionable info of this type is certainly bad. The way I'm seeing it, wasting time on such uninspired attempts involves such a large quantity of lost potential that questionable info is almost silly by comparison. I feel like we're worried about a guy who says he has the best lemonade stand ever, but what we should be worried about is why he hasn't managed to move up to selling at the grocery store yet.

I can very clearly envision the difference between what Gleb has been doing, and specific awesome ways in which it is possible to promote rationality. I can't condemn Gleb as some sort of bad guy when what he's doing wrong betrays such deep ignorance about marketing. I feel like: surely, a true villain would have taken over the beverage aisle at the grocery store by now.

undefined @ 2016-10-25T04:27 (+12)

The most embarrassing aspect of the exclusionary, witch hunt, no-due-diligence point of view which some people are advocating in the comments here

I see insight in what Qiaochu wrote here:

If the EA community were primarily a community that functioned in person, it would be easier and more natural to deal with bad actors like Gleb; people could privately (in small conversations, then bigger ones, none of which involve Gleb) discuss and come to a consensus about his badness, that consensus could spread in other private smallish then bigger conversations none of which involve Gleb, and people could either ignore Gleb until he goes away, or just not invite him to stuff, or explicitly kick him out in some way.

But in a community that primarily functions online, where by default conversations are public and involve everyone, including Gleb, the above dynamic is a lot harder to sustain, and instead the default approach to ostracism is public ostracism, which people interested in charitable conversational norms understandably want to avoid. But just not having ostracism at all isn't a workable alternative; sometimes bad actors creep into your community and you need an immune system capable of rejecting them. In many online communites this takes the form of a process for banning people; I don't know how workable this would be for the EA community, since my impression is that it's spread out across several platforms.

Right now we don't have a procedure set up for formally deciding whether a particular person is a bad actor. If someone feels that another person is a bad actor, the only way to deal with the situation is informally. Since the community largely functions online, the discussion has a "witch hunt" character to it.

I think most people agree that bad actors exist, and we should have the capability to kick them out in principle (even if we don't want to use it in Gleb's particular case). But I agree that online discussions are not the best way to make these decisions. I've spent some time thinking about better alternatives, and I'll make a top-level post outlining my proposal if this comment gets at least +4.

Edit: Alternatively, for people who feel it should be possible to oust a person like Gleb with less effort, a formal procedure could streamline this kind of thing in the future.

CarlShulman @ 2016-10-26T18:15 (+13)

[ETA: a number of these comments are addressed to possible versions of this that John is not advocating, see his comment replying to mine.]

My attitude on this is rather negative, for several reasons:

  • The movement is diverse and there is no one to speak for all of it with authority, which is normal for intellectual and social movements
  • Individual fora have their moderation policies, individual organizations can choose who to affiliate with or how to authorize use of their trademarks, individuals can decide who to work with or donate to
  • There was no agreed-on course of action among the contributors to this document, let alone the wider EA community
  • Public discussion (including criticism) allows individual actors to make their own decisions
  • There are EAs collaborating with InIn on projects like secular Giving Games who report reaping significant benefits from that interaction, such as Jon Behar in the OP document; I don't think others are in a position to ask that they cut off such interactions if they find them valuable
  • I think the time costs of careful discussion and communication are important ones to pay for procedural justice and trust: I would be very uncomfortable with (and not willing to give blind trust to) a non-transparent condemnation from such a process, and I think it would reflect badly on those involved and the movement as a whole
  • If one wants to avoid heated online discussions, flame wars, and whatnot, they would be elicited by the outputs of the formal process (more so, if less transparent and careful, I think)
undefined @ 2016-10-27T05:56 (+5)

The movement is diverse and there is no one to speak for all of it with authority, which is normal for intellectual and social movements

But controversial decisions will still need to be made--about who to ban from the forum, say. As EA gets bigger, I see advantages to setting up some sort of due process (if only so the process can be improved over time) vs doing things in an ad hoc way.

There was no agreed-on course of action among the contributors to this document, let alone the wider EA community

Well, perhaps an official body would choose some kind of compromise action, such as what you did (making knowledge about Gleb's behavior public without doing anything else). I don't see why this is a compelling argument for an ad hoc approach.

Public discussion (including criticism) allows individual actors to make their own decisions

Without official means for dealing with bad actors, the only way to deal with them is by being a vigilante. The person who chooses to act as a vigilante will be the one who is the angriest about the actions of the original bad actor, and their response may not be proportionate. Anyone who sees someone else being a vigilante may respond with vigilante action of their own if they feel the first vigilante action was an overreach. The scenario I'm most concerned about is a spiral of vigilante action based on differing interpretations of events. A respected official body could prevent the commons from being burned in this way.

There are EAs collaborating with InIn on projects like secular Giving Games who report reaping significant benefits from that interaction, such as Jon Behar in the OP document; I don't think others are in a position to ask that they cut off such interactions if they find them valuable

I don't (currently) think it would be a good idea for an official body to make this kind of request. Actually, I think an official committee would be a good idea even if it technically had no authority at all. Just formalizing a role for respected EAs whose job it is to look in to these things seems to me like it could go a long way.

I think the time costs of careful discussion and communication are important ones to pay for procedural justice and trust: I would be very uncomfortable with (and not willing to give blind trust to) a non-transparent condemnation from such a process, and I think it would reflect badly on those involved and the movement as a whole

OK, let's make it transparent then :) The question here is formal vs ad hoc, not transparent vs opaque.

If one wants to avoid heated online discussions, flame wars, and whatnot, they would be elicited by the outputs of the formal process (more so, if less transparent and careful, I think)

If I see a long post on the EA forum that explains why someone I know is bad for the movement, I need to read the entire post to determine whether it was constructed in a careful & transparent way. If the person is a good friend, I might be tempted to skip reading the post and just make a negative judgement about its authors. If the post is written by people whose job is to do things carefully and transparently (people who will be fired if they do this badly), it's easier to accept the post's conclusions at face value.

CarlShulman @ 2016-10-27T06:40 (+8)

The person who chooses to act as a vigilante will be the one who is the angriest about the actions of the original bad actor, and their response may not be proportionate. Anyone who sees someone else being a vigilante may respond with vigilante action of their own if they feel the first vigilante action was an overreach. The scenario I'm most concerned about is a spiral of vigilante action based on differing interpretations of events. A respected official body could prevent the commons from being burned in this way.

This is a very good point. One reason I got involved in the OP was to offset some of this selection effect. On the other hand, I was also reluctant to involve EA institutions to avoid dragging them into it (I was not expecting Will MacAskill's post or the announcement by the EA Facebook group moderators, and mainly aiming at a summary of the findings for individuals). A respected institution may have an easier time in an individual case, but it may also lose some of its luster by getting involved in disputes.

Regarding your other points, I agree many of the things I worry about above (transparency, nonbinding recommendations, avoiding boycotts and overreach) can potentially be separated from official vs private/ad hoc. However a more official body could have more power to do the things I mention, so I don't think the issues are orthogonal.

undefined @ 2016-10-27T17:11 (+5)

Regarding your other points, I agree many of the things I worry about above (transparency, nonbinding recommendations, avoiding boycotts and overreach) can potentially be separated from official vs private/ad hoc. However a more official body could have more power to do the things I mention, so I don't think the issues are orthogonal.

True, but I suspect the worst case scenario for an official body is still less bad than the worst case scenario for vigilantism. Let's say we set up an Effective Altruism Association to be the governing body for effective altruism. Let's say it becomes apparent over time that the board of the Effective Altruism Association is abusing its powers. And let's say members of the board ignore pressure to step down, and there's nothing in the Association's charter that would allow us to fix this problem. Well, at that point, someone can set up a rival League of Effective Altruists, and people can vote with their feet & start attending League-sponsored events instead of Association-sponsored events. This sounds to me like an outcome that would be bad, but not catastrophic in the way spiraling vigilantism has been for communities demographically similar to ours devoted to programming, atheism, video games, science fiction, etc. If anything, I am more worried about the case where the Association's board is unable to do anything about vigilantism, or itself becomes the target of a hostile takeover by vigilantes.

I suspect a big cause of disagreement here is that in America at least, we've lost cultural memories about how best to organize ourselves.

When Tocqueville visited the United States in the 1830s, it was the Americans' propensity for civic association that most impressed him as the key to their unprecedented ability to make democracy work. "Americans of all ages, all stations in life, and all types of disposition," he observed, "are forever forming associations. There are not only commercial and industrial associations in which all take part, but others of a thousand different types--religious, moral, serious, futile, very general and very limited, immensely large and very minute... Nothing, in my view, deserves more attention than the intellectual and moral associations in America."

...

Within all educational categories, total associational membership declined significantly between 1967 and 1993. Among the college-educated, the average number of group memberships per person fell from 2.8 to 2.0 (a 26-percent decline); among high-school graduates, the number fell from 1.8 to 1.2 (32 percent); and among those with fewer than 12 years of education, the number fell from 1.4 to 1.1 (25 percent). In other words, at all educational (and hence social) levels of American society, and counting all sorts of group memberships, the average number of associational memberships has fallen by about a fourth over the last quarter-century.

From the essay Bowling Alone: America's Declining Social Capital (15K citations on Google Scholar). You can read the essay for info on big drops in participation for churches, unions, PTAs, and civic/fraternal organizations.

undefined @ 2016-10-25T05:53 (+2)

I don't think formal procedures are likely to be followed, and I don't think it's generally sensible to go to all the trouble of building an explicit policy to kick people out of EA. It's a terrible idea that contributes to the construction of a flawed social movement which obsessively cares about weird drama that, to those on the outside, looks silly. Outside view sanity check: which other social movements have a formal process for excluding people? None of them. Except maybe Scientology.

I'm not against online discussions on a structural level. I think they're fine. I'm against the policy of banding together, starting faction warfare, and demanding that other people refrain from associating with somebody.

undefined @ 2016-10-25T08:01 (+8)

I don't think formal procedures are likely to be followed

The impression I get from Jeff's post is that the people involved took great pains to be as reasonable as possible. They don't even issue recommendations for what to do in the body of the post--they just present observations. This after ~2000 edits over the course of more than two months. This makes me think they'd have been willing to go to the trouble of following a formal procedure. Especially if the procedure was streamlined enough that it took less time than what they actually did.

I don't think it's generally sensible to go to all the trouble of building an explicit policy to kick people out of EA

My recommendations are about how to formally resolve divisive disputes in general. If divisive disputes constitute existential threats to the movement, it might make sense to have a formal policy for resolving them, in the same way buildings have fire extinguishers despite the low rate of fires. Also, I took into account that my policy might be used rarely or never, and kept its maintenance cost as low as possible.

It's a terrible idea that contributes to the construction of a flawed social movement which obsessively cares about weird drama that, to those on the outside, looks silly.

Drama seems pretty universal--I don't think it can be wished away.

Outside view sanity check: which other social movements have a formal process for excluding people? None of them. Except maybe scientology.

There are a lot of other analogies a person could make: Organizations fire people. States imprison people. Online communities ban people. Everyone needs to deal with bad actors. If nothing else, it'd be nice to know when it's acceptable to ban a user from the EA forum, Facebook group, etc.

I'm not especially impressed with the reference class of social movements when it comes to doing good, and I'm not sure we should do a particular thing just because it's what other social movements do.

I keep seeing other communities implode due to divisive internet drama, and I'd rather this not happen to mine. I would at least like my community to find a new way to implode. I'd rather be an interesting case study for future generations than an uninteresting one.

I'm against the policy of banding together, starting faction warfare, and demanding that other people refrain from associating with somebody.

So what's the right way to take action, if you and your friends think someone is a bad actor who's harming your movement?

undefined @ 2016-10-25T13:22 (+1)

The impression I get from Jeff's post is that the people involved took great pains to be as reasonable as possible. They don't even issue recommendations for what to do in the body of the post--they just present observations. This after ~2000 edits over the course of more than two months. This makes me think they'd have been willing to go to the trouble of following a formal procedure.

I mean for the community as a whole, to say, "oh, look, our thought leaders decided to reject someone - ok, let's all shut them out."

Drama seems pretty universal--I don't think it can be wished away.

There's the normal kind of drama which is discussed and moved past, and the weird kind of drama like Roko's Basilisk which only becomes notable through obsessive overattention and collective self-consciousness. You can choose which one you want to have.

There are a lot of other analogies a person could make: Organizations fire people. States imprison people. Online communities ban people. Everyone needs to deal with bad actors. If nothing else, it'd be nice to know when it's acceptable to ban a user from the EA forum, Facebook group, etc

Those groups can make their own decisions. EA has no central authority. I moderate a group like that and there is no chance I'd ban someone just because of the sort of thing which is going on here, and certainly not merely because the high chancellor of the effective altruists told me to.

I'm not especially impressed with the reference class of social movements when it comes to doing good, and I'm not sure we should do a particular thing just because it's what other social movements do.

We're not following their lead on how to change the world. We're following their lead on how to treat other members of the community. That's something which is universal to social movements.

I keep seeing other communities implode due to divisive internet drama, and I'd rather this not happen to mine. I would at least like my community to find a new way to implode. I'd rather be an interesting case study for future generations than an uninteresting one.

Is this serious? EA is way more important than yet another obscure annal in Internet history.

So what's the right way to take action, if you and your friends think someone is a bad actor who's harming your movement?

Tell it to them. Talk about it to other people. Run my organizations the way I see fit.

undefined @ 2016-10-27T06:17 (+3)

There's the normal kind of drama which is discussed and moved past, and the weird kind of drama like Roko's Basilisk which only becomes notable through obsessive overattention and collective self-consciousness. You can choose which one you want to have.

I think the second kind of drama is more likely in the absence of a governing body. See the vigilante action paragraph in this comment of mine.

Is this serious? EA is way more important than yet another obscure annal in Internet history.

If the limiting factor for a movement like Effective Altruism is being able to coordinate people via the Internet, then coordinating people via the Internet ought to be a problem of EA interest.

I see your objections to my proposal as being fundamentally aesthetic. You don't like the idea of central authority, but not because of some particular reason why it would lead to bad consequences--it just doesn't appeal to you intuitively. Does that sound accurate?

undefined @ 2016-10-26T15:12 (+3)

Tell it to them. Talk about it to other people. Run my organizations the way I see fit.

That's what we did for a year+. The problem didn't go away.

undefined @ 2016-10-26T16:27 (+1)

Not much of a problem except the time you wasted going after it. Few people in the outside world knew about InIn; fewer still could have associated it with effective altruism. Even the people on Reddit who dug into his past and harassed him on his fake accounts thought he was just a self-promoting fraud and appeared to pick up nothing about altruism or charity.

I'm done arguing about this, but if you still want an ex post facto solution just to ward off imagined future Glebs, take a moment to go to people in the actual outside world, i.e. people who have experience with social movements outside of this circlejerk, and ask them "hey, I'm a member of a social movement based on charity and altruism. We had someone who associated with our community and did some shady things. So we'd like to create an official review board where Trusted Community Moderators can investigate the actions of people who take part in our community, and then decide whether or not to officially excommunicate them. Could you be so kind as to tell us if this is the awful idea that it sounds like? Thanks."

undefined @ 2016-10-27T06:35 (+5)

we'd like to create an official review board where Trusted Community Moderators can investigate the actions of people who take part in our community, and then decide whether or not to officially excommunicate them.

So here's your proposal for dealing with bad actors in a different comment:

Tell it to them. Talk about it to other people. Run my organizations the way I see fit.

You've found ways to characterize other proposals negatively without explaining how they would concretely lead to bad consequences. I'll note that I can do the same for this proposal--talking to them directly is "rude" and "confrontational", while talking about it to other people is "gossip" if not "backstabbing".

Dealing with bad actors is necessarily going to involve some kind of hostile action, and it's easy to characterize almost any hostile action negatively.

I think the way to approach this topic is to figure out the best way of doing things, then find the framing that will allow us to spend as few weirdness points as possible. I doubt this will be hard, as I don't think this is very weird. I lived in a large student co-op with just a 3-digit number of people, and we had formal meetings with motions and elections and yes, formal expulsions. The Society for Creative Anachronism is about dressing up and pretending you're living in medieval times. Here's their organizational handbook with bylaws. Check out section X, subsection C, subsection 3 where "Expulsion from the SCA" is discussed:

a. Expulsion precludes the individual from attendance or participation in any way, shape or form in any SCA activity, event, practice, or official gathering for any reason, at any time. Expulsions are temporary until the Board imposes a Revocation of Membership and Denial of Participation (R&D). This includes a ban on participation on officially recognized SCA social media (Facebook) sites, officially recognized SCA electronic email lists, and officially recognized SCA webpages.

b. For more details see the SCA Sanction Guide.

CarlShulman @ 2016-10-26T19:19 (+3)

Even the people on Reddit who dug into his past and harassed him on his fake accounts thought he was just a self-promoting fraud and appeared to pick up nothing about altruism or charity.

Looking at the links you shared it looks like these accounts weren't so much 'fake' but just new accounts from Gleb that were used for broadcasting/spamming Gleb's book on Reddit. That attracted criticism for the aggressive self-promotion (both by sending to so many reddits, and the self-promotional spin in the message).

The commenters call out angela_theresa for creating a Reddit account just to promote the book. She references an Amazon review, and there is an Amazon review from the same time period by an Angela Hodge (not an InIn contractor). My judgment is that this is a case of genuine appreciation of the book, perhaps encouraged by Gleb's requests for various actions to advance the book. In one of the reviews she mentions that she knows Gleb personally, but says she got a lot out of the book.

At least one other account was created to promote the book, but I haven't been able to determine whether it was an InIn affiliate. Gleb says he

didn't ask, I mean specifically that I did not in any way hint that they should do so or that doing so is a good idea 🙂 Again, I want to be clear that they might or might not have done so out of their own initiative

undefined @ 2016-10-26T20:37 (+4)

Ok, my goal was not to launch accusations. I just wanted to point out that even when people were saying this (they thought they were fake accounts) and looking into his personal info, they didn't say anything about altruism or charity, so the themes behind the content weren't apparent, meaning that there was little or no damage to EA. Because most of the content on the site and in the book isn't about charity or altruism, it's not clear how well this promotes people to actually donate, but it can't be very harmful.

CarlShulman @ 2016-10-26T20:58 (+3)

Right, I just wanted to diminish uncertainty about the topic and reduce speculation, since it had not been previously mentioned.

undefined @ 2016-10-25T08:23 (+3)

Kbog, I think your general mistake on this thread as a whole is assuming a binary between "either we act charitably to people or we ostracise people whenever members of the community feel like outgrouping them". Thus your straw-man characterisation of an

exclusionary, witch hunt, no-due-diligence point of view which some people are advocating in the comments here

Which was exactly what I disavowed at the bottom of my long comment here.

Examples of why your dichotomy is false: we could have very explicit and contained rules, such as "If you do X, Y or Z then you're out", and this would be different from the generic approach of "if anyone tries to outgroup them then support that effort". Or, if we feel that it is too hard to put into a clear list, perhaps we could outsource our decision-making to a small group of trusted 'community moderators' who were asked to make decisions about this sort of thing. In any case, these are two I just came up with; the landscape is more nuanced than you're accounting for.

undefined @ 2016-10-25T13:32 (+2)

To be more clear, I'm against both (a) witch hunts and (b) formal procedures of evicting people. The fact that one of these things can happen without the other does not eliminate the fact that both of them are still stupid on their own.

we could have very explicit and contained rules, such as "If you do X, Y or Z then you're out" and this would be different from the generic approach of "if anyone tries to outgroup them then support that effort".

As a counterexample to the dichotomy, sure. As something to be implemented... haha no. The more rules you make up the more argument there will be over what does or doesn't fall under those rules, what to do with bad actions outside the rules, etc.

Or if we feel that it is too hard to put into a clear list, perhaps we could outsource our decision-making to a small group of trusted 'community moderators'

Maybe you shouldn't outsource my decision about who is kosher to "trusted community moderators". Why are people not smart enough to figure it out on their own?

And is this supposed to save time, the hundreds of hours that people are bemoaning here? A formal group with formal procedures processing random complaints and documenting them every week takes up at least as much time.

undefined @ 2016-10-25T17:51 (+9)

The system of everyone keeping track of everything works ok in small communities, but we're so far above Dunbar's number that I don't think it's viable anymore for us. As you point out, a more formal process wouldn't have time for "processing random complaints and documenting them every week", so they'd need a process for screening out everything but the most serious problems.

undefined @ 2016-10-25T19:04 (+2)

The system of everyone keeping track of everything works ok in small communities, but we're so far above Dunbar's number that I don't think it's viable anymore for us.

Everyone doesn't have to keep track of everything. Everyone just needs to do what they can with their contacts and resources. Political parties are vastly larger than Dunbar's Number and they (usually) don't have formal committees designed to purge them of unwanted people. Same goes for just about every social movement that I can think of. Except for churches excommunicating people, of course.

This is the only time that there's been a problem like this where people started calling for a formal process. You have no idea if it actually represents a frequent phenomenon.

so they'd need a process for screening out everything but the most serious problems.

Make bureaucracy more efficient by adding more bureaucracy...

undefined @ 2016-10-27T06:25 (+3)

Political parties are vastly larger than Dunbar's Number and they (usually) don't have formal committees designed to purge them of unwanted people.

The Democrats have the Democratic National Committee, and the Republicans have the Republican National Committee.

undefined @ 2016-10-27T18:33 (+1)

Do they kick people out of the party?

More specifically, do they kick people out of 'conservatism' and 'liberalism'?

undefined @ 2016-10-27T18:44 (+3)

In the US, and elsewhere, they use incentives to keep people in line, such as withholding endorsements or party funds, which can lead to people losing their seat, thus effectively kicking them out of the party. See party whips for what this looks like in practice. Also, in parliamentary systems, you can often kick people out of the party directly, or at the very least take away their power and position.

undefined @ 2016-10-27T18:49 (+1)

Yes, if you're in charge of an organization or resources, you can allocate them and withhold them how you wish. Nothing I said is against that.

In parties and parliaments you can remove people from power. You can't remove people from associating with your movement.

The question here is whether a social movement and philosophy can have a bunch of representatives whose job it is to tell other people's organizations and other people's communities to exclude certain people.

undefined @ 2016-10-27T19:24 (+1)

In parties and parliaments you can remove people from power. You can't remove people from associating with your movement.

Your party leadership can publicly denounce a person and disinvite them from your party's convention. That amounts to about the same thing.

The question here is whether a social movement and philosophy can have a bunch of representatives whose job it is to tell other people's organizations and other people's communities to exclude certain people.

Quoting myself:

I don't (currently) think it would be a good idea for an official body to make this kind of request. Actually, I think an official committee would be a good idea even if it technically had no authority at all. Just formalizing a role for respected EAs whose job it is to look in to these things seems to me like it could go a long way.

undefined @ 2016-10-27T18:58 (+1)

Good question - not really sure, I just meant to directly answer that one question. That being said, social movements have, to varying degrees of success, managed to distance themselves from fringe subsets and problematic actors. How, exactly, one goes about doing this is unknown to me, but I'm sure that it's something that we could (and should) learn from leaders of other movements. Off the top of my head, the example that is most similar to our situation is the expulsion of Ralph Nader from the various movements and groups he was a part of after the Bush election.

undefined @ 2016-10-25T18:16 (+4)

Maybe you shouldn't outsource my decision about who is kosher to "trusted community moderators". Why are people not smart enough to figure it out on their own?

The issue in this case is not that he's in the EA community, but that he's trying to act as the EA community's representative to people outside the community who are not well placed to make that judgment themselves.

undefined @ 2016-10-24T15:16 (+11)

Here are some details on how this post came together: jefftk.com/p/details-behind-the-inin-document

undefined @ 2016-10-24T12:55 (+10)

Thank you - this represents a very conscientious follow-up to serious concerns and a very complicated discussion. I appreciate the presentation of considered evidence and the opportunity given for a) members of the community to pool their concerns and b) InIn to give their response.

undefined @ 2016-10-24T17:11 (+9)

Gleb, Intentional Insights board meeting, 9/21/16 at 22:05:

"We certainly are an EA meta-charity. We promote effective giving, broadly. We will just do less activities that will try to influence the EA movement itself. This would include things like writing articles for the EA forum about how to do more effective marketing. We will still do some of that, but to a lesser extent because people are right now triggered about Intentional Insights. There's a personalization of hostility associated with Intentional Insights, so we want to decrease some of our visibility in central EA forums, while still doing effective altruism. We are still an effective altruist meta-charity. So focusing more on promoting effective giving to a broad audience."

(https://www.youtube.com/watch?v=WbBqQzM7Rto)

CarlShulman @ 2016-10-24T23:42 (+12)

See 53:10-57:30 for discussion of social media.

A questioner asks about the concerns raised about InIn's social media presence. Tsipursky gives the raw numbers for social media including Facebook, Twitter, and Pinterest. He admits to the presence of clickfarms in Facebook likes (although not the massive scale), but denies problems for Twitter and Pinterest while presenting them as good news about social media impact.

He conveys this by saying that the precise mechanism in Facebook is not known to apply to the other channels, failing to mention the evidence regarding them. There is even an exchange with Agnes Vishnevkin about how great it is to have so many Pinterest followers, since there are more women on Pinterest.

This meeting took place Sept 21st, but Tsipursky had been informed about the Twitter and Pinterest problems (lack of engagement, InIn following thousands of people, etc) discussed in the doc in August. He only addressed the Facebook problem mentioned by the questioner, while sweeping problems with the other channels under the rug and strongly implying they were fine.

23:50-25:40 A questioner asks about the controversy with InIn and the EA movement. It is said a few existing and potential donors/pledges withdrew from supporting InIn after the controversy. Also Tsipursky and Vishnevkin say that 2 or 3 people at EA Global had considered 4-figure donations to InIn, and these may have fallen through in light of the subsequent revelations and discussion.

undefined @ 2016-10-25T13:00 (+8)

Gleb's problems seem due to important differences in social status instincts. For example, Eliezer once wrote that he doesn't experience the "status smackdown emotions" that other people experience, but he didn't realize it until a lot of people complained that his Harry Potter character comes across as insufferably arrogant to them. Readers wanted to smack down his Harry Potter character but this possibility did not occur to Eliezer at the time. So, Eliezer could not have written a Harry Potter character that people did not want to smack down.

I suspect that, for similar reasons, Gleb did not expect to see a large number of complaints of this nature. He might be having difficulty modeling other people's minds regarding status, so he might find it difficult to relate to the people who have complained.

Some with social status instinct differences might be described as "status blind". They might not notice status messages at all, they might not make clear distinctions between different statuses, or they make such detailed distinctions that it becomes impossible to organize the statuses into a hierarchy. This very detailed approach has effects that are totally unlike social status as most people seem to experience it.

Additionally, someone who is status blind might have a very blurry emotional experience of statuses, or they might feel nothing at all. That is to say, status may not feel important to someone who is status blind. Richard Feynman wrote that he "never knows who he is talking to," and this resulted in him starting arguments with geniuses and famous people. Fortunately for Feynman, he was bright enough that he was able to hold his own, and maybe it didn't seem too out of place to others for him to behave that way. I don't know if this example from Feynman is some form of status blindness, but I hope it makes it easier to imagine what status blindness might feel like for someone. For some, I think status blindness feels like always being of equal status no matter who you're talking to.

On many occasions, I have noticed that Gleb didn't seem to mind public feedback. This is very unusual. That can certainly be a strength, but is part of a double-edged reputation sword. Most people who want feedback get an anonymous form so they can receive it in private. This prevents other people from reading things that make them look bad. Things like this cause me to suspect that, for Gleb, status messages do not have an emotional impact.

For the same reasons, when Gleb makes a status claim, he may not realize it will feel very important to others.

If I am correct that Gleb has a very different experience of social status, this would make promotion very hard for Gleb. It could lead to an outward appearance a little bit similar to Eliezer's "Arrogance Problem" as described by Luke Muehlhauser. When chatting, Gleb doesn't come across as an arrogant person, but some of his promotional materials do have an element of that. It's mainly when he is trying to promote InIn that I see things really standing out that seem due to differences in status instincts.

I'm sure that nobody here intends to shame Gleb for inherent differences that he may have and I'm sure nobody intends to behave like an ableist. It seems like what's going on with these group discussions is mainly due to inferential distance. People didn't understand Gleb and Gleb didn't really understand others because it's complicated and nobody had insight into what the difference is.

I hypothesize that what Gleb needs most is a few good, detailed explanations about how other people perceive statuses. He also needs to know what specifically he can do to "speak the language of status" to effectively communicate, given the way others are going to interpret him. This would help him communicate promotional messages in a way that a broad audience will find both accurate and persuasive, despite the differences in social status experiences. I believe it is very important to Gleb to be able to present Intentional Insights accurately and effectively. To succeed at that, I think Gleb needs to become much more aware of everything having to do with social statuses and how they are perceived by others.

Fortunately Gleb does take feedback. I think he will improve if he gets explanations that help him really understand the problem and what the solution looks like. I can't be sure what's going on inside of Gleb, of course. I'm not in his head, but I would like to suggest that we all try to be careful and make good distinctions between ignorance and malice.

undefined @ 2016-10-26T19:58 (+13)

I see a lot of examples of people investing a lot of energy giving Gleb feedback to no result. What do you think should be done differently that would lead to a different result?

I don't want to shame anyone for things they can't control, but if Gleb does not have the abilities that are necessary for outreach and fundraising, it is correct for him to not do outreach and fundraising. This is in some sense discrimination based on ability, but calling it "behaving like an ableist" seems like a really bad framing to me. First, it frames it as an issue of identity rather than individual actions. It would be more helpful to say "expecting Gleb to X unfairly discriminates on ability" than "expecting X is behaving like an ableist".

Second, ableist is a vague word that covers "judging moral worth based on ability", "discrimination based on lack of abilities that have nothing to do with the question at hand", and "different abilities lead to different outcomes". If Gleb doesn't have the abilities to succeed in his chosen field, that is very sad. I mourn for the things I would like to do but lack the ability for. But that does not change the outcome of his actions.

undefined @ 2016-10-27T03:00 (+2)

You have a great point that I agree with: if a person is incompetent at a particular task, they should not be doing that particular task (or should learn first rather than making a mess). IMO, Gleb should not write his own promotional materials himself and should not be the decision maker regarding methods of promotion (or he should invest the time to learn to do it well first). However, in my view, what Gleb does at Intentional Insights is not merely promotion. That is just the most visible thing that Gleb does. What Gleb actually does at InIn includes a lot of uncommon and valuable abilities like:

Gleb has a really intense level of dedication to the cause of spreading rationality. Gleb is brave enough to stick his neck out and take a risk while most people are terrified just to speak in front of an audience (though I believe someone else ought to write his speeches; delegating speech writing is common anyway). He is also taking large risks financially in order to make InIn happen, and not everyone can do that. Gleb cares a lot about helping the world and being kind to others and is very dedicated to that. He is educated and knowledgeable as a professor and as a rationalist, though I realize this doesn't show very well in the articles written by some of his writers. In his own articles, the quality is much higher. So, I believe his main quality problem is not that he doesn't understand quality but that his awkward promotion behaviors are repelling the good writers and/or attracting poor ones, so that he is left trying to make the best of it. I've actually seen this repelling effect happening first hand. I believe that if he proved that Intentional Insights can do promotion well, good writers would want the benefit of being promoted by InIn.

Most importantly, Gleb actually wants the truth, while some "rationalists" are motivated by other things (ego, status, loving to argue, wanting to hang out with smart people, etc.), and so cannot actually practice rationality, nor do such people have any hope of ever spreading rationality. Spreading rationality is ridiculously hard, and it's not something that most dedicated and reality-minded rationalists would do well right away. Someone like Gleb at least has a chance because his motives are in the right place. That is both mission critical for the cause of spreading rationality, and it's not common enough.

I think Gleb could pretty easily upgrade his leadership style to play to his strengths, and then learn enough about things like promotion to delegate what he is weak at effectively. All the successful leaders I've gotten to know are ignorant about a variety of things their organizations do, but delegate those things well. This works surprisingly well. I've seen delegation compensate for some truly hideous areas of incompetence, so I regard delegation as a very powerful strategy. I believe Gleb can learn to use delegation as a sort of reasonable accommodation for the issues that result from social status instinct differences.

Why hasn't Gleb seemed to update on this yet? He is an updater - I've seen it. Maybe you didn't know this, but Gleb has already begun delegating some of the promotional decisions.

I think what he needs to make delegation successful is a better understanding of promotion. Part of the problem may be that "the apple doesn't fall far from the tree", so some of the people that Gleb has attracted and chosen to delegate the promotional decisions to aren't much better at promotion than Gleb is.

The size of the inferential distance in this area is very large and it wasn't obvious to anyone how to explain across the distance before. I believe that what I wrote in the comment we're responding to is an insightful enough foundation of an explanation that Gleb, myself, and others can build upon it to help Gleb become informed enough to succeed at delegating promotional tasks to skilled people.

It's not our responsibility to educate him, of course, but I think there are enough people who are willing enough to do that, even though it takes time. I think Gleb is willing enough to spend the time learning. I think that this approach of crossing the inferential distance is worth testing to see whether it succeeds.

Additionally, I'm happy to document my own attempts at explaining to Gleb, and explaining Gleb to others, by placing these explanations here on the forum. Because I am documenting all of this, others in the EA movement with social status instinct differences will have an opportunity to find information which will assist them with self-improvement. Therefore, my efforts, so long as I document them here, are much more valuable than just helping Gleb.

Even if I test my belief that we can cross the distance with Gleb, and my attempt fails, that test result is still valuable information!

undefined @ 2016-10-25T23:46 (+8)

I think you're doing the thing shlevy described about being way too charitable to Gleb here. Outside view, the simplest hypothesis that explains essentially everything observed in the original post is that Gleb is an aggressive self-promoter who takes advantage of EA conversational norms to milk the EA community for money and attention.

It might be useful to reflect a little on what being manipulated feels like from the inside. An analogous dynamic in a relationship might be Alice trying very hard to understand why Bob sometimes behaves in ways that makes her uncomfortable, hypothesizing that maybe it's because Bob had a difficult childhood and finds it hard to get close to people... all the while ignoring that outside view, the simplest hypothesis that explains all of Bob's behavior is that he is manipulating her into giving him sex and affection. It's in some sense admirable for Alice to try to be charitable about Bob's behavior, but at some point 1) Alice is incentivizing terrible behavior on Bob's part and 2) the personal cost to Alice of putting up with Bob's shit is terrible and she shouldn't have to pay it.

undefined @ 2016-10-26T21:40 (+17)

I think Kathy's perspective is probably overly optimistic, and yours is probably overly pessimistic, Qiaochu. There are a lot of grey-area options in between being a scrupulously honest and responsive-to-criticism altruist who just has a poor model of status dynamics, and being an "aggressive self-promoter" who just wants "money and attention". If I were forced to guess, I'd guess what's probably going on is some thought process like:

  1. "I'm convinced that EA outreach has massive potential upside if done well enough, and minimal downside even if done poorly."

  2. "I think I have a lot of good outreach skills and know-how, and while I'm not perfect, I'm sufficiently good at 'updating' and accepting criticism that I'm likely to improve a lot over time."

  3. "Therefore InIn's long-run value is huge no matter how many small hiccups there are at the moment."

  4. "The upside is so large and the need so great that some amount of dishonesty is justified for the greater good. Or, if not dishonesty: emphasizing the good over the bad; not always being fully forthcoming; etc. Not being too stringent about which exact means you use, as long as you aren't literally injuring anyone and as long as the ends are sufficiently good."

All of these claims are questionable in this case: the upside of EA outreach may depend a lot on who we're reaching out to and how; the downside may be substantial (e.g., at least some people have reported thinking EA was terrible because they thought InIn represented it); outreach and updating skills are both lacking; and playing fast and loose with the facts "for the greater good" is a terrible long-run heuristic to follow even if it really is sometimes a good idea from a myopic utility-maximizing perspective. The problem is compounded if not being fully forthcoming with others makes it progressively harder to see the whole truth oneself.

undefined @ 2016-10-27T04:48 (+4)

I agree with nearly all of this and I'm glad to see that you described these things so clearly! The behavior I keep observing in people with social status instinct differences actually matches the four thought patterns you described pretty well (written out below). My more specific explanation is that Gleb models minds differently when status is involved, so does not guess the same consequences that we do, and because he fails to see the consequences, he cannot total up the potential damage. So, he ends up underestimating the risk and makes different decisions from people who estimate the risk as being much higher. I explained why I chose this explanation from the others with Occam's razor (some of the others are in my written out response to your numbered thoughts), described what I think would solve this problem in a testable prediction and linked to the comment where my pessimism is located. I hope my solution idea, my supports for my beliefs and my pessimism link explain my view better because I think there is hope for the many people in our social network who have issues similar to what we're seeing with Gleb. This could be valuable, so I really would like to test it. :)

Occam's razor:

It's possible that each of your four points has a completely different cause from the others (I offered a few, Qiaochu offered a few). However, my explanation that Gleb underestimates reputation issues due to social status instinct differences makes fewer assumptions than that because it explains all four at once. (Explained in "My take on each of your 4 points" below.)

It's possible that Qiaochu_Yuan is correct that Gleb is an aggressive self-promoter, with an intent to take advantage of EA conversational norms and a goal of milking the EA community for money and attention, and that Gleb intends to be manipulative. Other information I have about Gleb does not match this. He sacrifices a lot of money and financial security for InIn, so if he were motivated by greed, that would be surprising. He is doing charity work, so he seems less likely to have the motivations of a selfish jerk like the one Qiaochu describes. Gleb hates doing fundraising work, which supports my belief that he has a skill-related problem more than it supports Qiaochu's belief that he wants to milk people for money.

Testable Prediction:

I find that Occam's razor helps me select explanations upon which I can build hypotheses that end up testing positive, so I'll present a hypothesis and turn it into a testable prediction.

If my hypothesis is correct, then Gleb would have the chance to succeed if he heard enough descriptions specifying how others go about modeling other people's minds when status is involved, what consequences they guess will happen if specific reputations are applied to InIn, and what quantity of negative/positive impact each specific reputation would result in. To turn it into a testable prediction: if Gleb received this information on every promotion-related idea he was seriously considering for the next three months, I think he'd learn enough to delegate successfully. The changes we'd see are that people would no longer complain about InIn and also that InIn would attract good people who were not interested in volunteering there before.

To prevent disaster during the 3 month period of time, perhaps InIn could take a break from most or all promotion type work, including publishing most/all articles.

My Pessimism Is Located Here:

I can see how I came across as overly optimistic in the comment Qiaochu_Yuan was replying to. My first comment on this post did a much better job of summarizing my overall take on the situation than that one. That one was only intended to explain a much more specific area of thoughts than my overall perspective. I gave Qiaochu a quick sample of my pessimism here:

http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8qt

My take on each of your 4 points:

1.) "I'm convinced that EA outreach has massive potential upside if done well enough, and minimal downside even if done poorly."

My take: People with different social status instincts can have a tendency to drastically underestimate the reputation damage that can be done if outreach is low quality. I think anyone who underestimates the downsides enough would be likely to end up thinking the way you describe in 1.

2.) "I think I have a lot of good outreach skills and know-how, and while I'm not perfect, I'm sufficiently good at 'updating' and accepting criticism that I'm likely to improve a lot over time."

My take: If Gleb believes he is good enough at outreach for now, then this could be the Dunning-Kruger effect, anosognosia, or underestimating the negative impact his imperfections are having. Any of these three reasons would be likely to cause a person to think their skill level is sufficient for now and/or easy enough to improve, when it is not.

3.) "Therefore InIn's long-run value is huge no matter how many small hiccups there are at the moment."

My take: I believe InIn's long-run value will be small or negative if the impacts of reputation risks continue to be underestimated. I think it is unfortunately far too likely that InIn will only end up producing important problems. These may include causing people to feel averse to rationality, confusing people about effective altruism, or drawing the wrong people into the EA movement. The risk of counter-productive results has been far too high for me to offer InIn anything other than things which could help reduce the risk of such problems (like feedback). However, the reason I think InIn's long-run value is likely to be low or negative is because I am not underestimating the impact of InIn's reputation problems the way Gleb is. You and I may be having something like hindsight bias or illusion of transparency here. I think anyone who has a pattern of underestimating reputation problems would be pretty likely to end up believing 3.

4.) "The upside is so large and the need so great that some amount of dishonesty is justified for the greater good. Or, if not dishonesty: emphasizing the good over the bad; not always being fully forthcoming; etc. Not being too stringent about which exact means you use, as long as you aren't literally injuring anyone and as long as the ends are sufficiently good."

My take: I suspect that you probably do not expect Gleb to be deontological about this or use virtue ethics or anything. Instead, I suspect that you would probably require him to meet a much higher standard with his trade-off decisions. To you and me, the negative reputation impact of the behavior you describe in 4 seems large. My reaction to this is to automatically model other people's minds, guess some consequences for this dishonest behavior, and feel disgust. One guess is that people may feel suspicion toward Intentional Insights and regard their rationality teachings with skepticism. That alone could toast all of the value of the organization. Therefore, it is a major reputation disaster which would need to be rectified in a satisfactory manner before we can believe InIn will have a positive impact. Probably, we need to overcome the mind projection fallacy to see why Gleb would think this way. My model of Gleb says the problem is that he models other people differently from the way I do when status is involved, does not guess the same consequences of reputation problems, and this is how he ends up underestimating the impact of reputation disasters. Underestimating the negative impact of dishonesty would, of course, result in Gleb choosing different risk vs. reward trade-offs than we would.

undefined @ 2016-10-27T03:23 (+5)

I am actually in favor of a shape up or ship out policy with stuff like this. I replied to Gregory_Lewis with: "I strongly relate to your concerns about the damage that could be done if InIn does not improve. I have severely limited my own involvement with InIn because of the same things you describe. My largest time contribution by far has been in giving InIn feedback about reputation problems and general quality. A while back, I felt demoralized with the problems, myself, and decided to focus more on other things instead. That Gleb is getting so much attention for these problems right now has potential to be constructive." ... "Perhaps I didn't get the memo, but I don't think we've tried organizing in order to demand specific constructive actions first before talking about shutting down Intentional Insights and/or driving Gleb out of the EA movement."

(Perhaps you didn't read all of my comments because this thread has too many to read but that one is located here: http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8o8)

One of the main reasons I have hope is because I've given this specific class of problem, social status instinct differences, a lot of thought. I have seen people improve. I think I am able to explain enough to Gleb to help get him on the right track. I have decided to give it a shot. We'll see if it works.

undefined @ 2016-10-24T16:51 (+5)

EDIT: Comment here was about a video by InIn, where I incorrectly speculated that they might've misused trademarks to signal affiliation with several other EA orgs. At least one of those orgs has confirmed that they did review the video prior to publication, so in fact there was not an issue. I apologize; it was wrong to speculate about that when it wasn't true, and without adequately investigating first.

undefined @ 2016-10-24T17:02 (+7)

I'm going to guess that none of GW, TLYCS, ACE, or GWWC worked with InIn on this video, saw it before it was published, or consented to the use of their trademarks in it.

The video description does say "All the organizations involved in the video reviewed the script and provided a high-resolution copy of their logo. Their collaboration in the production of this video does not imply their specific support for any other organizations involved in the video."

undefined @ 2016-10-24T17:13 (+8)

You're right, I missed that. I'll edit the parent post to fix the error.

(Given the history, I'm curious to find out what "reviewed the script and provided a high-resolution copy of their logo" means, and in particular whether they saw the entire script, and therefore knew they were being featured next to InIn, or whether they only reviewed the portion that was about themselves.)

NAS @ 2016-10-24T18:13 (+11)

Thanks for this. I volunteer for The Life You Can Save and I am checking in on this for the organization. I will get back to you shortly.

NAS @ 2016-10-25T21:57 (+12)

An update from The Life You Can Save: we saw and approved this particular video for publication. We did not check with other non-profits as we assumed that was not our responsibility.

Hope that helps.

CarlShulman @ 2016-10-25T19:14 (+6)

Jim, in light of the statement in the video description I think you should edit this post further to reduce snark based on a questionable hypothesis (and put the edits on top). I think this is also a good example of the value of a careful and cautious approach to these things.

Also, while the GiveWell pronunciation is not the one usually used by GiveWell staff, pronouncing the words separately actually makes it easier to understand.

undefined @ 2016-10-25T19:59 (+3)

If the organizations concerned give permission, I am happy to share documentary evidence in my email of them reviewing the script and giving access to their high-quality logo images. I am also happy to share evidence of me running the final video by them and giving them an opportunity to comment on the wording of the description below the video, which some did to help optimize the description to suit their preferences. I would need permission from the orgs before sharing such email evidence, of course.

CarlShulman @ 2016-10-25T20:57 (+4)

I am confident this is true.

And at least some of the orgs have been contacted (see Neela's comment) and have the opportunity to disclaim if they wish. [ETA: and have said this was true in their own case, see Neela's second comment.]

undefined @ 2016-10-27T13:52 (+4)

I'm half wondering how much of the upset was influenced by a general suspicion of, or aversion to, advertising and persuasion in general.

From one perspective, it's almost as if Gleb used to be one of the "advertising/persuasion is icky" people, and decided to bite the bullet and just do this thing, even if it seemed whacked out and icky...

At first I thought maybe part of the problem was Gleb didn't have any vision of how it could be done better. Now, I think it might actually be part of a systemic problem I keep noticing. Our social network generally does not have a clear vision of how it could be done better.

How many of us can easily think of specific strategies to promote InIn that sit well with all of our ethical standards and effectiveness criteria?

If a lot of people here begin with the belief that promotion is either icky or ineffective, we have set ourselves up for failure. This may encourage us to behave as if one either needs to accept being ineffective or needs to allow oneself to be icky ... which may result in choosing whichever things appear to be the icky-effective ones.

I think effective altruism can have both ethics and effectiveness at the same time. I do not believe there is actually a trade-off where choosing one necessarily must sacrifice the other. I believe there are probably even ways where one can enhance and build on the other.

I keep thinking that it would really benefit the whole movement if more people became more aware about what sorts of things result in disasters and how to promote things well. This is another way that such awareness could be beneficial.

undefined @ 2016-10-27T14:38 (+2)

Huh, this is a good point. Having a clear sense of what to do with advertising (both within the community and without) would be really helpful.

undefined @ 2016-10-27T08:33 (+4)

In 5.3. Twitter:

The question asked of Gleb is "How many of those are payed [sic] and how many organic?"

I double checked and some Internet sources define the term "organic" as "unpaid". Following other accounts that will, in turn, follow your account is not the same thing as giving people money to follow you. I understand that this question was intended to inquire about how many Twitter followers actually genuinely want to follow the Intentional Insights account. This is a perfectly valid question.

What I'm saying is that the 5.3 Twitter section can be misinterpreted. People might think it means "Gleb was asked how many real followers he had and he misled the person," when what really happened looks to me like Gleb was asked how many of his followers he paid money to in exchange for their follow.

If the 5.3 section used different wording / presentation, I think it would depict the situation more accurately.

I appreciate the huge amount of work it must have taken to put this post together. Nothing is perfect, and it's hard to edit out every single flaw in something this long.

undefined @ 2016-10-28T06:18 (+3)

My stance is currently that Gleb most likely has a learning disorder (perhaps he is on the spectrum) and is also ignorant about marketing, resulting in a low skill level with promotion. Some people here are claiming things that make it seem like they believe Gleb intends to do something bad, like a con. It's also possible Gleb was following marketing instructions to the letter which were written by people who are less scrupulous than most EAs (perhaps because he thought it was necessary to follow such instructions to be effective). I wouldn't be surprised if Gleb perceived what he was doing as "white lies" (thinking that there would be a strong net positive impact). It's also possible that some of these were ordinary mistakes (though probably not all of them because there are a lot).

I'd like to discover why people believe things like "this is a con" and see whether I change my mind or not. Anyone up for that?

undefined @ 2016-10-28T16:34 (+13)

I don't care if it is intentionally a con or not. Given that cons exist, the EA community needs an immune system that will reject them. The immune system has to respond to behavior, not intentions, because behavior is all we can see, and because good intentions are not protection from the effects of behavior.

I no longer believe things Gleb says. In the Facebook thread he made numerous statements that turned out to be fundamentally misleading. Maybe he wasn't intentionally lying; I don't know, I'm not psychic. But the immune system needs to reject people when the things they say turn out to be consistently misleading and a certain number of attempts to correct fail.

I don't think everyone needs to draw the line in the same place, I approve of people helping others after some people have given up on them as a category, even if I think it's not going to work in this case. But before you invest, I encourage you to write out what would make you give up. It can't be "he admits he's a scam artist", because scam artists won't do that, and because that may not be the problem. What amount of work, lack of improvement from him, and negative effects from his work and interactions would convince you helping was no longer worth your time?

undefined @ 2016-10-29T11:25 (+2)

These are some really strong arguments, Elizabeth. This has a good chance to change my mind. I don't know whether I agree or disagree with you yet because I prefer to sleep on it when I might update about something important (certain processing tasks happen during sleep). I do know that you have made me think. It looks like the crux of a disagreement, if we have one, would be between one or both of the first two arguments vs. the third argument:

1.) EA needs a set of rules which cannot be gamed by con artists.

2.) EA needs a set of rules which prevent us from being seen as affiliated with con artists.

vs.

3.) Let's not ban people and organizations who have good intentions.

A possible compromise between people on different sides would be:

Previously, there had been no rule about this. (Correct me if I'm wrong about this!) Therefore, we cannot say InIn had broken any rule. Let's make a rule to limit dishonesty and misleading mistakes to a certain number in a certain time period / number of promotional pieces / volunteers / whatever. *

If InIn breaks the new rule after it is made, then we'll both agree they should be banned.

If you think they should be banned right now, whether there was an existing rule or not, please tell me why.

* Specifying a time period or whatever would prevent discrimination against the oldest, most prolific, or largest organizations simply because they made a greater total number of mistakes due to having a greater volume of output.

The ratio between mistakes and output seems really important to me. Thirty mistakes in ninety articles is really egregious because that's a third. Three mistakes in three hundred articles is only 1%, which is about as close to perfection as one can expect humans to get.

Comparing 1 / 3 vs. 1 / 100 is comparing apples to oranges.

I'm not sure what the best limit is, but I hope you can see why I think this is an important factor. Maybe this was obvious to everyone who may read this comment. If so, I apologize for over-explaining!

undefined @ 2016-10-29T18:04 (+6)

I have bunch of different unorganized thoughts on this.

One, the absolute number is obviously the incorrect thing to use. Ratio is an improvement, but I feel it loses a lot of information. "Better wrong than vague" is a valuable community norm, and how people respond to criticism and new information is more important than whether they were initially correct. It also matters how public and formal the statement was – an article published in a mainstream publication is different than spitballing on Tumblr.

I'm unsure what you mean by "ban". There is no governing body or defined EA group. There are people clustering around particular things. I think banning him from the FB group should be based on the expected quality of his contribution to the FB group, incorporating information from his writing elsewhere. Whether people give him money should depend on their judgement about how well the money will be used. Whether he attends or speaks at EAG should be based on his expected contribution. None of these are independent, but they can have different answers.

I don't think any hard and fast rule would work, even if there was a body to choose and enforce it, because anything can be gamed.

What I want is for people to feel free to make mistakes, and other people to feel free to express concerns, and for proportionate responses to occur if the concerns aren't addressed. I think immune system is exactly the right metaphor. If a foreign particle enters your body, a lot of different immune molecules inspect it. Most will pass it by. Maybe one or two notice a concern. They attach to it and alert other immune molecules that they should maybe be concerned. This may go nowhere, or it may cause a cascading reaction targeting the specific foreign particle. If a lot of foreign particles show up, you may get an organ-wide reaction (runny nose) or a whole-body one (fever). The system coordinates without a coordinator.

Every time an individual talked to Gleb privately (which I'm told happened a lot), that was the first bout of the immune system. Then people complained publicly about specific things in specific posts here, on LessWrong, or on FB; that was the next step. I view the massive Facebook thread and public letter as system-wide responses, necessary only because he did not adjust his behavior after the smaller steps. (Yes, he said he would, and yes, small things changed in the moment, but he kept making the same mistakes.) Even now, I don't think you should be "banned" from helping him, if you're making an informed choice. You're an individual and you get to decide where your energy goes.

I do want to see changes in our immune system going forward. There is something of a halo effect around the big organizations, and I would like to see them criticized more often, and be more responsive to that criticism. Ben Hoffman's series on GiveWell is exactly the kind of thing we need more of. I'd also like to see us be less rigorous in evaluating very new organizations, because it discourages people from trying new things. I've been guilty of this- I was pretty hard on Charity Science originally, and I still don't think their fundraising was particularly effective, but they grew into Charity Entrepreneurship, which looks incredible.

I don't think the consequences of Gleb's actions should wait until there is a formal rule and he has had sufficient time to shoot himself in the foot, for a lot of reasons. One, I don't think a formal rule and enforcement is possible. Two, I think the information he has been receiving for over a year should have been sufficient to produce acceptable behavior, so the chances he actually improves are quite small. Three, I think he is doing harm now, and I want to reduce that as quickly as possible.

I realize the lack of hard and fast rule is harder for some people than for others, e.g. people on the autism spectrum. That's sad and unfair and I wish it weren't true. But as a community we're objectively very welcoming to people on the spectrum, far more so than most, and in this particular case I think the costs of being more accommodating would outweigh the benefits.

undefined @ 2016-10-30T11:04 (+3)

I'm unsure what you mean by "ban". There is no governing body or defined EA group.

There isn't currently one, but Will is proposing setting up a panel: Setting Community Norms and Values: A response to the InIn Open Letter.

The panel wouldn't have any direct power, but it would "assess potential egregious violations of those principles, and make recommendations to the community on the basis of that assessment."

undefined @ 2016-10-30T02:52 (+2)

I'm glad we agree that the absolute number of mistakes is obviously an incorrect thing to use. :) I like your addition of "better wrong than vague" (though I am not sure exactly how you would go about implementing it as part of an assessment beyond "If they're always vague, be suspicious," which doesn't seem actionable).

Considering how people respond to criticism is important for at least two reasons. If you can communicate to the person, and they can change, this is far less frustrating and far less risky. A person you cannot figure out how to communicate with, or who does not know how to change the particular flaw, will not be able to reduce frustration or risk fast enough. People are going to lose their patience or total up the cost-benefit ratio and decide that it's too likely to be a net negative. This is totally understandable and totally reasonable.

I think the reason we don't seem to have the exact same thoughts on that is because of my main goal in life, understanding how people work. This has included tasks like challenging myself to figure out how to communicate with people when that is very hard, and challenging myself to figure out how to change things about myself even when that is very hard. By practicing on challenging communication tasks, and learning more about how human minds may work through my self-experiments, I have improved both my ability to communicate and also my ability to understand the nature of conflicts between people and other people-related problems.

I think a lot of people reading these comments do feel bad for Gleb or do acknowledge that some potential will be lost if EA rejects InIn despite the high risk that their reputation problems may result in a net negative impact.

Perhaps the real crux of our apparent disagreement is something more like differing levels of determination / ability to communicate about problems and persuade people like Gleb to make all the specific necessary changes.

The way some appear to be seeing this is: "The community is fed up with InIn. Therefore, let's take the opportunity to oust them."

The way I appear to be seeing this is: "The community is fed up with InIn. Therefore, let's take the opportunity to persuade InIn that they need to do enough two-way communication to understand how others think about reputation and promotion."

Part of this is because I think Gleb's ignorance about reputation and marketing is so deep that he didn't see a need to spend a significant amount of time learning about them. Perhaps he is/was unaware of how much there is for him to learn. If someone could just convince him that there is a lot he needs to learn, he would be likely to make decisions comparable to: taking a break from promotion while he learns, granting someone knowledgeable veto power over all promotion efforts that aren't good enough, or hiring an expert and following all their advice.

(You presented a lot more worthwhile thoughts in your comment and I wish I could reply intelligibly to them all, but unfortunately, I don't have the time to do all of these thoughts justice right now.)

undefined @ 2016-10-26T20:21 (+2)

Just a thought on the big picture: EAs have tended to be more comfortable with EAs doing things that many would consider unethical (like being a lawyer or banker) as long as those people use their money or influence for the greater good. But here it appears that EAs want to hold other EAs to higher ethical standards than society does. I understand that this is not a great analogy because an EA organization (especially an outreach one) gets more scrutiny. Still, I think that marketing to a broad audience almost implies a certain amount of exaggeration in order to be competitive. And even though that makes many EAs (myself included) uncomfortable, might it be for the greater good?

CarlShulman @ 2016-10-26T20:56 (+13)
  • My sense is that honest and accurate evaluation of opportunities to do good, and high standards that enable that, has been a core value of EA
  • I disagree that exaggeration is more effective in broad outreach, e.g. GiveWell's reputation for honesty and care was central to letting it reach its current large scale (and its astroturfing scandal hurt badly because of that)
  • Accurate communication tends to work better for things that actually are better, and thus has good incentive properties as a standard
  • In any case, the focus in the document is mostly on InIn's interactions with the EA community rather than the general public, and it was precipitated by InIn's self-promotion and fundraising directed at the EA community
  • Thinking people are sometimes mistaken about how they assess different impacts of a job (e.g. most jobs result in increased carbon emissions, pay for the employee, consumer surplus) is not the same as lower ethical standards

undefined @ 2016-10-26T23:06 (+1)

Fair enough - just thought I would ask.

undefined @ 2016-10-25T02:45 (+1)

Note – I will make separate responses as my original comment was too long for the system to handle. This is part two of my comments.

Some of you will be tempted to just downvote this comment because I wrote it. I want you to think about whether that’s the best thing to do for the sake of transparency. If this post gets significant downvotes and is invisible, I’ll be happy to post it as a separate EA Forum post. If that’s what you want, please go ahead and downvote.

I disagree with other aspects of the post.

1) For instance, the points about affiliation, of which there were 2 substantial ones, about GWWC and ACE (I noted earlier it was a mistake to post about the conversation with Kerry).

A) After Michelle Hutchinson sent the email, we changed the wording to be very clear regarding what we mean, stating that we engaged in "collaboration with Against Malaria Foundation, GiveDirectly, The Life You Can Save, GiveWell, Animal Charity Evaluators, Giving What We Can, and others about them providing us with numbers of clicks and donations that they can trace to our article." see link

In other words, to prevent any semantic and philosophical discussion about the meaning of the term “collaboration,” we gave a very specific and clear statement about the nature of the collaboration at hand to prevent folks from getting confused about what it means. I am very comfortable standing by this statement.

B) Leah’s words were not in any way indicative of a formal endorsement for InIn, nor did we claim they were. They were just a statement of the kind of positive impact that InIn had for ACE. And in fact, we did ask Leah about quoting her in our internal documentation, which is where this information is located, our internal document about our EA impact: see image

2) The claims about astroturfing are way out of line: by comparing it to what GiveWell did, the authors are creating a harsh horns effect – smearing by association, in other words. For context, GiveWell’s senior staff on their paid time as employees went to forums where donations were discussed, and made up fake names to pose as forum members singing the praises of GiveWell. I and many other folks were very disappointed upon finding out what GiveWell did, although I appreciate the way they handled it. I would never want to do anything of the sort.

So let’s compare it to InIn. What the authors of this document point to are instances of InIn volunteers and volunteer/contractors, on their own non-paid time, without any direction from the leadership, and using their real names, engaging with InIn content and posting mostly supportive messages, although with some criticism as well. They did not at all try to hide their identities, nor did they act on paid time, as did GiveWell employees. We pay people only for specific things, such as doing video editing or social media management, and our minuscule budget does not cover low-impact and unethical activities such as the kind of thing done by GiveWell employees in the past.

I do not control what our volunteers or volunteer-contractors do on their non-paid time. I don’t have time to monitor all that our volunteers do, and I generally leave it up to them to figure out, as I have an attitude of trust and faith in them. Volunteer management is a delicate balance, as anyone who has actually managed volunteers knows. So I only intervene when I hear about problems; otherwise I focus on higher-impact activities such as actually doing the work of outreach to a broad audience that makes a difference in improving the world. When folks engaged in things that got pushback, such as posting on Less Wrong without sharing their role with InIn in their introduction statements, I asked them politely in one-on-one conversations to revise their introduction statements.

Now, since this blowup, I have had a thorough conversation with the Board of Directors and our Advisory Board, and we decided to institute a more formal Conflict of Interest policy. We decided it would be appropriate to have a systematic policy that applies to anyone with an official position in the organization, meaning holding an office or being paid. Hopefully this will help guide people’s behavior in a way that results in appropriate disclosures. However, we anticipate it will take some time to shift behavior, and not everything will go right. You are welcome to point out to me any instances where there’s an issue, and I’ll talk to the person who engaged in problematic behavior.

3) I’m not sure why the volunteer/contractor arrangement is listed as a dubious practice. All the people who are contractors started off as volunteers. Over time, as we needed more work done, we approached some volunteers who we knew already had backgrounds as contractors on Odesk to do some part-time work for the organization. You can see the screenshots with my description for more details.

It is very common for nonprofit organizations to offer part-time work to people who volunteer for them. This is how many other EA organizations besides InIn got started – with volunteers who then went on to do some part-time work. Eventually, these organizations became large enough to have full-time employees, and we’d like InIn to get there eventually.

Some folks expressed disbelief that the volunteer/contractors are really there because they support the mission, and instead believe that they are just there for the money. Well, that’s simply not the case. Let’s take the example of Ella, who in October 2015, in response to a fundraising email, made a $10/month donation: see image. She voluntarily, out of her own desire, chose to make this donation. Let me repeat – she voluntarily, of her own volition, in response to a fundraising call that went out to all of our supporters, chose to make this donation. Just to be clear, we send out fundraising letters regularly, so it’s not like this was some special occasion. She did not have to do it; it’s just something she wanted to do of her own volition.

Nor is this in any way an explicit or implicit obligation for contractors – about half of the contractors are also donors, and the others are not. I value, respect, and treasure Ella and all the other contractors; they are a great team and I feel close to them. We have a family environment in the organization, and we care about and support each other. It makes me very upset and frustrated to see the relationship between us described in this twisted way as a “dubious practice.”

4) The claims that it is bad to call oneself a best-selling author unless one made the New York Times best-seller list are silly. There are many best-seller lists, and authors who make it to the top of any list describe themselves as best-selling authors: see link. The document makes it seem like I'm not following standard author practice here, and that's simply false.

5) The claims about not disclosing paid support are not backed up by any real evidence. I said that I ran the t-shirts by multiple people. Sure, some of them were volunteer/contractors for InIn. Does that fact cause them not to count as “people”? Wouldn’t they be more likely to want higher-quality products so the organization succeeds more? In fact, they gave some of the more stringent criticism of the initial design, because they are more invested in the success of the design.

6) Regarding the Huffington Post piece, the person – Jeff Boxell – had not heard of effective giving before. Now he has, and he intends to use GiveWell and TLYCS as the guide for his donations. I am very comfortable standing by that claim.

P.S. Based on past experience, I have learned that back-and-forth online about this will not be productive, so I do not plan to engage further. If someone wants to learn more about my perspective, they are welcome to contact me privately by email.