ElliotJDavies's Quick takes
By ElliotJDavies @ 2023-10-19T22:18 (+6)
ElliotJDavies @ 2023-10-19T22:18 (+22)
80K podcast discussion threads.
Feature request: discussion threads on episodes of the 80k podcast.
I consistently generate a lot of thoughts while listening to the 80k podcast. I could imagine leaving these as comments, reading others' thoughts, and identifying cruxes. To further steelman the need for podcast threads: the 80k podcast is the most widely consumed, in-depth media produced in the EA sphere at the moment.
Lizka @ 2023-10-20T09:49 (+6)
Episode highlights tend to be shared under this account, which might be a good place to leave thoughts.
Just FYI: feature suggestions for the Forum could also go on the Forum feature suggestion thread. (I'm not sure if this is meant to be a feature request for the Forum or for 80,000 Hours, though! If it's the latter, you might want to get in touch more directly.)
ElliotJDavies @ 2023-10-23T17:50 (+4)
Episode highlights tend to be shared under this account, which might be a good place to leave thoughts.
That's totally it, thanks for flagging, I have not seen these previously (not sure why).
BrownHairedEevee @ 2023-10-21T02:06 (+4)
Anyone can create a linkpost for an 80k episode. Though it might be extra convenient to have a way to automatically create a linkpost with a pre-filled summary of the linked page and a top-level comment with your thoughts.
akash @ 2023-10-23T00:36 (+3)
Couldn't the comment section under the episode announcement posts (like this one) serve the same purpose? Or are you imagining a different kind of discussion thread here?
ElliotJDavies @ 2023-10-23T17:55 (+3)
That is precisely what I am looking for. I feel a bit silly now, because I hadn't noticed these previously.
FWIW, @80000_Hours I think the formatting "[#Ep] [podcast title]" is a much better title format than the way episodes were linked before the most recent one.
ElliotJDavies @ 2024-06-17T22:49 (+5)
Sentient AI ≠ AI suffering.
Biological life forms experience unequal (asymmetrical) amounts of pleasure and pain. This asymmetry is important. It's why you cannot make up for starving someone for a week by giving them food for a week.
This is true for biological life, because a selection pressure was applied (evolution by natural selection). This selection pressure is necessitated by entropy, because it's easier to die than it is to live. Many circumstances result in death, only a narrow band of circumstances results in life. Incidentally, this is why you spend most of your life in a temperature controlled environment.
The crux: there's no reason to think that a similar selection effect is being applied to AI models. LLMs, if they were sentient, would be as likely to enjoy predicting the next token as to dislike it.
ElliotJDavies @ 2024-11-14T14:09 (+3)
Paying candidates to complete a test task likely increases inequality and credentialism, and decreases candidate quality. If you pay candidates for their time, you're likely to accept fewer and lower-variance candidates into the test-task stage. Orgs can continue to pay top candidates to complete the test task, if they believe it measurably decreases the attrition rate, but give all candidates that pass an anonymised screening bar the chance to complete a test task.
David_Moss @ 2024-11-14T14:35 (+5)
Orgs can continue to pay top candidates to complete the test task, if they believe it measurably decreases the attrition rate, but give all candidates that pass an anonymised screening bar the chance to complete a test task.
My guess is that, for many orgs, the time cost of assessing the test task is larger than the financial cost of paying candidates to complete the test task, and that significant reasons for wanting to compensate applicants are (i) a sense of justice, (ii) wanting to avoid the appearance of unreasonably demanding lots of unpaid labour from applicants, not just wanting to encourage applicants to complete the tasks[1].
So I agree that there are good reasons for wanting more people to be able to complete test tasks. But I think that doing so would potentially significantly increase costs to orgs, and that not compensating applicants would reduce costs to orgs by less than one might imagine.
I also think the justice implications of compensating applicants are unclear (offering pay for longer tasks may make them more accessible to poorer applicants).
[1] I think that many applicants are highly motivated to complete tasks, in order to have a chance of getting the job.
ElliotJDavies @ 2024-11-14T14:48 (+2)
It takes a significant amount of time to mark a test task. But this can be fixed by just adjusting the height of the screening bar, as opposed to using credentialist and biased methods (like looking at someone's LinkedIn profile or CV).
My guess is that, for many orgs, the time cost of assessing the test task is larger than the financial cost of paying candidates to complete the test task
This is an empirical question, and I suspect it is not true. For example, it took me 10 minutes to mark each candidate's one-hour test task, so my salary would need to be 6× higher (per unit time) than the test-task payment for this to be true.
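The break-even arithmetic here can be sketched as follows. Only the one-hour task length and ten-minute marking time come from the comment above; the dollar rates and the overhead multiplier (from Ben Millwood's point about fully-loaded employment costs) are illustrative assumptions.

```python
# Break-even sketch: when does marking a test task cost the org more
# than paying the candidate to complete it?

task_hours = 1.0          # length of the candidate's test task (from the thread)
marking_hours = 10 / 60   # time to mark one submission: 10 minutes (from the thread)
task_pay_rate = 25.0      # assumed payment per hour offered to candidates ($)
salary_rate = 60.0        # assumed marker's salary per hour ($)
overhead = 1.75           # assumed fully-loaded cost multiplier (salary + 75%)

payment_cost = task_pay_rate * task_hours            # cost of paying the candidate
marking_cost = salary_rate * overhead * marking_hours  # cost of staff time to mark

# Marking costs more than payment only when the marker's effective hourly
# cost exceeds (task_hours / marking_hours) = 6x the task pay rate.
breakeven_multiple = task_hours / marking_hours

print(f"payment cost per candidate: ${payment_cost:.2f}")
print(f"marking cost per candidate: ${marking_cost:.2f}")
print(f"break-even salary multiple: {breakeven_multiple:.1f}x")
```

With these assumed numbers, the fully-loaded marking cost ($17.50) is still below the payment cost ($25), but a higher overhead multiplier or longer marking time per submission flips the comparison.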
I also think the justice implications of compensating applicants are unclear (offering pay for longer tasks may make them more accessible to poorer applicants).
This is a good point.
Ben Millwood🔸 @ 2024-11-15T00:10 (+6)
So my salary would need to be 6× higher (per unit time) than the test-task payment for this to be true.
Strictly speaking your salary is the wrong number here. At a minimum, you want to use the cost to the org of your work, which is your salary + other costs of employing you (and I've seen estimates of the other costs at 50-100% of salary). In reality, the org of course values your work more highly than the amount they pay to acquire it (otherwise... why would they acquire it at that rate) so your value per hour is higher still. Keeping in mind that the pay for work tasks generally isn't that high, it seems pretty plausible to me that the assessment cost is primarily staff time and not money.
David_Moss @ 2024-11-14T15:02 (+4)
It takes a significant amount of time to mark a test task. But this can be fixed by just adjusting the height of the screening bar, as opposed to using credentialist and biased methods (like looking at someone's LinkedIn profile or CV).
Whether or not to use "credentialist and biased methods (like looking at someone's LinkedIn profile or CV)" seems orthogonal to the discussion at hand?
The key issue seems to be that if you raise the screening bar, then you would be admitting fewer applicants to the task (the opposite of the original intention).
This is an empirical question, and I suspect it is not true. For example, it took me 10 minutes to mark each candidate's one-hour test task, so my salary would need to be 6× higher (per unit time) than the test-task payment for this to be true.
This will definitely vary by org and by task. But many EA orgs report valuing their staff's time extremely highly. And my impression is that both grading longer tasks and then processing the additional applicants (many orgs will also feel compelled to offer at least some feedback if a candidate has completed a multi-hour task) will often take much longer than 10 minutes total.