OPTIC [Forecasting Comp] — Pilot Postmortem

By OPTIC, Saul Munn, Tom Shlomi, Jingyi_Wang @ 2023-05-19T10:10 (+43)

OPTIC is an in-person, intercollegiate forecasting competition where undergraduate forecasters compete to make accurate predictions about the future. Think olympiad/debate tournament/hackathon, but for forecasting — teams compete for thousands of dollars in cash prizes on question topics ranging from geopolitics to celebrity Twitter patterns to financial asset prices.

We ran the pilot event on Saturday, April 22 in Boston and are scaling up to an academic league/olympiad. We’ll be hosting tournaments in Boston, London, and San Francisco in the fall — see our website at opticforecasting.com, and contact us at opticforecasting@gmail.com (or by dropping a comment below)!

 

What happened at the competition?

Attendance

114 competitors from 5 countries and 13 US states initially registered interest. A significant proportion indicated that they wouldn’t be able to compete in this iteration (due to logistical/scheduling concerns) but expressed interest in competing in the next one. 39 competitors RSVP’d “yes,” though a few didn’t end up attending and a couple of unregistered competitors did show up. In the end, total attendance was 31 competitors in 8 teams of 3-4, plus 2 spectators.

Schedule

Forecasting (teams, platform, scoring, prizes, etc)

Competitors were split into teams of 3-4. Each team submitted one forecast on each of 30 questions through a private tournament on Metaculus; teams’ forecasts were not visible to other teams until after the forecasting period closed. Questions were a mix of binary and continuous, all with resolution timeframes of weeks to months; all will have resolved by August 15. At that point, we’ll score the forecasts using log scoring.
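For readers unfamiliar with log scoring: on a binary question, a forecast’s score is the log of the probability it assigned to the outcome that actually happened, so confident wrong forecasts are punished heavily. Here’s a minimal sketch in Python with made-up numbers; continuous questions are scored analogously, using (roughly) the log of the predicted density at the resolved value:

```python
import math

def log_score(p_yes: float, resolved_yes: bool) -> float:
    """Log score for a binary forecast: the log of the probability
    assigned to the outcome that actually occurred (higher is better)."""
    p = p_yes if resolved_yes else 1.0 - p_yes
    return math.log(p)

# Illustrative numbers only: a confident correct forecast loses little,
# while a confident wrong forecast is punished severely.
print(log_score(0.90, True))   # ≈ -0.105
print(log_score(0.90, False))  # ≈ -2.303
```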

We will award $3000 in cash prizes, to be distributed after scoring is completed.

Note that prizes for 1st-3rd place are given to the team and split among its members.

Funding

We received $4000 USD from the ACX Forecasting Mini-Grants on Manifund, and $2000 USD from the Long Term Future Fund.

Organizers

Our organizing team comprises Saul Munn, Tom Shlomi, and Jingyi Wang.

Left to right: Tom, Saul, Jingyi

Also, Saul and Jingyi will be attending EAG London — please reach out if you want to be involved with OPTIC, have questions/comments/concerns, or just want to chat!


 

***


 

What follows is a written version of the verbal postmortem our team held (and recorded) after the event.

Summary

Overall, the pilot went really well. We did especially well at setting ourselves up for future iterations, at staying flexible and adaptable, and at making good use of our resources. We could have improved our time management and communication, along with a few other minor things. We’re excited about the future of OPTIC!

What went well

Strong pilot/good setup

As a pilot, the April 22 event has definitely set us up for future iterations of OPTIC. We now have a network of previous team captains and competitors from schools all around the Boston area (and beyond) who have indicated that they’d be excited to compete again. We have people at a few schools around the country who are going to start forecasting clubs that will compete as teams in future tournaments. We have undergraduate interest (and associated emails) from five countries. And we are connected with a myriad of people in the forecasting space who have provided extremely helpful advice for OPTIC.

Flexibility

The early image we had of OPTIC in February was very different from what happened on April 22. Over the course of 2.5 months, we were able to adapt and shift key decisions based on new information and advice (prediction market vs. forecasting tournament, gameable and conditional questions, venue choices, etc.).

Good resource use/access (time notwithstanding)

In terms of connecting with people in the forecasting space, we used our resources very well: we received advice from many experienced people, and we regularly reached out to anyone we thought could help us in some way (and often, they did).

Saul attended EAG Bay Area, where he discussed OPTIC plans and decisions with several really helpful people. Prior to attending, Saul and the team created a “Key Decision Document” listing every crucial decision that needed an answer or better understanding. This was an invaluable way to set up conversations with people who had relevant expertise, and it proved hugely influential in settling several key decisions (resolution timeline, question topics, etc.).

What could have been improved

Time management

We were extremely time-constrained, especially close to the competition date. This was partly because much of the 2.5 months was spent figuring out exactly what we wanted OPTIC to be.

We didn’t have a set timeline at the beginning of the 2.5 months, which would have helped keep us on track: there were no explicit week-to-week or month-to-month deadlines for tasks like “book a venue” or “find team captains.”

Our meetings were irregularly/haphazardly scheduled, so sometimes we couldn’t check things over with each other in a timely manner.

Communication

Internal

Naturally, because we’re all full-time students, we're not able to be as responsive as we'd like. However, there were times when this really compounded our “time crunch” problem.

External

The internal time crunch also negatively affected our communication with team captains and competitors. In particular, team captains, as organizing agents, needed to receive certain information much earlier than they did. This resulted in unnecessary confusion and miscommunication between the core organizers, team captains, and competitors.

We took too long to respond to messages/emails. We left some emails in our inbox for as long as a week; this should not have happened.

Forecasting Questions

There were a few logistical issues with the forecasting questions at the event – e.g., one of the questions was set up as continuous when it should have been discrete. 

The questions could have covered a more diverse range of topics.

The questions were written pretty last-minute, which left little time to edit them, improve them, or ask for feedback.

Event logistics

The forecasting portion of the event was a continuous 3-hour block during which competitors made predictions on the 30 questions. We received feedback from several competitors that this was too long. Also, some competitors finished their forecasting early and left before the official “end” of the event. While this wasn’t a bad thing in itself, it meant there was no definite conclusion, just a trickle of competitors leaving over the last half-hour.

We also experienced distressingly high uncertainty about our venue booking up until a week before the event; unexpected complications with booking spaces at universities arose again and again.

Actionable steps for the future

Timeline

We will create a timeline for our next OPTIC event early on. Importantly, it will include deadlines: e.g., have the venue definitively booked by x date, have all team captains chosen by y date, have all departmental email blasts sent by z date, etc.

It will include buffer time between our last non-event deadline and the event, to avoid the last-minute crunch we experienced with the pilot.

Set weekly meeting

We have set a recurring weekly meeting time. This way, regardless of what other communication does or doesn’t happen in a given week, we have concrete, dedicated OPTIC time with each other.

If we don’t have much to discuss, it’ll just be a quick check-in.

Change event logistics

In response to feedback that the forecasting part of the event was too long, we may experiment with different scheduling logistics in the future. For example, instead of one 3-hour forecasting period, we could do a shorter forecasting period, lunch, and then a second shorter forecasting period.

Email response speed

Rather than waiting until a meeting for everybody to look over an important email, we’ll get a quick sign-off from another co-organizer by call or text.

For basic emails, we have appointed a general “email person” (Saul) who can respond to emails independently, without conferring with the other two organizers.


 

Special thanks to…

Seth Blumberg, for your insightful and inspiring speech

Nate Morrison, Anastasia Miliano, and Dan Schwarz, as well as the rest of the Metaculus team, for setting up a private OPTIC tournament and for technical troubleshooting

Scott Alexander and Austin Chen, for hosting the ACX/Manifund forecasting mini-grants, pubbing us on ACX (thanks Scott!), and offering advice (thanks Austin!)

Dominic Denicola, for investing your confidence and money in us and OPTIC, offering advice, and finding us an amazing speaker

Juan Gil and Nikola Jurkovic, for your consistent help, advice, and support

Henry Tolchard, for your help and experience from the 2022 online forecasting tournament

Sadie Giddings, for designing beautiful merch

Also, thanks to the following people for general advice and feedback: Ozzie Gooen, Catherine Low, Hamish Huggard, Wrenata Sproat, Adam Binks, Finn Hambly, and Arunim Argawal.


 

How you can help

You can help us out by erring on the side of dropping a comment — we’ve found the thoughts people give us are a lot more helpful than they sometimes think.

Also, to repeat: Saul and Jingyi will be attending EAG London — please reach out if you want to be involved with OPTIC, have questions/comments/concerns, or just want to chat!


 


Jason @ 2023-05-19T12:47 (+7)

I'm curious whether there is a viable way to add a short quick-resolving round, either with a subset of questions that will resolve within days or by adding another discipline like Fermi estimation (e.g., https://www.quantifiedintuitions.org/estimation-game).

Human psychology being what it is, getting some sort of results sooner may be reinforcing. From a recruiting/buzz perspective, "our school's team won a prize at a forecasting tournament last week!" is a lot easier to pitch to the campus newspaper than "we actually won this thing three months ago but just found out..."

Marcel D @ 2023-05-28T02:48 (+4)

I was also going to recommend this, but I’ll just add an implementation idea (which IDK if I fully endorse): you could try to recruit a few superforecasters or subject-matter experts (SMEs) in a given field to provide forecasts on the questions at the same time, then have a reciprocal scoring element (i.e., who came closest to the superforecasters’/SMEs’ forecasts). This is basically what was done in the 2022 Existential Risk Persuasion/Forecasting Tournament (XPT), which Philip Tetlock ran (and I participated in). IDK when the study results for that tournament will be out, and maybe it won’t recommend reciprocal scoring, but it definitely seems worth considering.
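To make that concrete, here’s a minimal sketch of one way the reciprocal-scoring comparison could work (the distance metric, names, and numbers are all just illustrative, not what the XPT actually used):

```python
# Hypothetical sketch of reciprocal scoring: rank teams by how close their
# probabilities land to a reference (superforecaster/SME) forecast on each
# question. All names and numbers are illustrative, not from the XPT.

def reciprocal_score(team_probs, reference_probs):
    """Mean absolute distance to the reference forecasts (lower is better)."""
    assert len(team_probs) == len(reference_probs)
    return sum(abs(t - r) for t, r in zip(team_probs, reference_probs)) / len(team_probs)

reference = [0.70, 0.15, 0.45]  # aggregated expert forecasts on three questions
teams = {
    "Team A": [0.60, 0.20, 0.50],
    "Team B": [0.90, 0.05, 0.30],
}

# Print teams from closest to farthest from the expert reference.
for name in sorted(teams, key=lambda n: reciprocal_score(teams[n], reference)):
    print(f"{name}: {reciprocal_score(teams[name], reference):.3f}")
```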

A separate idea (which again IDK if I fully endorse but was also in the XPT): have people provide dense rationales for a few big forecasts, then you can rate them on the merits of their rationales. (Yes, this involves subjectivity, but it’s not very different from speech and debate tournaments; the bigger problem could be the time required to review the rationales, but even this definitely seems manageable, especially if you provide a clear rubric, as is common in some competitive speech leagues.)

Jason @ 2023-05-28T18:33 (+2)

A trial of #2 would have some information value -- you could discern how strong the correlation was between the rationale scores and final standings to decide if rationales were a good way to produce a same-week result.
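(A minimal sketch of that check, assuming you record one rationale score and one final standing per team; scipy's spearmanr is one off-the-shelf way to measure rank correlation:)

```python
from scipy.stats import spearmanr

# Made-up data: one rationale score and one final rank per team (rank 1 = best).
rationale_scores = [8.5, 6.0, 9.0, 4.5, 7.0]
final_ranks = [2, 4, 1, 5, 3]

rho, p_value = spearmanr(rationale_scores, final_ranks)
# A strongly negative rho would mean higher rationale scores track better
# (lower-numbered) final standings.
print(f"Spearman rho = {rho:.2f} (p = {p_value:.4f})")
```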

Maybe you could also use idea #1 with only the top-scoring teams making it to the rationale round, to cut down on time spent scoring rationales?

Marcel D @ 2023-05-28T21:35 (+2)

TBH, I think that the time spent scoring rationales is probably quite manageable: I don’t think it should take longer than 30 person-minutes to decently judge each rationale (e.g., have three judges each spend 10 minutes evaluating each), maybe less? It might be difficult to have results within 1-2 hours if you don’t have that many judges, but probably it should be available by the end of the day.

To be clear, I was thinking that only a small number (no more than three, maybe just two) of the total questions should be “rationale questions.”

But definitely the information value of “do rationale scores correlate with performance” would be interesting! I’m not sure if the literature has ever done this (I don’t think I’ve encountered anything like that, but I haven’t actively searched for it)

Saul Munn @ 2023-05-20T01:58 (+4)

great points!

agreed, quick feedback loops are vital for good engagement + learning. we couldn't figure out a good way to do it for the pilot, but this is definitely something we're interested in building out for the next competition.

also, fermi estimation is a great idea — jane street sometimes runs an (unaffiliated) estimathon, but it would be cool to build in an estimation round, or something along those lines. do you have any other ideas for quickly-resolving rounds?

thanks for your thoughts!

~ saul

dschwarz @ 2023-05-20T17:38 (+2)

Metaculus is getting better at writing quickly-resolving questions, and we can probably help write some good ones for the next iteration of OPTIC.

One develops a certain eye for news that is interesting, forecastable, and short-term. Our Beginner tournaments (current, 2023 Q1, 2022 Q4) explicitly only have questions that resolve within 1 week, so you can find some inspiration there.

 

Saul Munn @ 2023-05-23T14:36 (+1)

yeah, i agree — i think we'll probably rely more heavily on questions in that style for the next iteration of OPTIC. i don't think we relied enough on existing questions/tournaments (see here).

vandemonian @ 2023-05-19T18:42 (+3)

love the idea of a hackathon for forecasting

btw your website is so aesthetic, i love the text effects~

Saul Munn @ 2023-05-19T22:49 (+2)

thank you! :)

Jason @ 2023-05-19T12:34 (+3)

Quick suggestion on the title: If many potential readers may not know what OPTIC is, could change to OPTIC (forecasting comp) Pilot Postmortem. On mobile at least, the reader doesn't get much context at all before deciding to click.

OPTIC @ 2023-05-19T13:07 (+2)

Good point! Changed :)

dschwarz @ 2023-05-20T17:54 (+2)

Nicely done! The college campus forecasting clubs and competition model feels extremely promising to me. Really great to see a dedicated effort start to take off.

I'm especially happy to see an ACX Manifund mini-grant get realized so quickly. I admit I was skeptical of these grants.

Excited to see the next iteration of this, and hopefully many more to come on college campuses all over!

Saul Munn @ 2023-05-23T14:39 (+1)

thanks for the kind words — and thanks for helping OPTIC become a reality! :)