Funding Opportunity: AI in LMICs ($100k per project, Deadline June 5)

By Constance Li @ 2023-05-30T13:42 (+33)

This is a linkpost to https://gcgh.grandchallenges.org/challenge/catalyzing-equitable-artificial-intelligence-ai-use

I wanted to share an exciting funding opportunity from the Bill & Melinda Gates Foundation for projects that leverage Artificial Intelligence (AI), with a focus on low- and middle-income countries (LMICs). This funding opportunity aims to harness the power of Large Language Models (LLMs), including GPT-4 (the model behind ChatGPT), to address challenges and generate evidence in various sectors.

Applications for this funding opportunity opened just last week and will close on June 5, 2023. Given the short timeline, competition is likely to be limited, presenting an excellent chance for smaller, scrappier organizations to secure funding.

Key Details:

  • Funding: $100,000 per project
  • Deadline: June 5, 2023
  • Focus: projects that apply LLMs in low- and middle-income countries

I think the inclusion of GPT-4 is particularly significant, given that Microsoft has invested $10 billion in OpenAI, the company behind ChatGPT.

Thanks and Shameless Plug: 

I first heard about this funding opportunity thanks to Cameron King from Animal Advocacy Africa through the Impactful Animal Advocacy slack group, which you can join here. It's been a great platform for innovative, cross-disciplinary, and international collaboration in all areas of animal advocacy, and I highly recommend joining if this sounds appealing to you.
 

Example Project Ideas (from GPT-4) to Kickstart Creative Thinking:

A Final Note:
As the funding landscape in the effective altruism world continues to shift towards AI alignment/safety, it becomes increasingly important for global health and animal welfare charities to explore alternative funding sources. This opportunity from the Gates Foundation can serve as a valuable step towards diversifying funding streams and supporting impactful projects in LMICs.

It's time for neartermism to get back on the funding dating scene

Oisín Considine @ 2023-05-31T11:10 (+7)

Is it possible to edit the title to say "Deadline June 5th" instead of "Deadline 6/5"? Many people who could be interested might look at the title, read the deadline as the 6th of May, and scroll past the post. Most of the world uses the DD/MM/YY (sometimes YY/MM/DD) format, so this small change could help a lot and attract more potential applicants.

This seems like a really fantastic opportunity, and it would be a great pity if people who would otherwise have applied ignored it simply because they misread the deadline as the 6th of May instead of the 5th of June.

Constance Li @ 2023-05-31T11:40 (+4)

The edit has been made! Thank you for helping me overcome my Americo-centric framework of the world. :)

Arno @ 2023-06-01T06:12 (+1)

Thanks Constance! 

Let me know if you're looking to apply and want to do something together. I have a few ideas in this area as someone who has worked in Sub-Saharan Africa for 10+ years now, and I actually wrote a short post on this subject: Large Language Models for Development: Why Information Matters (thegpi.org)
 

Constance Li @ 2023-06-01T11:18 (+1)

Arno,

Thanks for your engagement and your past writing on LLMs and LDCs. I wasn't personally planning to apply, but with the right partner I would consider it. I have many thoughts about this topic in general too and would be happy to chat. I'll DM you my Calendly.

I'm already putting together an AI use-case brainstorming session for animal advocates. Perhaps the same could be done for global health/development. I recently attended a webinar by deeplearning.ai that demonstrated how to train two different LLMs in under an hour using a highly efficient tech stack. I think the problem with outdated info that you mentioned in your blog post could be overcome by training a targeted LLM on up-to-date information and then assessing it against benchmark data.
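To make the "assess it against benchmark data" step concrete, here's a minimal sketch (my own illustration, not from the webinar). The model_answer callable is a hypothetical stand-in for however you query the fine-tuned model, and substring matching is a crude placeholder for a real metric like exact match, ROUGE, or an LLM-based grader:

```python
# Minimal sketch of scoring a tuned model against benchmark Q&A pairs.
# model_answer is a hypothetical callable wrapping the fine-tuned model;
# substring matching is a crude placeholder for a proper evaluation metric.
from typing import Callable

benchmark = [
    # Example Q&A pairs covering information newer than the base model
    {"question": "What year did the WHO declare COVID-19 a pandemic?",
     "answer": "2020"},
]

def evaluate(model_answer: Callable[[str], str], benchmark: list) -> float:
    hits = sum(
        item["answer"].lower() in model_answer(item["question"]).lower()
        for item in benchmark
    )
    return hits / len(benchmark)
```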

If you are interested, here is the full webinar: Building with Instruction-Tuned LLMs: A Step-by-Step Guide by Deep Learning AI

And here is a summary of the webinar, which I made using the following tech stack:
otter.ai speech-to-text (STT) tool for transcribing --> GPT-3.5 for summarizing the large volume of transcribed text, thanks to its larger context window --> Google Docs for finding and replacing transcription errors (e.g., "QLoRA") --> GPT-4 for a more advanced summarizing pass
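For anyone who wants to reproduce the two summarizing steps of that stack, here is a minimal sketch assuming the OpenAI Python SDK; the chunk size and prompts are arbitrary choices of mine, not from the webinar:

```python
# Minimal sketch of the two-stage summarization: chunk the raw transcript,
# summarize each chunk with GPT-3.5, then have GPT-4 condense the result.
# Assumes the OpenAI Python SDK ("pip install openai") and an
# OPENAI_API_KEY in the environment; chunk size is an arbitrary guess.
from openai import OpenAI

client = OpenAI()

def summarize(text: str, model: str, instruction: str) -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

def summarize_transcript(transcript: str, chunk_chars: int = 8000) -> str:
    chunks = [transcript[i:i + chunk_chars]
              for i in range(0, len(transcript), chunk_chars)]
    partials = [summarize(c, "gpt-3.5-turbo",
                          "Summarize this webinar transcript chunk.")
                for c in chunks]
    return summarize("\n\n".join(partials), "gpt-4",
                     "Combine these partial summaries into one clean summary.")
```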

~~start of AI content

The video demonstrated the process of building and fine-tuning two large language models (LLMs). It highlighted the importance of instruction tuning, which aligns the model with human expectations in terms of bias, truthfulness, toxicity, etc., and fine-tuning, which refines the model for specific tasks. Several tools and methods were mentioned for the fine-tuning process:

  • Dolly 15k, a dataset with 15,000 high-quality human-generated prompt-response pairs.
  • OpenLLaMA, an open-source reproduction of LLaMA with a license that permits commercial use, which can be fine-tuned.
  • QLoRA, a parameter-efficient fine-tuning method that trains small low-rank adapter matrices on top of a quantized base model, greatly reducing the memory needed for fine-tuning.
  • The supervised fine-tuning trainer library (garbled in the transcript as "supervised biometric tuning trader"; presumably Hugging Face TRL's SFTTrainer), a tool that facilitates the fine-tuning process.
  • They also talked about the use of quantization, which stores model weights at reduced numerical precision, optimizing computing resources. This is particularly useful when working with limited resources such as Google Colab, which was mentioned as a viable platform for training these models.
  • Two methods of fine-tuning were discussed: supervised and unsupervised. Supervised fine-tuning involves using clearly labeled instructions to train the model, while unsupervised fine-tuning allows the model to learn without specific targets or labels. Both methods have their advantages and drawbacks: supervised fine-tuning requires more time to organize the dataset, while unsupervised fine-tuning can be done faster.
  • The presenters demonstrated the process of fine-tuning using both real and synthetic data. Synthetic data, generated by GPT-4, was used to demonstrate the process of fine-tuning a model for generating marketing emails.
  • The webinar concluded with the reminder to continuously monitor metrics and evaluate the performance of the models for specific tasks, emphasizing that building LLMs can be done by anyone without needing vast computational resources, especially with tools like QLoRA. They provided a GitHub repo for resources and examples for prompt engineering and fine-tuning.

This instructional video demonstrated the value of building and fine-tuning Large Language Models, and how this can be achieved even with limited resources. It provides a comprehensive guide on how to approach this complex task, and offers insights on optimizing performance and efficiency.

~~end of AI content
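To make the recipe from the summary above concrete, here is a rough sketch of QLoRA fine-tuning on the Dolly 15k dataset via TRL's supervised fine-tuning trainer. This is my own illustration rather than the webinar's actual code; the OpenLLaMA model name, prompt template, and hyperparameters are placeholder choices, and SFTTrainer's keyword arguments have shifted across trl versions:

```python
# Rough sketch of the fine-tuning recipe: QLoRA fine-tuning of OpenLLaMA
# on Dolly 15k via TRL's SFTTrainer. Illustrative only; model name,
# prompt template, and hyperparameters are placeholder choices.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import SFTTrainer

dataset = load_dataset("databricks/databricks-dolly-15k", split="train")

# 4-bit quantization (the "Q" in QLoRA): weights stored at low precision.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model_name = "openlm-research/open_llama_3b"
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# LoRA: train small low-rank adapter matrices instead of the full model.
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                         task_type="CAUSAL_LM")

def format_batch(examples):
    # Turn each Dolly prompt/response pair into one training string.
    return [
        f"### Instruction:\n{inst}\n\n### Response:\n{resp}"
        for inst, resp in zip(examples["instruction"], examples["response"])
    ]

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    tokenizer=tokenizer,
    formatting_func=format_batch,
    max_seq_length=512,
)
trainer.train()
```

On modest hardware (e.g., a free Colab GPU), this kind of setup is what lets the adapters train in under an hour, since only the small LoRA matrices receive gradient updates.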

Please note that I have no tech background whatsoever and only started seriously diving into AI a month ago, so any errors in phrasing or concepts are a result of me still climbing the learning curve. If anyone has any corrections to the stuff I said here, PLEASE let me know!