MIT FutureTech is hiring a Postdoctoral Associate to work on AI Performance and Safety
By PeterSlattery @ 2025-07-08T14:05 (+7)
Why apply or share?
- Our work to understand progress in computing and artificial intelligence, and its implications, is highly relevant to understanding and mitigating the risks of AI.
- We are one of Open Philanthropy's 10 largest grantees in Global Catastrophic Risk and AI Governance.
- This position involves working with Jonathan Rosenfeld, one of the pioneers of deep learning scaling laws, on AI safety-related topics.
- You will help found the Fundamental AI Group within FutureTech and can help shape the direction of that group.
Position Overview
We are looking for founding members of the Fundamental AI Group within FutureTech, positioned at the intersection of experimental and theoretical deep learning. Our primary focus is uncovering, explaining, and extending the limits of performant and safe AI systems, and their far-reaching implications. Our projects span a broad spectrum, ranging from understanding and challenging the limits of learning to adding coherence and predictability at the boundaries of AI efficiency and safety.
As a Postdoctoral Associate in AI Performance & Safety, you will conduct cutting-edge empirical research to better understand and guide the behavior of advanced AI systems. You will design, implement, and analyze machine learning experiments that explore the fundamental limits of AI performance and efficiency. This research addresses core questions such as: "What are these performance limits? How far are we from them? And what would it take to close them?" It will concurrently study the limits of safety and alignment, addressing questions such as: "Can you predict whether a system is safe before you train it? Why can or can't you predict this? Under what circumstances can this be done?" and "What are the implications for controlling increasingly performant AI systems?"
Key Responsibilities:
- Design and conduct AI experiments to test scaling of performance, robustness, safety, and generalization.
- Develop agentic evaluation arenas for assessing agents and their modifications.
- Build tools and frameworks for evaluating AI alignment techniques.
- Collaborate with interdisciplinary teams to integrate AI safety insights into research and policy recommendations.
- Contribute to research papers, white papers, and public reports on AI safety and governance.
- Participate in workshops, conferences, and collaborations with external AI research organizations.
You May Be a Good Fit If You:
- Have a Ph.D. in Computer Science, Machine Learning, AI, or a related field.
- Have hands-on experience with deep learning frameworks (TensorFlow, PyTorch).
- Have strong software engineering taste and hands-on experience, particularly with small- and large-scale model training, fine-tuning, RL-based reasoning, and common practices such as retrieval-augmented generation (RAG).
- Have conducted empirical AI research, particularly in scaling, safety, interpretability, or multi-agent systems.
- Enjoy working in fast-moving, collaborative research environments.
Location
Cambridge, Massachusetts, USA is preferred, but remote/hybrid is also possible.
Salary
$65,000 - $75,000 (MIT HR set these salaries, unfortunately).
About MIT FutureTech
MIT FutureTech is an interdisciplinary group of computer scientists, engineers, and economists at MIT who study the foundations of progress in computing and Artificial Intelligence: the trends, implications, opportunities, and risks.
Economic and social change is underpinned by advances in computing: for instance, improvements in the miniaturization of integrated circuits, the discovery and refinement of algorithms, and the development and diffusion of better software systems and processes. We aim to identify and understand the trends in computing that create opportunities or risks, and to help leaders in computing, scientific funding bodies, and government respond appropriately.
Our research helps to answer important questions, including: Will AI progress accelerate or slow – and should it? What are the bottlenecks to growth from AI, and how might they be solved? What are the risks from AI, and how can we mitigate them?
To support our research, we run seminars and conferences that better connect computer scientists, economists, and innovation scholars, helping to build a thriving global research community.
We advise governments, nonprofits, and industry, including via National Academies panels on transformational technologies and scientific reliability, the Council on Competitiveness' National Commission on Innovation and Competitiveness Frontiers, and the National Science Foundation's National Network for Critical Technology Assessment.
Our work has been funded by Open Philanthropy, the National Science Foundation, Microsoft, Accenture, IBM, the MIT-Air Force AI Accelerator, and MIT Lincoln Laboratory, and has been widely cited, including by the 2024 Economic Report of the President.
Some of our recent outputs:
- Economic impacts of AI-augmented R&D
- The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks from Artificial Intelligence
- Algorithmic progress in language models
- Beyond AI Exposure: Which Tasks are Cost-Effective to Automate with Computer Vision?
- Explosive growth from AI automation: A review of the arguments
- How industry is dominating AI research
- The Quantum Tortoise and the Classical Hare: A simple framework for understanding which problems quantum computing will accelerate (and which it will not)
- There’s plenty of room at the Top: What will drive computer performance after Moore’s law?
- Deep Learning's Diminishing Returns: The Cost of Improvement is Becoming Unsustainable
- America’s lead in advanced computing is almost gone
Some recent articles about our research:
- Techcrunch: MIT researchers release a repository of AI risks
- CNN: AI and the labor market: MIT study findings
- TIME: AI job replacement fears and the MIT study
- Boston Globe: AI's impact on jobs according to MIT
You will be working with Dr. Neil Thompson, the Director of MIT FutureTech. Prior to starting FutureTech, Dr. Thompson was a professor of Innovation and Strategy at the MIT Sloan School of Management. His PhD is in Business & Public Policy from Berkeley. He also holds Master’s degrees in: Computer Science (Berkeley), Economics (London School of Economics), and Statistics (Berkeley). Prior to joining academia, Dr. Thompson was a management consultant with Bain & Company, and worked for the Canadian Government and the United Nations.
About the MIT Computer Science and Artificial Intelligence Lab (CSAIL)
CSAIL is one of the world's top research centers for computer science and artificial intelligence (currently ranked #1). It has hosted nine Turing Award winners (the "Nobel Prize of Computing") and has pioneered many of the technologies that underpin modern computing.
How to apply
Please use this form to register interest in this role or to submit a general expression of interest.
Selected candidates will be first interviewed via Zoom. We are recruiting on a rolling basis and may close applications early if we find a suitable candidate, so please apply as soon as possible to maximize your chances.