People

Zenna Tavares

Co-founder and Director, Basis Research Institute

Zenna Tavares builds computational and statistical tools for causal reasoning, probabilistic programming, and scientific model discovery. His work asks how machines can reason from observation and interaction in ways that support scientific and societal problem-solving.

Position

  • Co-founder and Director
    Basis Research Institute

Education

  • PhD, Cognitive Science and Statistics
    Massachusetts Institute of Technology

Previous Appointments

  • Innovation Scholar
    Columbia Zuckerman Institute and Data Science Institute
  • Postdoctoral Fellow
    MIT CSAIL

About

My research aims to understand human reasoning: how people derive knowledge from observing and interacting with the world. I develop computational and statistical tools for causal reasoning, probabilistic programming, and scientific model discovery.

Projects

Current and recent Basis projects.

MARA

Modeling, Abstraction, and Reasoning Agents: systems that build and use world models through active experimentation and abstract reasoning.

R-ADA

A rational automated design agent for robotics, combining language models, simulation, probabilistic programming, and Bayesian inference.

Citymaking through participatory modeling

Participatory city models that help residents, community groups, and policymakers reason about uncertain causes and policy consequences.

Recent Publications

Recent papers and preprints.

Articles

Basis essays and updates written or contributed to by Zenna Tavares.

ExoPredicator: Abstracting Time and State for Robot Planning

April 22, 2026

We introduce ExoPredicator, a system that learns abstract world models for robot planning. By abstracting state, time, and both endogenous and exogenous causal processes, ExoPredicator enables robots to quickly learn how dynamic environments work and plan efficiently in them.

AutumnBench: World Model Learning in Humans and AI

July 17, 2025

We’re releasing a new version of Autumn with human baseline results, AI performance comparisons, and an interactive benchmark for world model discovery. This release includes the MARA protocol and provides a public platform for testing causal reasoning capabilities.

Project MARA Preview: Modeling, Abstraction, and Reasoning Agents

December 6, 2024

Project MARA aims to develop AI systems capable of performing everyday scientific discovery through active experimentation and abstract reasoning. The project will create systems that can discover and apply causal models across diverse domains, from physical robotics to digital interfaces.

NeuroAI for AI Safety

November 27, 2024

Basis contributed to a new technical roadmap, “NeuroAI for AI Safety,” from the Amaranth Foundation. The roadmap aims to make AI systems safer by understanding and implementing the brain’s approach to intelligent behavior.

MetaCOG: Enhancing AI Vision with Human-Inspired Metacognition

July 16, 2024

In collaboration with Marlene Berke and the Computational Social Cognition Lab at Yale, we’re introducing MetaCOG, a probabilistic model that learns a metacognitive model of a neural object detector and uses it to improve the detector’s accuracy without feedback. This represents a step toward building AI systems that can represent not only their inputs but also their own thought processes.

Autumn: Causal Discovery Through Program Synthesis

February 1, 2023

We’re introducing AutumnSynth, an algorithm that synthesizes the source code of simple 2D video games from a small amount of observed video data. This represents a step toward systems that can perform causal theory discovery in real-world environments.