What’s next?

We've outlined seven possible ways NeuroAI can transform AI safety, which point to clear, actionable priorities.

  1. Prioritize coordinated, large-scale efforts in neurotechnology development. Specifically, we need to:
    1. Accelerate the development of high-bandwidth neural interfaces, aiming to reduce the doubling time for electrophysiology capabilities to 2 years
    2. Scale up naturalistic neural recordings during rich behavioral tasks in both animals and humans
    3. Build detailed virtual animal models with sophisticated bodies and environments
    4. Pursue bottom-up circuit reconstruction, including whole mouse cortex simulation
  2. Foster distributed academic research in crucial areas:
    1. Improve AI robustness through neural data augmentation
    2. Advance tools for mechanistic interpretability
    3. Create benchmarks for human-aligned representation learning
    4. Develop stronger theoretical frameworks for understanding when and why brain-inspired approaches enhance safety
  3. Focus deliberately on safety over capabilities. While much of NeuroAI has historically aimed to enhance AI capabilities, we've identified several promising approaches that could improve safety without dramatically increasing capabilities.
  4. Break down research silos between AI safety, neuroscience, and AI development. The insights needed to build safer AI systems often live at the intersection of these fields, yet researchers in these areas rarely interact meaningfully.

These aren't just incremental, independent steps – together they are the foundation of this differentially safer path forward. Better recording technologies enable more detailed digital twin models, which in turn inform better cognitive architectures. Improved interpretability methods help us validate our understanding of neural circuits, leading to more effective training objectives. We need to build new tools, scale data collection with existing tools, and develop new theoretical frameworks. Most importantly, we need to move quickly yet thoughtfully – the window of opportunity to impact AI development may not stay open indefinitely.

The development of safe AI systems is not inevitable; it requires sustained investment and a focused research effort. But this effort offers a unique opportunity: a path that not only leads to safer AI, but also helps us understand the brain, advances neurotechnology, and accelerates treatments for neurological disease. We hope you – whether you’re a scientist, engineer, funder, research institution, company, or policymaker – will join us.


Email us at neuroaisafety@amaranth.foundation, or sign up for our newsletter for updates.