
[Image courtesy of Anthropic]
The demonstration involved two drives on the rim of Jezero Crater in December 2025. On Dec. 8, Perseverance drove 689 feet (210 meters). Two days later, it drove 807 feet (246 meters), following waypoint plans produced using vision-language models rather than the usual fully human-drawn waypoint sequence, according to NASA.
The motivation is the time lag between Earth and Mars, which makes real-time “joystick” control impossible. One-way signal time varies with planetary alignment, ranging from roughly 4 to 24 minutes. (See ESA Blog Navigator for more details.)
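The one-way delay is just the Earth–Mars separation divided by the speed of light. A minimal sketch, using approximate published values for the closest and farthest separations (the distances and function name are illustrative, not from the article):

```python
C_KM_PER_S = 299_792.458  # speed of light in km/s

def one_way_delay_minutes(distance_km: float) -> float:
    """Signal travel time in minutes for a given Earth-Mars separation."""
    return distance_km / C_KM_PER_S / 60.0

# Approximate extremes of the Earth-Mars distance.
for label, d_km in [("closest (~54.6M km)", 54.6e6),
                    ("farthest (~401M km)", 401e6)]:
    print(f"{label}: {one_way_delay_minutes(d_km):.1f} min one way")
```

Round-trip confirmation of any command therefore takes twice that long, which is why drives must be planned and validated entirely in advance.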
For decades, rover teams have planned routes by analyzing terrain and rover status data, then sketching a path using waypoints that are typically spaced no more than 330 feet (100 meters) apart and uplinked via NASA’s Deep Space Network. That careful approach is partly shaped by hard lessons, including the 2009 Spirit rover incident, when Spirit became embedded in soft soil and NASA eventually ended efforts to free it.
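The spacing rule above can be pictured as subdividing a planned drive leg into evenly spaced waypoints. A minimal sketch, assuming a straight leg in a local metric frame (the function and the 100 m constant are illustrative, not JPL's planning code):

```python
import math

MAX_SPACING_M = 100.0  # illustrative cap on waypoint separation

def subdivide_leg(start, end, max_spacing=MAX_SPACING_M):
    """Split a straight drive leg into waypoints no more than
    max_spacing metres apart (coordinates in a local metric frame)."""
    (x0, y0), (x1, y1) = start, end
    length = math.hypot(x1 - x0, y1 - y0)
    n = max(1, math.ceil(length / max_spacing))  # number of segments
    return [(x0 + (x1 - x0) * i / n, y0 + (y1 - y0) * i / n)
            for i in range(n + 1)]

# A 246 m leg (the length of the Dec. 10 drive) splits into
# 3 segments of 82 m each, giving 4 waypoints.
wps = subdivide_leg((0.0, 0.0), (246.0, 0.0))
```

In practice waypoints follow terrain rather than straight lines, but the same constraint applies: no gap between consecutive waypoints may exceed the cap.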
The video above shows Perseverance’s POV: a 246-meter drive along Jezero Crater’s rim, reconstructed in 3D from Navcam imagery and rover telemetry.
How Claude Code helped with the rover
On Dec. 8 and 10, 2025, NASA’s Perseverance rover completed the first AI-planned drives on another planet—roughly 400 meters across Jezero Crater’s rim.
The process: JPL engineers used Claude Code to analyze HiRISE orbital imagery and digital elevation models, then generate waypoint sequences in Rover Markup Language—the same XML-based commands human drivers use.
Validation: Every AI-generated route ran through JPL’s digital twin simulation, checking 500,000+ telemetry variables before transmission to Mars.
Result: Engineers estimate the approach cuts route-planning time in half, enabling more drives and more science per mission cycle.
Source: NASA/JPL, Anthropic
In the Perseverance test, the AI analyzed high-resolution orbital imagery from the HiRISE camera on Mars Reconnaissance Orbiter and terrain-slope data from digital elevation models, identifying hazards such as bedrock, boulder fields, and sand ripples before generating a continuous path with waypoints. Engineers then validated the commands in JPL’s “digital twin” simulation, checking more than 500,000 telemetry variables to ensure compatibility with the rover’s flight software before transmitting the drive, according to NASA.
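The slope-screening step boils down to computing a per-cell gradient from the elevation model and flagging cells above a steepness limit. A toy sketch with NumPy (the 25° threshold and the synthetic DEM are assumptions for illustration, not mission parameters):

```python
import numpy as np

def slope_degrees(dem: np.ndarray, cell_size_m: float) -> np.ndarray:
    """Per-cell terrain slope in degrees, from elevation gradients."""
    dzdy, dzdx = np.gradient(dem, cell_size_m)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

def hazard_mask(dem, cell_size_m, max_slope_deg=25.0):
    """Boolean mask of cells too steep to drive (illustrative limit)."""
    return slope_degrees(dem, cell_size_m) > max_slope_deg

# Synthetic 1 m/cell DEM: a flat plain with a steep 10 m mound at centre.
y, x = np.mgrid[0:50, 0:50]
dem = 10.0 * np.exp(-((x - 25) ** 2 + (y - 25) ** 2) / 40.0)
mask = hazard_mask(dem, cell_size_m=1.0)  # True on the mound's flanks
```

A path planner would then search for a route through the False cells; the actual JPL pipeline additionally fuses orbital imagery to catch hazards, such as sand ripples, that slope alone does not reveal.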
The work was led out of JPL’s Rover Operations Center in collaboration with Anthropic, using Anthropic’s Claude models. In Anthropic’s account, JPL engineers found only minor changes were needed after reviewing ground-level images, including refining a narrow corridor where sand ripples were clearer from the rover’s perspective. Engineers also estimated the approach could cut route-planning time by about half.
“The fundamental elements of generative AI are showing a lot of promise in streamlining the pillars of autonomous navigation for off-planet driving: perception (seeing the rocks and ripples), localization (knowing where we are), and planning and control (deciding and executing the safest path),” said Vandi Verma, a space roboticist at JPL.



