ChatGPT took second place in a spacecraft control competition, beating out specialized autonomous systems. The experiment showed that large language models can handle complex engineering tasks even outside their traditional areas of application.
In an international competition for autonomous spacecraft control built on the Kerbal Space Program simulator, a research team tested what a large language model can do in a realistic flight simulation. The experiment focused on the task of intercepting a satellite with a pursuing spacecraft.
The model, similar in architecture to ChatGPT, received a text instruction: "You're acting as an autonomous agent controlling a catch-up spacecraft." From this starting point, the system began forming maneuvering strategies. After a series of iterations and prompt refinements, it beat most of its competitors, specialized agents designed for narrow tasks, and took second place.
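For illustration only, here is a minimal sketch of how such a role-setting prompt and a text report of the craft's state might be assembled for a chat-style model. The quoted system line comes from the article; the telemetry fields, reply format, and function names are assumptions, not the team's actual code.

```python
# Illustrative sketch: everything except the quoted system instruction is assumed.
SYSTEM_PROMPT = (
    "You're acting as an autonomous agent controlling a catch-up spacecraft."
)

def describe_state(rel_pos, rel_vel, fuel_kg):
    """Render current telemetry as plain text for the model (hypothetical format)."""
    return (
        f"Relative position to target (m): {rel_pos}\n"
        f"Relative velocity (m/s): {rel_vel}\n"
        f"Remaining fuel (kg): {fuel_kg:.1f}\n"
        'Reply with JSON: {"throttle": [forward, right, down]} with values in [-1, 1].'
    )

# Chat-style message list, as accepted by most LLM chat APIs.
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": describe_state((1500.0, -40.0, 12.0),
                                               (-3.2, 0.1, 0.0), 210.4)},
]
```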
This result has become an important argument in favor of using language models for autonomous navigation in space. Unlike classic solutions that require lengthy training, configuration, and debugging, an LLM can work essentially off the shelf: it takes a text description of the current situation and returns a decision as a text recommendation. These recommendations are then translated into commands that control the craft's behavior in the simulator.
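The paragraph above describes a closed loop: read the simulator state as text, query the model, parse its reply into a command, apply it. Below is a hedged sketch of that loop; the `sim` and `query_llm` interfaces are placeholders for illustration, not the competition's actual API.

```python
import json

def control_loop(sim, query_llm, system_prompt, steps=100):
    """Hypothetical observe -> prompt -> parse -> act loop.

    sim: placeholder simulator handle with observe_text() and apply_throttle().
    query_llm: placeholder callable (system, user) -> model's text reply.
    """
    for _ in range(steps):
        state_text = sim.observe_text()               # current situation as text
        reply = query_llm(system_prompt, state_text)  # model's text recommendation
        try:
            # Assume the prompt asked for JSON like {"throttle": [0.4, 0.0, -0.1]}.
            cmd = json.loads(reply)["throttle"]
        except (json.JSONDecodeError, KeyError, TypeError):
            cmd = [0.0, 0.0, 0.0]                     # unusable reply: coast this tick
        sim.apply_throttle(cmd)                       # text decision -> simulator command
```

The fallback branch matters: a real controller would validate every parsed command before execution, which is exactly where the hallucination problem discussed below becomes concrete.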
The competition used complex scenarios, including evasive maneuvers, targeting, orientation, and interception of objects. Despite having fewer task-specific resources than traditional autonomous AI systems, the LLM showed not only adaptability but also an ability to reason about physical laws and navigation logic in a game-like yet physically realistic environment.
The authors emphasize that the main problem remains "hallucinations": plausible-sounding but incorrect outputs characteristic of language models. In real spaceflight, such errors could have critical consequences. Still, the fact that most of the test runs were completed successfully with minimal adjustment demonstrates the potential of this approach for future aerospace development.
The competition, held as part of the Kerbal Space Program Differential Game Challenge project, was created as an open platform for testing and developing autonomous systems: a space for experiments that probe the limits of AI in physics-based simulations, from evasion to interception and in-orbit orientation.
The LLM team's results are to be published in the journal Advances in Space Research. The described method could become the basis for a next generation of autonomous agents capable of independently controlling satellites, interplanetary probes, and robots in conditions where signal delay rules out real-time human control.
Although the experiments are still confined to a simulated environment, progress in this area points to a major paradigm shift: a text-based AI model trained on Internet content and natural language can confidently handle applied engineering tasks. This opens the way to faster development of autonomous solutions and greater reliability in environments where the cost of error is extremely high.