Anurag Maurya 1 · Tashmoy Ghosh 1 · Anh Nguyen 2 · Ravi Prakash 1
1 Human Interactive Robotics Lab, IISc Bangalore
2 Department of Computer Science, University of Liverpool, UK
*Figure: Trajectory of a drone surveilling an area. Instruction: "Can you approach person closely and slowly?"*
Adapting trajectories to dynamic situations and user preferences is crucial for robot operation in unstructured environments with non-expert users. Natural language enables users to express these adjustments interactively. We introduce OVITA, an interpretable, open-vocabulary, language-driven framework for adapting robot trajectories in dynamic and novel situations based on human instructions. OVITA leverages multiple pre-trained Large Language Models (LLMs) to integrate user commands into trajectories generated by motion planners or learned from demonstrations. OVITA uses LLM-generated code as the adaptation policy, letting users adjust individual waypoints and thus providing flexible control. Another LLM acts as a code explainer, removing the need for expert users and enabling intuitive interaction. The efficacy and significance of the proposed OVITA framework are demonstrated through extensive experiments in simulation and real-world environments, on diverse tasks involving spatiotemporal variations, across heterogeneous robotic platforms such as the KUKA IIWA manipulator, Clearpath Jackal ground robot, and Crazyflie drone.
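To make the code-as-policy idea concrete, here is a minimal sketch of the kind of adaptation function an LLM could emit for the instruction shown above. The function name, signature, waypoint format, and the 1.5 m / 50% values are illustrative assumptions, not OVITA's actual generated output or API:

import numpy as np

# Hypothetical LLM-generated adaptation policy for the instruction
# "Can you approach person closely and slowly?". Waypoints are assumed
# to be [x, y, z, speed]; all thresholds are illustrative.
def adapt_trajectory(trajectory, objects):
    person = next(o for o in objects if o["name"] == "person")
    person_pos = np.array([person["x"], person["y"], person["z"]])
    adapted = []
    for x, y, z, speed in trajectory:
        point = np.array([x, y, z])
        if np.linalg.norm(point - person_pos) < 1.5:  # near the person
            point = point + 0.2 * (person_pos - point)  # move 20% closer
            speed = 0.5 * speed                         # and slow down by half
        adapted.append([float(point[0]), float(point[1]), float(point[2]), speed])
    return adapted

The code-explainer LLM would then summarize a snippet like this in plain language, so non-expert users can verify the adaptation before it runs.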
Prerequisites:
Clone this repository and set up the environment:
cd ~
git clone https://github.com/anurag1000101/OVITA.git
cd OVITA
conda env create -f environment.yaml
conda activate ovita
pip install -e .

To try out the agent:
- Save your API keys as environment variables:

export OPENAI_API_KEY="your_openai_api_key"
export GEMINI_API_KEY="your_gemini_api_key"
export CLAUDE_API_KEY="your_claude_api_key"
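For reference, a minimal sketch of how these variables could be read on the Python side (standard library only; the helper below is illustrative, not part of the OVITA codebase):

import os

# Map each supported --llm choice to its environment variable and fail
# fast with a clear error if the key has not been exported.
def load_api_key(llm: str) -> str:
    env_names = {
        "openai": "OPENAI_API_KEY",
        "gemini": "GEMINI_API_KEY",
        "claude": "CLAUDE_API_KEY",
    }
    key = os.environ.get(env_names[llm])
    if not key:
        raise RuntimeError(f"{env_names[llm]} is not set")
    return key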
- Run the agent with:
python scripts/main.py --trajectory_path <path_to_trajectory> --save_dir <path_to_save_directory> --llm <openai|gemini|claude> --save_results <True|False> --robot_type <robot_name_or_None>
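For example, an illustrative invocation (the trajectory path and save directory are placeholders, not files shipped with the repository):

python scripts/main.py --trajectory_path data/example_trajectory.json --save_dir results --llm openai --save_results True --robot_type None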
To launch the GUI instead, run:

streamlit run ~/<Path to GUI File>/main_gui_streamlit.py
Steps to Adapt Trajectory:
1. Upload the trajectory file via the navigator.
2. Inspect the original trajectory in the Plotly-rendered view.
3. In the sidebar, enter adaptation instructions, choose the LLM, and set the robot type to LaTTe.
4. Click Run Adaptation and wait for it to complete.
5. Select Zero-Shot as the trajectory view and inspect the modified trajectory.
6. If further adjustments are needed, provide feedback, select the context type, and click Run Adaptation.
7. Select Final as the trajectory view. Repeat steps 6 and 7 until satisfied.
8. Play around with the CSM configs to achieve the best results. Have a look at the config.py file for more fine-grained control over params.
9. To reset to the initial modified results, press Reset. Repeat step 8 until satisfied.
10. Once satisfied, browse for the respective directory and press Save to save the results.
📌 Using Your Own Trajectory: Ensure your trajectory file is a JSON file with the following structure:
{
"trajectory": [[x, y, z, speed], [x, y, z, speed], ...],
"instruction": "your instruction here; can be given directly in the GUI too",
"objects": [
{
"name": "person",
"x": 1.0,
"y": 0.11,
"z": 0.8
},
...
],
"Env_descp": "Any environment description you want to give"
}
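For programmatic use, a minimal sketch that writes a file in this format (the filename and all values below are made-up placeholders):

import json

# Build a trajectory file in the structure OVITA expects; every numeric
# value here is an illustrative placeholder.
data = {
    "trajectory": [
        [0.0, 0.0, 1.0, 0.5],  # [x, y, z, speed]
        [0.5, 0.2, 1.0, 0.5],
        [1.0, 0.3, 0.9, 0.4],
    ],
    "instruction": "Can you approach person closely and slowly?",
    "objects": [
        {"name": "person", "x": 1.0, "y": 0.11, "z": 0.8}
    ],
    "Env_descp": "A person standing in an open indoor area",
}

with open("example_trajectory.json", "w") as f:
    json.dump(data, f, indent=2)

If you use our work or codebase, please cite our article: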
@article{ovita2025,
title={OVITA: Open Vocabulary Interpretable Trajectory Adaptations},
author={Anurag Maurya and Tashmoy Ghosh and Anh Nguyen and Ravi Prakash},
year={2025}
}
If you have any questions, reach out to [email protected] or [email protected].

