First of all, thank you for your incredible work on this project! I appreciate the time and effort you have put into making this project accessible to the community.
Does this project use large models running locally, or does it use APIs?
If the project uses large AI models running locally, I have the following questions.
Could you kindly suggest the minimum and recommended hardware requirements for deployment on a wheeled robot and in simulation, respectively, including GPU memory, RAM, and CPU? Could it run on a GPU less powerful than a 4090? Your guidance would be extremely helpful in determining the optimal hardware configuration.
Once again, thank you for your incredible work and support. It’s a privilege to learn from your contributions!
For local deployment with open-source models, it is recommended to explore VLN-specialized models, typically lightweight transformers with fewer than 200M parameters, which can run on a 2080-class GPU. However, further research and design work is needed to close the sim-to-real gap.
For LLM-based models, it is advisable to deploy them on a server and communicate with the terminal robot over the network, since most current research relies on 7B models without quantization.
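The server/terminal split above can be sketched as a small HTTP round trip: the LLM runs behind an endpoint on a workstation, and the robot posts an instruction and receives a planned action. This is a minimal illustration only; the endpoint, JSON fields, and the stub handler are assumptions, not part of this project.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class StubLLMHandler(BaseHTTPRequestHandler):
    """Stands in for the server-side 7B model; a real server would run
    the LLM policy on the posted instruction instead of a canned reply."""
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        reply = json.dumps({"action": "move_forward", "for": body["instruction"]})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply.encode())
    def log_message(self, *args):  # silence per-request logging
        pass

def query_server(url: str, instruction: str) -> dict:
    """Robot-side client: POST the instruction, return the planned action."""
    req = Request(url, data=json.dumps({"instruction": instruction}).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.loads(resp.read())

# Spin up the stub server on an ephemeral port and query it once.
server = HTTPServer(("127.0.0.1", 0), StubLLMHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
action = query_server(f"http://127.0.0.1:{server.server_address[1]}/plan",
                      "go to the kitchen")
print(action["action"])
server.shutdown()
```

On the real robot, only the client half would run on the onboard computer, so its compute budget is independent of the LLM's GPU requirements.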