HuangCongQing/Embodied-AI
Embodied-AI

A learning roadmap and curated resources for the Embodied AI technology stack.

Embodied AI Technology Stack (Learning Roadmap)

Recommended Embodied AI Resources

Embodied AI Companies & University Labs

Latest Embodied AI Surveys

  • [2025] A Survey on Efficient Vision-Language-Action Models [paper]

  • [2025] Vision-Language-Action Models for Robotics: A Review Towards Real-World Applications [paper]

  • [2025] Pure Vision Language Action (VLA) Models: A Comprehensive Survey [paper]

  • [2025] Large VLM-based Vision-Language-Action Models for Robotic Manipulation: A Survey [paper] [project]

  • [2025] A Survey on Vision-Language-Action Models: An Action Tokenization Perspective [paper]

  • [2025] Foundation Model Driven Robotics: A Comprehensive Review [paper]

  • [2025] A Survey on Vision-Language-Action Models for Autonomous Driving [paper] [project]

  • [2025] Parallels Between VLA Model Post-Training and Human Motor Learning: Progress, Challenges, and Trends [paper] [project]

  • [2025] A Survey on Vision-Language-Action Models for Embodied AI [paper]

  • [2025] Foundation Models in Robotics: Applications, Challenges, and the Future [paper] [project]

  • [2025] Vision Language Action Models in Robotic Manipulation: A Systematic Review [paper]

  • [2025] Vision-Language-Action Models: Concepts, Progress, Applications and Challenges [paper]

  • [2025] OpenHelix: A Short Survey, Empirical Analysis, and Open-Source Dual-System VLA Model for Robotic Manipulation [paper] [project]

  • [2025] Exploring Embodied Multimodal Large Models: Development, Datasets, and Future Directions [paper]

  • [2025] Multimodal Fusion and Vision-Language Models: A Survey for Robot Vision [paper] [project]

  • [2025] Generative Artificial Intelligence in Robotic Manipulation: A Survey [paper] [project]

  • [2025] Neural Brain: A Neuroscience-inspired Framework for Embodied Agents [paper] [project]

  • [2024] Aligning Cyber Space with Physical World: A Comprehensive Survey on Embodied AI [paper]

  • [2024] A Survey on Robotics with Foundation Models: toward Embodied AI [paper]

  • [2024] What Foundation Models can Bring for Robot Learning in Manipulation: A Survey [paper]

  • [2024] Towards Generalist Robot Learning from Internet Video: A Survey [paper]

  • [2024] Large Multimodal Agents: A Survey [paper]

  • [2024] A Survey on Integration of Large Language Models with Intelligent Robots [paper]

  • [2024] Vision-Language Models for Vision Tasks: A Survey [paper]

  • [2024] A Survey of Embodied Learning for Object-Centric Robotic Manipulation [paper]

  • [2024] Vision-language navigation: a survey and taxonomy [paper]

  • [2023] Toward general-purpose robots via foundation models: A survey and meta-analysis [paper]

  • [2023] Robot learning in the era of foundation models: A survey [paper]

Reference

License

Copyright (c) 双愚. All rights reserved.

Licensed under the MIT License.


WeChat official account: 具身智能产学研 (Embodied_AI_Study) — the latest embodied AI research and industry updates. Follows are welcome.

