
Hi there 👋

Welcome to my GitHub!

I'm a PhD student at Georgia Tech and a visiting student at UIUC.

My research interests lie in Multi-Modal Learning, Generative Models (including Multimodal LLMs and Diffusion Models), and Video Understanding. More details can be found on my homepage.

Please drop me an email if you have questions about my work or are interested in collaborating!

Pinned repositories

  1. LEGO

     [ECCV2024, Oral, Best Paper Finalist] The official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation via Visual Instruction Tuning".

  2. CSTS

     [ECCV2024] The official implementation of "Listen to Look into the Future: Audio-Visual Egocentric Gaze Anticipation".

  3. GLC

     [BMVC2022, IJCV2023, Best Student Paper, Spotlight] The official code for the paper "In the Eye of Transformer: Global-Local Correlation for Egocentric Gaze Estimation".

  4. SALT-NLP/PersuationGames

     [ACL2023, Findings] Source code for the paper "Werewolf Among Us: Multimodal Resources for Modeling Persuasion Behaviors in Social Deduction Games".