
21-SLT-project

MLVU Final Project: Sign Language Translation (SLT) Transformer

Project Introduction

This repository contains the implementation of my research based on work done during the 2021 Machine Learning for Visual Understanding (MLVU) course. The final presentation and report introduce our base model (seq2seq + attention).

Block diagram of our model


To improve speed and achieve keypoint-free SLT, we built the model from three components:

  • Lightweight video encoder
    3D CNNs (C3D, R(2+1)D, S3D) encode the video and return clip-level features.
  • Transformer encoder
    Contextualizes the input features and predicts glosses, the word-level elements of sign language.
  • Sign language decoder
    A Transformer decoder that learns the relations between glosses and spoken language and returns the translated text.

The model is trained on the RWTH-PHOENIX-Weather 2014 dataset, which contains sign language videos of 5-30 seconds, and achieved a BLEU-4 score of 14.45. A minimal sketch of this pipeline is shown below.
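The following is a minimal, hypothetical sketch of the three-component design in PyTorch. Module names, layer counts, and hyperparameters here are illustrative assumptions, not the exact implementation in train.py; the real model may differ in backbone choice and masking details.

```python
# Minimal sketch of the three-component SLT pipeline (hypothetical names,
# not the exact implementation in this repository).
import torch
import torch.nn as nn
from torchvision.models.video import r2plus1d_18


class SLTTransformer(nn.Module):
    def __init__(self, gloss_vocab, text_vocab, d_model=512, nhead=8, num_layers=3):
        super().__init__()
        # 1) Lightweight video encoder: R(2+1)D backbone with the classifier
        #    removed, so it returns a 512-d feature vector per clip.
        backbone = r2plus1d_18(pretrained=False)
        backbone.fc = nn.Identity()
        self.video_encoder = backbone
        self.proj = nn.Linear(512, d_model)

        # 2) Transformer encoder: contextualizes clip features and predicts glosses.
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        self.gloss_head = nn.Linear(d_model, gloss_vocab)

        # 3) Sign language decoder: attends over the encoded gloss features
        #    and generates spoken-language tokens.
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.text_embed = nn.Embedding(text_vocab, d_model)
        self.text_head = nn.Linear(d_model, text_vocab)

    def forward(self, clips, text_tokens):
        # clips: (T, B, 3, frames, H, W) -- T short clips per video.
        # text_tokens: (L, B) -- right-shifted target sentence.
        feats = torch.stack([self.proj(self.video_encoder(c)) for c in clips])  # (T, B, d)
        memory = self.encoder(feats)                    # contextualized clip features
        gloss_logits = self.gloss_head(memory)          # (T, B, gloss_vocab)

        L = text_tokens.size(0)
        causal = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
        tgt = self.text_embed(text_tokens)              # (L, B, d)
        out = self.decoder(tgt, memory, tgt_mask=causal.to(tgt.device))
        return gloss_logits, self.text_head(out)        # translation logits: (L, B, text_vocab)
```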

Requirements

  • CUDA >= 11.3
  • Python >= 3.7
  • PyTorch >= 1.8.1
    • pytorch-model-summary
  • torchvision >= 0.9.1
  • tqdm

To train the model, download the dataset to your working directory and run python train.py.
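For orientation, one training step might look roughly like the sketch below. It reuses the hypothetical SLTTransformer class from the earlier snippet and uses random tensors in place of real PHOENIX batches; see train.py for the actual data pipeline, hyperparameters, and losses.

```python
# Rough sketch of one training step with joint gloss + translation losses
# (dummy tensors stand in for real PHOENIX batches; the gloss branch is
# simplified to per-clip cross-entropy to keep the sketch short).
import torch
import torch.nn as nn

model = SLTTransformer(gloss_vocab=1200, text_vocab=3000)   # class from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss(ignore_index=0)             # assume index 0 is padding

clips = torch.randn(8, 2, 3, 16, 112, 112)          # (T, B, C, frames, H, W)
gloss_targets = torch.randint(1, 1200, (8, 2))      # one gloss label per clip
text_in = torch.randint(1, 3000, (12, 2))           # decoder input (shifted right)
text_out = torch.randint(1, 3000, (12, 2))          # decoder target tokens

gloss_logits, text_logits = model(clips, text_in)
loss = (criterion(gloss_logits.reshape(-1, 1200), gloss_targets.reshape(-1))
        + criterion(text_logits.reshape(-1, 3000), text_out.reshape(-1)))

optimizer.zero_grad()
loss.backward()
optimizer.step()
```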

