Bug with T5 11b inference #43

Open
@ZeyiLiao

Description

How to reproduce

from transformers import AutoModelForCausalLM
from parallelformers import parallelize

model = AutoModelForCausalLM.from_pretrained('EleutherAI/gpt-neo-2.7B')
parallelize(model, num_gpus=4, fp16=False)

Environment

  • OS: Ubuntu 18.04.4 LTS (Bionic Beaver)
  • Python version: 3.7.3
  • Transformers version: 4.22.1
  • Using Docker: No
  • Misc.: N/A

Metadata

Assignees: No one assigned
Labels: bug (Something isn't working)
