
Shap-E

This is the official code and model release for Shap-E: Generating Conditional 3D Implicit Functions.

  • See Usage for guidance on how to use this repository.
  • See Samples for examples of what our text-conditional model can generate.

Samples

Here are some highlighted samples from our text-conditional model. For random samples on selected prompts, see samples.md.

[Sample grid: renders for the prompts "A chair that looks like an avocado", "An airplane that looks like a banana", "A spaceship", "A birthday cupcake", "A chair that looks like a tree", "A green boot", "A penguin", "Ube ice cream cone", and "A bowl of vegetables".]

Usage

Install with pip install -e . from the root of this repository.

To get started with examples, see the following notebooks:

  • sample_text_to_3d.ipynb - sample a 3D model conditioned on a text prompt (a condensed sketch of this flow appears after this list).
  • sample_image_to_3d.ipynb - sample a 3D model conditioned on a synthetic view image. For best results, remove the background from the input image.
  • encode_model.ipynb - load a 3D model or a trimesh, create a batch of multiview renders and a point cloud, encode them into a latent, and render it back. For this to work, install Blender version 3.3.1 or higher, and set the environment variable BLENDER_PATH to the path of the Blender executable.
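
For orientation, the following condensed sketch mirrors the text-to-3D flow in sample_text_to_3d.ipynb: load the models, sample latents conditioned on a prompt, and render each latent to images. The model names ("text300M", "transmitter"), sampler arguments, and rendering helpers are taken from that notebook; the output filenames are illustrative, and the notebook remains the authoritative version.

    import torch

    from shap_e.diffusion.gaussian_diffusion import diffusion_from_config
    from shap_e.diffusion.sample import sample_latents
    from shap_e.models.download import load_config, load_model
    from shap_e.util.notebooks import create_pan_cameras, decode_latent_images

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Load the latent decoder ("transmitter") and the text-conditional diffusion model.
    xm = load_model("transmitter", device=device)
    model = load_model("text300M", device=device)
    diffusion = diffusion_from_config(load_config("diffusion"))

    # Sample a batch of latents conditioned on a text prompt.
    batch_size = 4
    prompt = "a chair that looks like an avocado"
    latents = sample_latents(
        batch_size=batch_size,
        model=model,
        diffusion=diffusion,
        guidance_scale=15.0,
        model_kwargs=dict(texts=[prompt] * batch_size),
        progress=True,
        clip_denoised=True,
        use_fp16=True,
        use_karras=True,
        karras_steps=64,
        sigma_min=1e-3,
        sigma_max=160,
        s_churn=0,
    )

    # Render each latent from a ring of pan cameras; decode_latent_images
    # returns a list of PIL images, saved here as a turntable GIF.
    cameras = create_pan_cameras(64, device)
    for i, latent in enumerate(latents):
        images = decode_latent_images(xm, latent, cameras, rendering_mode="nerf")
        images[0].save(f"sample_{i}.gif", save_all=True,
                       append_images=images[1:], duration=100, loop=0)

To export geometry instead of images, the same notebook also demonstrates decode_latent_mesh from shap_e.util.notebooks, whose result can be written out as a .ply or .obj file.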
