
Add lightweight PDB (Protein Data Bank) file support #7926

Open
behroozazarkhalili wants to merge 4 commits into huggingface:main from behroozazarkhalili:feat/pdb-support

Conversation


@behroozazarkhalili behroozazarkhalili commented Dec 31, 2025

Summary

This PR adds support for loading PDB (Protein Data Bank) files with load_dataset(), following the ImageFolder pattern where one row = one structure.

Based on feedback from @lhoestq in #7930, this approach makes datasets more practical for ML workflows:

  • Each row is independent, enabling train/test splits and shuffling
  • Easy to add labels (folder-based) and metadata (metadata.jsonl)
  • Compatible with Dataset Viewer (one 3D render per row)

Architecture

Uses FolderBasedBuilder pattern (like ImageFolder, AudioFolder):

```python
class PdbFolder(FolderBasedBuilder):
    BASE_FEATURE = ProteinStructure
    BASE_COLUMN_NAME = "structure"
    EXTENSIONS = [".pdb", ".ent"]
```

New ProteinStructure Feature Type

```python
# Arrow schema for lazy loading
pa.struct({"bytes": pa.binary(), "path": pa.string()})

# Decoded: returns structure file content as string
dataset = load_dataset("pdb", data_dir="structures/")
print(dataset[0]["structure"])  # Full PDB file content
```
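Since decoding returns the raw file text, downstream work is plain string handling. A minimal sketch, assuming the decoded value looks like standard PDB text (the two-line sample below is made up; the coordinate slices follow the public PDB fixed-width column layout):

```python
# Made-up two-line PDB sample standing in for dataset[0]["structure"].
pdb_text = (
    "HEADER    HYDROLASE               01-JAN-25   1ABC\n"
    "ATOM      1  N   MET A   1      11.104  13.207   2.100  1.00 20.00           N\n"
)

# Count ATOM records and pull out coordinates (spec cols 31-38, 39-46, 47-54,
# i.e. 0-based Python slices [30:38], [38:46], [46:54]).
atom_lines = [l for l in pdb_text.splitlines() if l.startswith("ATOM")]
coords = [(float(l[30:38]), float(l[38:46]), float(l[46:54])) for l in atom_lines]
print(len(atom_lines), coords[0])  # 1 (11.104, 13.207, 2.1)
```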

Supported Extensions

.pdb, .ent

Usage

```python
from datasets import load_dataset

# Load from directory
dataset = load_dataset("pdb", data_dir="protein_structures/")

# Load with folder-based labels
# structures/
#   enzymes/
#     1abc.pdb
#   receptors/
#     2def.pdb
dataset = load_dataset("pdb", data_dir="structures/")
print(dataset[0])  # {"structure": "HEADER...", "label": "enzymes"}

# Load with metadata
# structures/
#   1abc.pdb
#   metadata.jsonl  # {"file_name": "1abc.pdb", "resolution": 2.5}
dataset = load_dataset("pdb", data_dir="structures/")
print(dataset[0])  # {"structure": "HEADER...", "resolution": 2.5}

# Drop labels or metadata
dataset = load_dataset("pdb", data_dir="structures/", drop_labels=True)
dataset = load_dataset("pdb", data_dir="structures/", drop_metadata=True)
```

Test Results

All 28 PDB tests + 15 ProteinStructure feature tests pass.

Related PRs

cc @lhoestq @georgia-hf

- Add zero-dependency pure Python parser for PDB format
- Support ATOM and HETATM record types with configurable filtering
- Handle fixed-width column parsing per official PDB specification
- Support gzip, bzip2, and xz compression via magic bytes detection
- Support .pdb and .ent file extensions
- Add comprehensive test suite with 24 tests
- Add documentation to loading.mdx
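The magic-bytes compression detection mentioned above can be sketched roughly as follows; `read_structure_bytes` is a hypothetical name for illustration, not the PR's actual helper:

```python
import bz2
import gzip
import lzma

# Hypothetical sketch of magic-bytes sniffing; the PR's real helper may differ.
_MAGIC = {
    b"\x1f\x8b": gzip.decompress,      # gzip
    b"BZh": bz2.decompress,            # bzip2
    b"\xfd7zXZ\x00": lzma.decompress,  # xz
}

def read_structure_bytes(raw: bytes) -> bytes:
    """Return decompressed content if a known magic prefix matches, else raw."""
    for magic, decompress in _MAGIC.items():
        if raw.startswith(magic):
            return decompress(raw)
    return raw

# Example: round-trip a gzip-compressed PDB header line.
data = gzip.compress(b"HEADER    EXAMPLE\n")
print(read_structure_bytes(data))  # b'HEADER    EXAMPLE\n'
```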

Columns include: atom_serial, atom_name, residue_name, chain_id,
residue_seq, x, y, z, occupancy, temp_factor, element, and more.
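A sketch of what fixed-width parsing of those columns can look like; the column ranges come from the published PDB format specification, while the function and table names here are illustrative rather than the PR's actual code:

```python
# Illustrative fixed-width parse of one ATOM/HETATM record, following the
# published PDB column layout (1-based spec columns noted; slices are 0-based).
FIELDS = {
    "atom_serial":  (slice(6, 11), int),     # cols  7-11
    "atom_name":    (slice(12, 16), str),    # cols 13-16
    "residue_name": (slice(17, 20), str),    # cols 18-20
    "chain_id":     (slice(21, 22), str),    # col  22
    "residue_seq":  (slice(22, 26), int),    # cols 23-26
    "x":            (slice(30, 38), float),  # cols 31-38
    "y":            (slice(38, 46), float),  # cols 39-46
    "z":            (slice(46, 54), float),  # cols 47-54
    "occupancy":    (slice(54, 60), float),  # cols 55-60
    "temp_factor":  (slice(60, 66), float),  # cols 61-66
    "element":      (slice(76, 78), str),    # cols 77-78
}

def parse_atom_record(line: str) -> dict:
    return {name: cast(line[sl].strip()) for name, (sl, cast) in FIELDS.items()}

line = "ATOM      1  N   MET A   1      11.104  13.207   2.100  1.00 20.00           N"
rec = parse_atom_record(line)
print(rec["residue_name"], rec["x"], rec["element"])  # MET 11.104 N
```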

Part of the bioinformatics file format support series.

Based on reviewer feedback from @lhoestq, refactor the PDB loader to follow
the ImageFolder pattern where each row contains one complete protein structure
file, rather than one row per atom.

Changes:
- Add ProteinStructure feature type for lazy loading structure files
- Refactor PdbFolder to use FolderBasedBuilder with folder/metadata support
- Support automatic label inference from directory names
- Support metadata files (CSV/JSONL) for additional annotations
- Simplify from a ~400-line atom parser to a ~55-line folder-based builder

This approach matches how researchers typically work with protein structures:
- One structure = one data point
- Supports downstream tools (BioPython, PyMOL, MDAnalysis)
- Consistent with ImageFolder/AudioFolder/VideoFolder patterns

Test: 43 tests passing (15 feature + 28 builder tests)

Fix bug in FolderBasedBuilder._generate_examples where drop_metadata=True
would fail with IndexError when metadata files were included in the files
list but skipped due to extension filtering.

Root cause: enumerate(files) created gaps in shard_idx when files were
skipped, causing builder.py to fail when indexing original_shard_lengths.

Solution: Use separate valid_shard_idx counter that only increments when
samples are actually yielded, ensuring contiguous shard IDs.
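The counter fix can be illustrated with a simplified generator; the names below are made up for the sketch and are not the actual datasets internals:

```python
# Simplified sketch of the shard-indexing fix. Skipped files must not leave
# gaps in the shard index, so a separate counter advances only when a file
# is actually yielded, instead of using enumerate(files).
def generate_examples(files, extensions=(".pdb", ".ent")):
    valid_shard_idx = 0
    for file in files:
        if not file.endswith(extensions):
            continue  # e.g. metadata.jsonl, skipped by extension filtering
        yield valid_shard_idx, {"structure": file}
        valid_shard_idx += 1

shards = [idx for idx, _ in generate_examples(["a.pdb", "metadata.jsonl", "b.ent"])]
print(shards)  # [0, 1] -- contiguous despite the skipped metadata file
```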

- Sort imports alphabetically in features.py
- Fix line length in protein_structure.py error messages
- Format function call in test_pdb.py
