LoRA and DoRA PEFT support for Fine-Tuning TimesFM #104

Merged
merged 24 commits on Aug 6, 2024

Commits (24)
71d9802
add parameter efficient finetuning pipeline
tanmayshishodia Jul 16, 2024
ee7462b
Merge branch 'master' into feature/lora
tanmayshishodia Jul 16, 2024
b6ebd80
revert test env name
tanmayshishodia Jul 16, 2024
77b004b
Merge branch 'feature/lora' of github.com:tanmayshishodia/timesfm int…
tanmayshishodia Jul 16, 2024
461c2cd
update checkpoint dir name
tanmayshishodia Jul 16, 2024
c34896b
update adapter init file docstring
tanmayshishodia Jul 16, 2024
c8aaf31
gitgnore all pycache dirs
tanmayshishodia Jul 16, 2024
845661d
update usage tutorial
tanmayshishodia Jul 16, 2024
e3fb45c
gitignore jax egg info
tanmayshishodia Jul 16, 2024
2174a8c
add src init file for poetry package
tanmayshishodia Jul 16, 2024
39665af
change import style
tanmayshishodia Jul 16, 2024
a59979d
add example dora.sh file
tanmayshishodia Jul 16, 2024
5ae8c7d
update lora/dora intermediate var names
tanmayshishodia Jul 17, 2024
d4d4afd
add pytest framework
tanmayshishodia Jul 17, 2024
5901805
add bash scripts for running diff FT strategies
tanmayshishodia Jul 17, 2024
a908448
add docstrings in adapter utils
tanmayshishodia Jul 17, 2024
18da73a
remove helper and fix early stopping logic
tanmayshishodia Jul 18, 2024
807ddfd
add poetry packages
tanmayshishodia Aug 3, 2024
f15daba
Merge branch 'master' into feature/lora
tanmayshishodia Aug 3, 2024
d72ff83
keep only a single bash script
tanmayshishodia Aug 4, 2024
6517590
update poetry lock
tanmayshishodia Aug 4, 2024
0ccc10f
update pytest poetry
tanmayshishodia Aug 4, 2024
e5be6bd
add new line EOF
tanmayshishodia Aug 4, 2024
55f71de
Create PEFT README.md
tanmayshishodia Aug 4, 2024
7 changes: 6 additions & 1 deletion .gitignore
@@ -1,3 +1,8 @@
 .venv/
 dist/
-**__pycache__/** */
+__pycache__/
+checkpoints/
+wandb/
+datasets/
+results/
+timesfm_jax.egg-info/
3 changes: 3 additions & 0 deletions environment.yml
@@ -16,3 +16,6 @@ dependencies:
 - jax[cuda12]==0.4.26
 - einshape
 - scikit-learn
+- typer
+- wandb
+- pytest
3 changes: 3 additions & 0 deletions environment_cpu.yml
@@ -16,3 +16,6 @@ dependencies:
 - jax[cpu]==0.4.26
 - einshape
 - scikit-learn
+- typer
+- wandb
+- pytest
42 changes: 42 additions & 0 deletions peft/README.md
@@ -0,0 +1,42 @@
# Fine-Tuning Pipeline

This folder contains a generic fine-tuning pipeline that supports full fine-tuning as well as several parameter-efficient fine-tuning (PEFT) strategies for TimesFM.

## Features

- **Supported Fine-Tuning Strategies**:
- **Full Fine-Tuning**: Adjusts all parameters of the model during training.
- **[Linear Probing](https://arxiv.org/abs/2302.11939)**: Fine-tunes only the residual blocks and the embedding layer, leaving other parameters unchanged.
- **[LoRA (Low-Rank Adaptation)](https://arxiv.org/abs/2106.09685)**: A memory-efficient method that freezes the pre-trained weights and trains only a small number of additional parameters in the form of low-rank update matrices.
- **[DoRA (Weight-Decomposed Low-Rank Adaptation)](https://arxiv.org/abs/2402.09353v4)**: An extension of LoRA that decomposes the pre-trained weights into magnitude and direction components. It uses LoRA for the directional adaptation, enhancing learning capacity and stability without additional inference overhead.
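
The adapter strategies above boil down to two small weight-update rules. The following is a minimal, illustrative sketch of the LoRA and DoRA updates in JAX; it uses toy shapes and assumed names/hyperparameters and does not reproduce the adapter modules added in this PR.

```python
import jax
import jax.numpy as jnp


def lora_delta(lora_a, lora_b, alpha, rank):
    # LoRA: the frozen weight W receives a trainable low-rank update
    # delta_W = (alpha / rank) * B @ A, with B (d_in x rank), A (rank x d_out).
    return (alpha / rank) * (lora_b @ lora_a)


def lora_forward(x, w_frozen, lora_a, lora_b, alpha=16.0, rank=8):
    # Effective weight = frozen pre-trained weight + low-rank update.
    return x @ (w_frozen + lora_delta(lora_a, lora_b, alpha, rank))


def dora_forward(x, w_frozen, lora_a, lora_b, magnitude, alpha=16.0, rank=8, eps=1e-6):
    # DoRA: decompose the adapted weight into direction and magnitude.
    # The LoRA update adapts the direction; a trainable vector `magnitude`
    # rescales each output column.
    directional = w_frozen + lora_delta(lora_a, lora_b, alpha, rank)
    col_norm = jnp.linalg.norm(directional, axis=0, keepdims=True) + eps
    return x @ (magnitude * directional / col_norm)


# Toy example: d_in=4, d_out=6, rank=2 (all values are illustrative).
key = jax.random.PRNGKey(0)
k_w, k_a, k_x = jax.random.split(key, 3)
w = jax.random.normal(k_w, (4, 6))                     # frozen pre-trained weight
lora_a = 0.01 * jax.random.normal(k_a, (2, 6))         # A: small random init
lora_b = jnp.zeros((4, 2))                             # B: zero init, so delta_W starts at 0
magnitude = jnp.linalg.norm(w, axis=0, keepdims=True)  # DoRA magnitude init = column norms of W
x = jax.random.normal(k_x, (3, 4))

print(lora_forward(x, w, lora_a, lora_b, rank=2).shape)             # (3, 6)
print(dora_forward(x, w, lora_a, lora_b, magnitude, rank=2).shape)  # (3, 6)
```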

## Usage
### Fine-Tuning Script
The provided `finetune.py` script allows you to fine-tune a model with specific configurations. You can customize various parameters to suit your dataset and desired fine-tuning strategy.

Example Usage:

```zsh
source finetune.sh
```
The `finetune.sh` script runs `finetune.py` with a predefined set of hyperparameters for the model. You can adjust the parameters in the script as needed.

### Available Options
Run the script with the `--help` flag to see a full list of available options and their descriptions:
```zsh
python3 finetune.py --help
```
### Script Configuration
You can modify the following key parameters directly in the `finetune.sh` script:

- **Fine-Tuning Strategy**: Toggle between full fine-tuning, LoRA (`--use-lora`), DoRA (`--use-dora`), or Linear Probing (`--use-linear-probing`).
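
Since the PR adds `typer` as a dependency, the CLI is presumably built with it. Below is a hypothetical sketch of how mutually exclusive strategy flags like these could be exposed; it is not the `finetune.py` shipped in this PR, and every name other than the three documented flags is an assumption.

```python
# Hypothetical typer-based CLI; NOT the finetune.py from this PR.
# Only --use-lora / --use-dora / --use-linear-probing come from the README;
# everything else here is assumed for illustration.
import typer

app = typer.Typer()


@app.command()
def finetune(
    use_lora: bool = typer.Option(False, "--use-lora", help="Attach LoRA adapters."),
    use_dora: bool = typer.Option(False, "--use-dora", help="Attach DoRA adapters."),
    use_linear_probing: bool = typer.Option(
        False, "--use-linear-probing",
        help="Train only the residual blocks and the embedding layer."),
    lora_rank: int = typer.Option(8, help="Low-rank dimension (assumed name)."),
):
    """Dispatch to a single fine-tuning strategy; default is full fine-tuning."""
    if sum([use_lora, use_dora, use_linear_probing]) > 1:
        raise typer.BadParameter("Pick at most one fine-tuning strategy.")
    strategy = (
        "lora" if use_lora else
        "dora" if use_dora else
        "linear-probing" if use_linear_probing else
        "full"
    )
    typer.echo(f"Running {strategy} fine-tuning (rank={lora_rank}) ...")
    # ... build the TimesFM model, wrap layers with adapters, and train here.


if __name__ == "__main__":
    app()
```

Under such a layout, `python3 finetune.py --use-dora` would select the DoRA strategy, mirroring what `finetune.sh` wraps with preset hyperparameters.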

### Performance Comparison
The figure below compares the performance of LoRA/DoRA against Linear Probing; the evaluation setup and color coding are listed after the figure.

<img width="528" alt="image" src="https://github.com/user-attachments/assets/6c9f820b-5865-4821-8014-c346b9d632a5">

- Training data split: 60% train, 20% validation, 20% test.
- Benchmark: `context_len=128`, `horizon_len=96`.
- Fine-tuning: `context_len=128`, `horizon_len=128`.
- Black: Best result.
- Blue: Second best result.