deps: Update all non-major dependencies #41
Closed
This PR contains the following updates:
accelerate ==0.20.3 -> ==0.21.0
==3.8.4 -> ==3.8.5
==1.12.0 -> ==1.14.0
==4.0.2 -> ==4.0.3
==0.2.2 -> ==0.3.2
~=0.2.2 -> ~=0.3.2
==2023.5.7 -> ==2023.7.22
==5.1.0 -> ==5.2.0
==3.1.0 -> ==3.2.0
==0.3.26 -> ==0.4.5
~=0.3.26 -> ~=0.4.5
==8.1.3 -> ==8.1.6
==0.6.5 -> ==0.6.8
==41.0.1 -> ==41.0.3
==0.5.9 -> ==0.5.14
==2.13.1 -> ==2.14.4
==0.3.6 -> ==0.3.7
==0.99.1 -> ==0.101.0
==1.3.3 -> ==1.4.0
==3.1.31 -> ==3.1.32
==0.16.3 -> ==0.17.3
==0.5.0 -> ==0.6.0
==0.23.3 -> ==0.24.1
==0.16.2 -> ==0.16.4
==6.7.0 -> ==6.8.0
==1.3.1 -> ==1.3.2
==4.18.0 -> ==4.19.0
==2023.6.1 -> ==2023.7.1
==0.0.225 -> ==0.0.262
~=0.0.225 -> ~=0.0.262
==3.4.3 -> ==3.4.4
==3.19.0 -> ==3.20.1
==0.70.14 -> ==0.70.15
==2.8.4 -> ==2.8.5
==1.23.5 -> ==1.25.2
==7.3.1 -> ==7.4.0
==3.20.0 -> ==3.20.3
==1.10.11 -> ==1.10.12
==2.15.1 -> ==2.16.1
3.10.0 -> 3.11.4
==6.0 -> ==6.0.1
==0.29.1 -> ==0.30.2
==2023.6.3 -> ==2023.8.8
==13.0.1 -> ==13.5.2
==0.8.7 -> ==0.9.2
==0.3.1 -> ==0.3.2
==2.0.18 -> ==2.0.19
==0.27.0 -> ==0.31.0
==1.24.0 -> ==1.25.0
==3.1.0 -> ==3.2.0
==4.65.0 -> ==4.66.1
==4.30.2 -> ==4.31.0
~=4.30.2 -> ~=4.31.0
==0.7.0 -> ==0.9.0
==0.7.12 -> ==0.9.2
~=0.7.12 -> ~=0.9.2
==2.0.3 -> ==2.0.4
==0.22.0 -> ==0.23.2
==0.20.0 -> ==0.21.2
==1.14.1 -> ==1.15.0
==3.2.0 -> ==3.3.0
==3.15.0 -> ==3.16.2
Release Notes
huggingface/accelerate (accelerate)
v0.21.0: Model quantization and NPUs
Model quantization with bitsandbytes
You can now quantize any model (not just Transformer models) using Accelerate. This mainly benefits models with many linear layers. See the documentation for more information!
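As a rough illustration of the new capability, the sketch below quantizes a plain `nn.Sequential` model to 8-bit. It assumes the `BnbQuantizationConfig` and `load_and_quantize_model` helpers described in the accelerate quantization docs for this release, plus an installed `bitsandbytes` package and a CUDA device; treat the exact names and arguments as an assumption rather than part of this changelog.

```python
# Sketch only: quantizing a non-Transformer PyTorch model with Accelerate's
# bitsandbytes utilities. Helper names/arguments are assumed from the
# accelerate 0.21.0 quantization docs; requires bitsandbytes and a CUDA GPU.
import torch
from torch import nn
from accelerate.utils import BnbQuantizationConfig, load_and_quantize_model

# Models with many nn.Linear layers benefit most from int8/fp4 quantization.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 4096),
    nn.ReLU(),
    nn.Linear(4096, 1024),
)

bnb_config = BnbQuantizationConfig(load_in_8bit=True)  # or load_in_4bit=True

# Replaces eligible Linear layers with 8-bit bitsandbytes layers and places
# the model on the available device(s).
quantized_model = load_and_quantize_model(
    model,
    bnb_quantization_config=bnb_config,
    device_map="auto",
)

print(quantized_model)
```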
Support for Ascend NPUs
Accelerate now supports Ascend NPUs.
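Because Accelerate's device handling is hardware-agnostic, existing scripts should not need NPU-specific changes. The minimal sketch below assumes (based on this release note) that with `torch_npu` installed the `Accelerator` selects the Ascend NPU automatically; on other machines the same code falls back to CUDA or CPU.

```python
# Minimal device-agnostic sketch; NPU auto-detection is an assumption drawn
# from the 0.21.0 release note, not a documented guarantee here.
import torch
from torch import nn
from accelerate import Accelerator

accelerator = Accelerator()
print(f"Selected device: {accelerator.device}")  # e.g. npu:0, cuda:0 or cpu

model = nn.Linear(8, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
model, optimizer = accelerator.prepare(model, optimizer)

x = torch.randn(4, 8, device=accelerator.device)
loss = model(x).sum()
accelerator.backward(loss)
optimizer.step()
```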
What's new?
Accelerate now requires Python 3.8+ and PyTorch 1.10+:
🚨🚨🚨 Spring cleaning: Python 3.8 🚨🚨🚨 by @muellerzr in #1661
🚨🚨🚨 Spring cleaning: PyTorch 1.10 🚨🚨🚨 by @muellerzr in #1662
[doc build] Use secrets by @mishig25 in #1551
Update launch.mdx by @LiamSwayne in #1553
Avoid double wrapping of all accelerate.prepare objects by @muellerzr in #1555
Update README.md by @LiamSwayne in #1556
Fix load_state_dict when there is one device and disk by @sgugger in #1557
Fix tests not being ran on multi-GPU nightly by @muellerzr in #1558
fix the typo when setting the "_accelerator_prepared" attribute by @Yura52 in #1560
[core] Fix possibility to pass NoneType objects in prepare by @younesbelkada in #1561
Reset dataloader end_of_datalaoder at each iter by @sgugger in #1562
Update big_modeling.mdx by @LiamSwayne in #1564
[bnb] Fix failing int8 tests by @younesbelkada in #1567
Update gradient sync docs to reflect importance of optimizer.step() by @dleve123 in #1565
Update mixed precision integrations in README by @sgugger in #1569
Raise error instead of warn by @muellerzr in #1568
Introduce listify, fix tensorboard silently failing by @muellerzr in #1570
Check for bak and expand docs on directory structure by @muellerzr in #1571
Perminant solution by @muellerzr in #1577
fix the bug in xpu by @mingxiaoh in #1508
Make sure that we only set is_accelerator_prepared on items accelerate actually prepares by @muellerzr in #1578
Expand prepare() doc by @muellerzr in #1580
Get Torch version using importlib instead of pkg_resources by @catwell in #1585
improve oob performance when use mpirun to start DDP finetune without accelerate launch by @sywangyi in #1575
Update training_tpu.mdx by @LiamSwayne in #1582
Return false if CUDA available by @muellerzr in #1581
fix logger level by @caopulan in #1579
Fix test by @muellerzr in #1586
Update checkpoint.mdx by @LiamSwayne in #1587
FSDP updates by @pacman100 in #1576
Update modeling.py by @ain-soph in #1595
Integration tests by @muellerzr in #1593
Add triggers for CI workflow by @muellerzr in #1597
Remove asking xpu plugin for non xpu devices by @abhilash1910 in #1594
Remove GPU safetensors env variable by @sgugger in #1603
reset end_of_dataloader for dataloader_dispatcher by @megavaz in #1609
fix for arc gpus by @abhilash1910 in #1615
Ignore low_zero option when only device is available by @sgugger in #1617
Fix failing multinode tests by @muellerzr in #1616
Doc to md by @sgugger in #1618
Fix tb issue by @muellerzr in #1623
Fix workflow by @muellerzr in #1625
Fix transformers sync bug with accumulate by @muellerzr in #1624
fixes offload dtype by @SunMarc in #1631
fix: Megatron is not installed. please build it from source. by @yuanwu2017 in #1636
deepspeed z2/z1 state_dict bloating fix by @pacman100 in #1638
Swap disable rich by @muellerzr in #1640
fix autocasting bug by @pacman100 in #1637
fix modeling low zero by @abhilash1910 in #1634
Add skorch to runners by @muellerzr in #1646
add save model by @SunMarc in #1641
Change dispatch_model when we have only one device by @SunMarc in #1648
Doc save model by @SunMarc in #1650
Fix device_map by @SunMarc in #1651
Check for port usage before launch by @muellerzr in #1656
[BigModeling] Add missing check for quantized models by @younesbelkada in #1652
Bump integration by @muellerzr in #1658
TIL by @muellerzr in #1657
docker cpu py version by @muellerzr in #1659
[BigModeling] Final fix for dispatch int8 and fp4 models by @younesbelkada in #1660
remove safetensor dep on shard_checkpoint by @SunMarc in #1664
change the import place to avoid import error by @pacman100 in #1653
Update broken Runhouse link in examples/README.md by @dongreenberg in #1668
Bnb quantization by @SunMarc in #1626
replace save funct in doc by @SunMarc in #1672
Doc big model inference by @SunMarc in #1670
Add docs for saving Transformers models by @deppen8 in #1671
fix bnb tests by @SunMarc in #1679
Fix workflow CI by @muellerzr in #1690
remove duplicate class by @SunMarc in #1691
update readme in examples by @statelesshz in #1678
Fix nightly tests by @muellerzr in #1696
Fixup docs by @muellerzr in #1697
Improve quality errors by @muellerzr in #1698
Move mixed precision wrapping ahead of DDP/FSDP wrapping by @ChenWu98 in #1682
Add offload for 8-bit model by @SunMarc in #1699
Deepcopy on Accelerator to return self by @muellerzr in #1694
Update tracking.md by @stevhliu in #1702
Skip tests when bnb isn't available by @muellerzr in #1706
Fix launcher validation by @abhilash1910 in #1705
Fixes for issue #1683: failed to run accelerate conf
Configuration
📅 Schedule: Branch creation - "after 12pm on Thursday" (UTC), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
This PR has been generated by Mend Renovate. View repository job log here.