
Commit 59d7384

Merge branch 'master' into feature/comet-logger-update
2 parents 8108320 + 06a8d5b commit 59d7384

File tree: 13 files changed, +18 −18 lines


Diff for: .github/workflows/call-clear-cache.yml

+4 −4

@@ -23,18 +23,18 @@ on:
 jobs:
   cron-clear:
     if: github.event_name == 'schedule' || github.event_name == 'pull_request'
-    uses: Lightning-AI/utilities/.github/workflows/[email protected].7
+    uses: Lightning-AI/utilities/.github/workflows/[email protected].8
     with:
-      scripts-ref: v0.11.7
+      scripts-ref: v0.11.8
       dry-run: ${{ github.event_name == 'pull_request' }}
       pattern: "latest|docs"
       age-days: 7

   direct-clear:
     if: github.event_name == 'workflow_dispatch' || github.event_name == 'pull_request'
-    uses: Lightning-AI/utilities/.github/workflows/[email protected].7
+    uses: Lightning-AI/utilities/.github/workflows/[email protected].8
     with:
-      scripts-ref: v0.11.7
+      scripts-ref: v0.11.8
       dry-run: ${{ github.event_name == 'pull_request' }}
       pattern: ${{ inputs.pattern || 'pypi_wheels' }} # setting str in case of PR / debugging
       age-days: ${{ fromJSON(inputs.age-days) || 0 }} # setting 0 in case of PR / debugging

Diff for: .github/workflows/ci-check-md-links.yml

+1 −1

@@ -14,7 +14,7 @@ on:

 jobs:
   check-md-links:
-    uses: Lightning-AI/utilities/.github/workflows/[email protected].7
+    uses: Lightning-AI/utilities/.github/workflows/[email protected].8
     with:
       config-file: ".github/markdown-links-config.json"
       base-branch: "master"

Diff for: .github/workflows/ci-schema.yml

+1 −1

@@ -8,7 +8,7 @@ on:

 jobs:
   check:
-    uses: Lightning-AI/utilities/.github/workflows/[email protected].7
+    uses: Lightning-AI/utilities/.github/workflows/[email protected].8
     with:
       # skip azure due to the wrong schema file by MSFT
       # https://github.com/Lightning-AI/lightning-flash/pull/1455#issuecomment-1244793607

Diff for: .pre-commit-config.yaml

+1 −1

@@ -58,7 +58,7 @@ repos:
       #args: ["--write-changes"] # uncomment if you want to get automatic fixing

   - repo: https://github.com/PyCQA/docformatter
-    rev: v1.7.5
+    rev: 06907d0267368b49b9180eed423fae5697c1e909 # todo: fix for docformatter after last 1.7.5
     hooks:
       - id: docformatter
         additional_dependencies: [tomli]

Diff for: _notebooks (submodule pointer update; no file contents shown)

Diff for: docs/source-pytorch/accelerators/tpu_advanced.rst

+2 −2

@@ -52,7 +52,7 @@ Example:
     model = WeightSharingModule()
     trainer = Trainer(max_epochs=1, accelerator="tpu")

-See `XLA Documentation <https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#xla-tensor-quirks>`_
+See `XLA Documentation <https://github.com/pytorch/xla/blob/v2.5.0/TROUBLESHOOTING.md#xla-tensor-quirks>`_

 ----

@@ -61,4 +61,4 @@ XLA
 XLA is the library that interfaces PyTorch with the TPUs.
 For more information check out `XLA <https://github.com/pytorch/xla>`_.

-Guide for `troubleshooting XLA <https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md>`_
+Guide for `troubleshooting XLA <https://github.com/pytorch/xla/blob/v2.5.0/TROUBLESHOOTING.md>`_
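For context on the "XLA tensor quirks" section these docs now pin to v2.5.0: it concerns weight sharing surviving the move to the TPU. A minimal hedged sketch of the pattern the surrounding example refers to (this `WeightSharingModule` is reconstructed for illustration, not copied from the docs):

```python
import torch.nn as nn
from lightning.pytorch import LightningModule, Trainer

class WeightSharingModule(LightningModule):
    """Illustrative stand-in for the module named in the docs example."""

    def __init__(self):
        super().__init__()
        self.layer_1 = nn.Linear(32, 10, bias=False)
        self.layer_2 = nn.Linear(10, 32, bias=False)
        self.layer_3 = nn.Linear(32, 10, bias=False)
        # XLA quirk: share the Parameter object itself so the tie is preserved
        # when the tensors are materialized on the XLA (TPU) device.
        self.layer_3.weight = self.layer_1.weight

    def forward(self, x):
        return self.layer_3(self.layer_2(self.layer_1(x)))

model = WeightSharingModule()
trainer = Trainer(max_epochs=1, accelerator="tpu")
```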

Diff for: docs/source-pytorch/accelerators/tpu_basic.rst

+2 −2

@@ -108,7 +108,7 @@ There are cases in which training on TPUs is slower when compared with GPUs, for
 - XLA Graph compilation during the initial steps `Reference <https://github.com/pytorch/xla/issues/2383#issuecomment-666519998>`_
 - Some tensor ops are not fully supported on TPU, or not supported at all. These operations will be performed on CPU (context switch).

-The official PyTorch XLA `performance guide <https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#known-performance-caveats>`_
+The official PyTorch XLA `performance guide <https://github.com/pytorch/xla/blob/v2.5.0/TROUBLESHOOTING.md#known-performance-caveats>`_
 has more detailed information on how PyTorch code can be optimized for TPU. In particular, the
-`metrics report <https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#get-a-metrics-report>`_ allows
+`metrics report <https://github.com/pytorch/xla/blob/v2.5.0/TROUBLESHOOTING.md#get-a-metrics-report>`_ allows
 one to identify operations that lead to context switching.
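The "metrics report" linked above comes straight from `torch_xla`; a short sketch of how one would generate it (assumes a TPU host with `torch_xla` installed):

```python
import torch_xla.debug.metrics as met

# ... after running a few training steps on the TPU ...
# The report lists compile counts/timings; "aten::" counters flag ops that
# were not lowered to XLA and fell back to the CPU (context switches).
print(met.metrics_report())
```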

Diff for: docs/source-pytorch/accelerators/tpu_faq.rst

+1 −1

@@ -78,7 +78,7 @@ A lot of PyTorch operations aren't lowered to XLA, which could lead to significa
 These operations are moved to the CPU memory and evaluated, and then the results are transferred back to the XLA device(s).
 By using the `xla_debug` Strategy, users could create a metrics report to diagnose issues.

-The report includes things like (`XLA Reference <https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#troubleshooting>`_):
+The report includes things like (`XLA Reference <https://github.com/pytorch/xla/blob/v2.5.0/TROUBLESHOOTING.md#troubleshooting>`_):

 * how many times we issue XLA compilations and time spent on issuing.
 * how many times we execute and time spent on execution
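As a usage sketch for the `xla_debug` strategy the FAQ mentions (hedged; `model` is a placeholder for any LightningModule):

```python
from lightning.pytorch import Trainer

# "xla_debug" selects the XLA strategy with debugging enabled, so the
# metrics report described above can be produced for diagnosis.
trainer = Trainer(accelerator="tpu", strategy="xla_debug")
trainer.fit(model)  # `model`: placeholder LightningModule
```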

Diff for: docs/source-pytorch/upgrade/sections/2_0_regular.rst

+1 −1

@@ -6,7 +6,7 @@
   - Then
   - Ref

-  * - used PyTorch 3.11
+  * - used PyTorch 1.11
     - upgrade to PyTorch 2.1 or higher
     - `PR18691`_

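The fix above corrects an obvious typo: the upgrade table concerns PyTorch 1.11, not a nonexistent PyTorch 3.11 (likely a slip toward Python 3.11). A hedged one-liner to check whether an environment clears the recommended floor:

```python
import torch
from packaging.version import Version

# Per the upgrade table: code written against PyTorch 1.11 should move to >= 2.1
assert Version(torch.__version__) >= Version("2.1"), "upgrade to PyTorch 2.1 or higher"
```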

Diff for: src/lightning/fabric/strategies/deepspeed.py

+1 −1

@@ -598,7 +598,7 @@ def _initialize_engine(
 ) -> Tuple["DeepSpeedEngine", Optimizer]:
     """Initialize one model and one optimizer with an optional learning rate scheduler.

-    This calls :func:`deepspeed.initialize` internally.
+    This calls ``deepspeed.initialize`` internally.

     """
     import deepspeed
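Both DeepSpeed strategy files in this commit make the same docstring change: ``deepspeed.initialize`` is rendered as plain code rather than a Sphinx ``:func:`` cross-reference, since it is not part of Lightning's own API docs. For orientation, a hedged sketch of the call the docstring names; `model`, `optimizer`, and `ds_config` are placeholders, not Lightning internals:

```python
import deepspeed

# deepspeed.initialize wraps the model in a DeepSpeedEngine and returns a
# 4-tuple (engine, optimizer, training_dataloader, lr_scheduler); slots that
# were not requested come back as None.
engine, optimizer, _, lr_scheduler = deepspeed.initialize(
    model=model,          # placeholder torch.nn.Module
    optimizer=optimizer,  # placeholder torch.optim.Optimizer
    config=ds_config,     # placeholder: dict or path to a DeepSpeed JSON config
)
```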

Diff for: src/lightning/fabric/strategies/xla_fsdp.py

+1 −1

@@ -56,7 +56,7 @@ class XLAFSDPStrategy(ParallelStrategy, _Sharded):

     .. warning:: This is an :ref:`experimental <versioning:Experimental API>` feature.

-    For more information check out https://github.com/pytorch/xla/blob/master/docs/fsdp.md
+    For more information check out https://github.com/pytorch/xla/blob/v2.5.0/docs/fsdp.md

     Args:
         auto_wrap_policy: Same as ``auto_wrap_policy`` parameter in
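A hedged usage sketch for the strategy whose docstring link is re-pinned here (requires a TPU environment; defaults shown, since `auto_wrap_policy` follows `torch_xla`'s `XlaFullyShardedDataParallel`):

```python
from lightning.fabric import Fabric
from lightning.fabric.strategies import XLAFSDPStrategy

# Shard parameters across TPU devices via torch_xla's FSDP implementation.
fabric = Fabric(accelerator="tpu", strategy=XLAFSDPStrategy())
fabric.launch()
```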

Diff for: src/lightning/pytorch/strategies/deepspeed.py

+1 −1

@@ -414,7 +414,7 @@ def _setup_model_and_optimizer(
 ) -> Tuple["deepspeed.DeepSpeedEngine", Optimizer]:
     """Initialize one model and one optimizer with an optional learning rate scheduler.

-    This calls :func:`deepspeed.initialize` internally.
+    This calls ``deepspeed.initialize`` internally.

     """
     import deepspeed

Diff for: src/version.info

+1 −1

@@ -1 +1 @@
-2.4.0
+2.5.0.dev
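With this bump, builds from source identify as a 2.5.0 development release; a quick hedged check:

```python
import lightning

# Source checkouts after this commit report the value from src/version.info
print(lightning.__version__)  # expected to look like "2.5.0.dev"
```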
