Commit b7074d2 ("Github action: auto-update.")
Parent: 3e605a3

53 files changed: +1378, -275 lines
11 binary files changed (contents not shown); reported size changes: 69 Bytes, 1.89 KB, -382 Bytes, -1.02 KB.

dev/_modules/index.html (+2, -1)

@@ -123,7 +123,7 @@ <h1>All modules for which code is available</h1>
 <li><a href="neuralop/layers/embeddings.html">neuralop.layers.embeddings</a></li>
 <li><a href="neuralop/layers/gno_block.html">neuralop.layers.gno_block</a></li>
 <li><a href="neuralop/layers/integral_transform.html">neuralop.layers.integral_transform</a></li>
-<li><a href="neuralop/layers/local_fno_block.html">neuralop.layers.local_fno_block</a></li>
+<li><a href="neuralop/layers/local_no_block.html">neuralop.layers.local_no_block</a></li>
 <li><a href="neuralop/layers/neighbor_search.html">neuralop.layers.neighbor_search</a></li>
 <li><a href="neuralop/layers/padding.html">neuralop.layers.padding</a></li>
 <li><a href="neuralop/layers/skip_connections.html">neuralop.layers.skip_connections</a></li>
@@ -133,6 +133,7 @@ <h1>All modules for which code is available</h1>
 <li><a href="neuralop/models/base_model.html">neuralop.models.base_model</a></li>
 <li><a href="neuralop/models/fno.html">neuralop.models.fno</a></li>
 <li><a href="neuralop/models/gino.html">neuralop.models.gino</a></li>
+<li><a href="neuralop/models/local_no.html">neuralop.models.local_no</a></li>
 <li><a href="neuralop/models/uno.html">neuralop.models.uno</a></li>
 <li><a href="neuralop/training/incremental.html">neuralop.training.incremental</a></li>
 <li><a href="neuralop/training/trainer.html">neuralop.training.trainer</a></li>

dev/_modules/neuralop/layers/local_fno_block.html → dev/_modules/neuralop/layers/local_no_block.html (renamed; +35, -35; large diff not rendered by default)

dev/_modules/neuralop/models/local_no.html (new file; +594; large diff not rendered by default)
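The rename above means the old module path may disappear in newer builds of neuralop. As a minimal sketch of how downstream code could cope (the two module paths come straight from this commit's module index diff; the helper function itself is hypothetical and not part of neuralop), one can probe for whichever path is importable without hard-coding either:

```python
import importlib.util

# Module paths taken from this commit's diff; which one exists depends on the
# installed neuralop version (assumption: at most one of the two is present).
OLD_PATH = "neuralop.layers.local_fno_block"  # pre-rename path
NEW_PATH = "neuralop.layers.local_no_block"   # post-rename path

def resolve_local_no_block():
    """Return the first importable of the two paths, or None if neuralop is absent."""
    for name in (NEW_PATH, OLD_PATH):
        try:
            if importlib.util.find_spec(name) is not None:
                return name
        except ModuleNotFoundError:
            # find_spec raises this when the parent package (neuralop) is not installed.
            pass
    return None

print(resolve_local_no_block())
```

The probe avoids actually importing the module, so it is cheap and side-effect free even when neuralop is installed.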

dev/_sources/auto_examples/plot_DISCO_convolutions.rst.txt (+2, -2)

@@ -357,7 +357,7 @@ in order to compute the convolved image, we need to first bring it into the righ
 .. code-block:: none

-    <matplotlib.colorbar.Colorbar object at 0x7f4d01c0f110>
+    <matplotlib.colorbar.Colorbar object at 0x7f48a2bb3390>

@@ -448,7 +448,7 @@ in order to compute the convolved image, we need to first bring it into the righ
 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 30.901 seconds)
+**Total running time of the script:** (0 minutes 30.716 seconds)

 .. _sphx_glr_download_auto_examples_plot_DISCO_convolutions.py:

dev/_sources/auto_examples/plot_FNO_darcy.rst.txt (+9, -9)

@@ -248,13 +248,13 @@ Training the model
 )

 ### SCHEDULER ###
-<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f4d14dd25d0>
+<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f48b5ded1d0>

 ### LOSSES ###

-* Train: <neuralop.losses.data_losses.H1Loss object at 0x7f4d14dd34d0>
+* Train: <neuralop.losses.data_losses.H1Loss object at 0x7f48b5dedbd0>
-* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f4d14dd34d0>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f4d14dd0cd0>}
+* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f48b5dedbd0>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f48b5dece10>}

@@ -311,22 +311,22 @@ Then train the model on our small Darcy-Flow dataset:
 Training on 1000 samples
 Testing on [50, 50] samples on resolutions [16, 32].
 Raw outputs of shape torch.Size([32, 1, 16, 16])
-[0] time=2.51, avg_loss=0.6956, train_err=21.7383
+[0] time=2.48, avg_loss=0.6956, train_err=21.7383
 Eval: 16_h1=0.4298, 16_l2=0.3487, 32_h1=0.5847, 32_l2=0.3542
-[3] time=2.46, avg_loss=0.2103, train_err=6.5705
+[3] time=2.47, avg_loss=0.2103, train_err=6.5705
 Eval: 16_h1=0.2030, 16_l2=0.1384, 32_h1=0.5075, 32_l2=0.1774
-[6] time=2.46, avg_loss=0.1911, train_err=5.9721
+[6] time=2.47, avg_loss=0.1911, train_err=5.9721
 Eval: 16_h1=0.2099, 16_l2=0.1374, 32_h1=0.4907, 32_l2=0.1783
 [9] time=2.47, avg_loss=0.1410, train_err=4.4073
 Eval: 16_h1=0.2052, 16_l2=0.1201, 32_h1=0.5268, 32_l2=0.1615
 [12] time=2.47, avg_loss=0.1422, train_err=4.4434
 Eval: 16_h1=0.2131, 16_l2=0.1285, 32_h1=0.5413, 32_l2=0.1741
-[15] time=2.46, avg_loss=0.1198, train_err=3.7424
+[15] time=2.47, avg_loss=0.1198, train_err=3.7424
 Eval: 16_h1=0.1984, 16_l2=0.1137, 32_h1=0.5255, 32_l2=0.1569
 [18] time=2.47, avg_loss=0.1104, train_err=3.4502
 Eval: 16_h1=0.2039, 16_l2=0.1195, 32_h1=0.5062, 32_l2=0.1603

-{'train_err': 2.9605126455426216, 'avg_loss': 0.0947364046573639, 'avg_lasso_loss': None, 'epoch_train_time': 2.4833866669999907}
+{'train_err': 2.9605126455426216, 'avg_loss': 0.0947364046573639, 'avg_lasso_loss': None, 'epoch_train_time': 2.4658696660000032}

@@ -476,7 +476,7 @@ are other ways to scale the outputs of the FNO to train a true super-resolution
 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 50.700 seconds)
+**Total running time of the script:** (0 minutes 50.696 seconds)

 .. _sphx_glr_download_auto_examples_plot_FNO_darcy.py:

dev/_sources/auto_examples/plot_SFNO_swe.rst.txt (+19, -19)

@@ -234,13 +234,13 @@ Creating the losses
 )

 ### SCHEDULER ###
-<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f4d14dad940>
+<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f48b5dd5940>

 ### LOSSES ###

-* Train: <neuralop.losses.data_losses.LpLoss object at 0x7f4d14daf380>
+* Train: <neuralop.losses.data_losses.LpLoss object at 0x7f48b5dd7380>
-* Test: {'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f4d14daf380>}
+* Test: {'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f48b5dd7380>}

@@ -297,22 +297,22 @@ Train the model on the spherical SWE dataset
 Training on 200 samples
 Testing on [50, 50] samples on resolutions [(32, 64), (64, 128)].
 Raw outputs of shape torch.Size([4, 3, 32, 64])
-[0] time=3.52, avg_loss=2.5525, train_err=10.2100
-Eval: (32, 64)_l2=1.6327, (64, 128)_l2=2.5739
-[3] time=3.50, avg_loss=0.3994, train_err=1.5974
-Eval: (32, 64)_l2=0.6303, (64, 128)_l2=2.5415
-[6] time=3.41, avg_loss=0.2746, train_err=1.0982
-Eval: (32, 64)_l2=0.4479, (64, 128)_l2=2.4590
-[9] time=3.41, avg_loss=0.2362, train_err=0.9447
-Eval: (32, 64)_l2=0.3426, (64, 128)_l2=2.4463
-[12] time=3.41, avg_loss=0.2061, train_err=0.8244
-Eval: (32, 64)_l2=0.3944, (64, 128)_l2=2.4336
-[15] time=3.40, avg_loss=0.1675, train_err=0.6701
-Eval: (32, 64)_l2=0.3104, (64, 128)_l2=2.4362
-[18] time=3.44, avg_loss=0.1469, train_err=0.5876
-Eval: (32, 64)_l2=0.2370, (64, 128)_l2=2.4253
+[0] time=3.51, avg_loss=2.6651, train_err=10.6606
+Eval: (32, 64)_l2=2.0387, (64, 128)_l2=2.3833
+[3] time=3.50, avg_loss=0.3967, train_err=1.5867
+Eval: (32, 64)_l2=0.4690, (64, 128)_l2=2.6671
+[6] time=3.45, avg_loss=0.2665, train_err=1.0661
+Eval: (32, 64)_l2=0.4195, (64, 128)_l2=2.5312
+[9] time=3.46, avg_loss=0.2292, train_err=0.9166
+Eval: (32, 64)_l2=0.3467, (64, 128)_l2=2.5203
+[12] time=3.43, avg_loss=0.1856, train_err=0.7426
+Eval: (32, 64)_l2=0.3028, (64, 128)_l2=2.4369
+[15] time=3.48, avg_loss=0.1537, train_err=0.6149
+Eval: (32, 64)_l2=0.2960, (64, 128)_l2=2.4150
+[18] time=3.46, avg_loss=0.1403, train_err=0.5613
+Eval: (32, 64)_l2=0.2902, (64, 128)_l2=2.4013

-{'train_err': 0.5793274295330048, 'avg_loss': 0.1448318573832512, 'avg_lasso_loss': None, 'epoch_train_time': 3.4297402870000155}
+{'train_err': 0.5318634355068207, 'avg_loss': 0.13296585887670517, 'avg_lasso_loss': None, 'epoch_train_time': 3.4797961209999926}

@@ -383,7 +383,7 @@ In practice we would train a Neural Operator on one or multiple GPUs
 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** (1 minutes 23.885 seconds)
+**Total running time of the script:** (1 minutes 24.697 seconds)

 .. _sphx_glr_download_auto_examples_plot_SFNO_swe.py:

dev/_sources/auto_examples/plot_UNO_darcy.rst.txt (+19, -19)

@@ -345,13 +345,13 @@ Creating the losses
 )

 ### SCHEDULER ###
-<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f4d0fd5c7d0>
+<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f48b4d44a50>

 ### LOSSES ###

-* Train: <neuralop.losses.data_losses.H1Loss object at 0x7f4d14e7f380>
+* Train: <neuralop.losses.data_losses.H1Loss object at 0x7f48b5ea7380>
-* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f4d14e7f380>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f4d0fd5ce10>}
+* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f48b5ea7380>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f48b4d447d0>}

@@ -410,22 +410,22 @@ Actually train the model on our small Darcy-Flow dataset
 Training on 1000 samples
 Testing on [50, 50] samples on resolutions [16, 32].
 Raw outputs of shape torch.Size([32, 1, 16, 16])
-[0] time=10.01, avg_loss=0.6591, train_err=20.5974
-Eval: 16_h1=0.4321, 16_l2=0.3189, 32_h1=0.9655, 32_l2=0.5314
-[3] time=9.93, avg_loss=0.2503, train_err=7.8226
-Eval: 16_h1=0.2496, 16_l2=0.1610, 32_h1=0.8357, 32_l2=0.5342
-[6] time=9.97, avg_loss=0.2127, train_err=6.6463
-Eval: 16_h1=0.2489, 16_l2=0.1675, 32_h1=0.7890, 32_l2=0.5255
-[9] time=9.99, avg_loss=0.1983, train_err=6.1968
-Eval: 16_h1=0.2541, 16_l2=0.1658, 32_h1=0.7831, 32_l2=0.5347
-[12] time=9.98, avg_loss=0.1890, train_err=5.9059
-Eval: 16_h1=0.2446, 16_l2=0.1492, 32_h1=0.7882, 32_l2=0.4716
-[15] time=9.97, avg_loss=0.1725, train_err=5.3913
-Eval: 16_h1=0.3038, 16_l2=0.2021, 32_h1=0.7797, 32_l2=0.5113
-[18] time=10.03, avg_loss=0.1466, train_err=4.5810
-Eval: 16_h1=0.2422, 16_l2=0.1467, 32_h1=0.7684, 32_l2=0.4630
+[0] time=10.02, avg_loss=0.6185, train_err=19.3296
+Eval: 16_h1=0.5203, 16_l2=0.3428, 32_h1=0.9230, 32_l2=0.6163
+[3] time=9.96, avg_loss=0.2452, train_err=7.6623
+Eval: 16_h1=0.2633, 16_l2=0.1671, 32_h1=0.8157, 32_l2=0.5749
+[6] time=9.96, avg_loss=0.2192, train_err=6.8498
+Eval: 16_h1=0.3232, 16_l2=0.2005, 32_h1=0.8231, 32_l2=0.5463
+[9] time=10.01, avg_loss=0.2100, train_err=6.5640
+Eval: 16_h1=0.2541, 16_l2=0.1552, 32_h1=0.7786, 32_l2=0.5305
+[12] time=10.02, avg_loss=0.2080, train_err=6.5005
+Eval: 16_h1=0.2947, 16_l2=0.1989, 32_h1=0.7486, 32_l2=0.4690
+[15] time=10.02, avg_loss=0.1817, train_err=5.6768
+Eval: 16_h1=0.2574, 16_l2=0.1490, 32_h1=0.7561, 32_l2=0.4931
+[18] time=9.97, avg_loss=0.1338, train_err=4.1827
+Eval: 16_h1=0.2373, 16_l2=0.1454, 32_h1=0.7275, 32_l2=0.4638

-{'train_err': 4.47576854750514, 'avg_loss': 0.14322459352016448, 'avg_lasso_loss': None, 'epoch_train_time': 9.971655856999973}
+{'train_err': 4.687410641461611, 'avg_loss': 0.14999714052677154, 'avg_lasso_loss': None, 'epoch_train_time': 9.965890896000019}

@@ -499,7 +499,7 @@ In practice we would train a Neural Operator on one or multiple GPUs
 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** (3 minutes 22.836 seconds)
+**Total running time of the script:** (3 minutes 22.943 seconds)

 .. _sphx_glr_download_auto_examples_plot_UNO_darcy.py:

dev/_sources/auto_examples/plot_count_flops.rst.txt (+2, -2)

@@ -80,7 +80,7 @@ This output is organized as a defaultdict object that counts the FLOPS used in e
 .. code-block:: none

-defaultdict(<function FlopTensorDispatchMode.__init__.<locals>.<lambda> at 0x7f4d14d68680>, {'': defaultdict(<class 'int'>, {'convolution.default': 2982150144, 'bmm.default': 138412032}), 'lifting': defaultdict(<class 'int'>, {'convolution.default': 562036736}), 'lifting.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 25165824}), 'lifting.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 536870912}), 'fno_blocks': defaultdict(<class 'int'>, {'convolution.default': 2147483648, 'bmm.default': 138412032}), 'fno_blocks.fno_skips.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.0.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.0': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.0.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.0.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.1.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.1': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.1.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.1.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.2.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.2': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.2.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.2.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.3.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.3': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.3.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.3.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'projection': defaultdict(<class 'int'>, {'convolution.default': 272629760}), 'projection.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'projection.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 4194304})})
+defaultdict(<function FlopTensorDispatchMode.__init__.<locals>.<lambda> at 0x7f48b5d8c7c0>, {'': defaultdict(<class 'int'>, {'convolution.default': 2982150144, 'bmm.default': 138412032}), 'lifting': defaultdict(<class 'int'>, {'convolution.default': 562036736}), 'lifting.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 25165824}), 'lifting.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 536870912}), 'fno_blocks': defaultdict(<class 'int'>, {'convolution.default': 2147483648, 'bmm.default': 138412032}), 'fno_blocks.fno_skips.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.0.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.0': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.0.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.0.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.1.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.1': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.1.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.1.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.2.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.2': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.2.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.2.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.3.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.3': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.3.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.3.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'projection': defaultdict(<class 'int'>, {'convolution.default': 272629760}), 'projection.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'projection.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 4194304})})

@@ -125,7 +125,7 @@ To check the maximum FLOPS used during the forward pass, let's create a recursiv
 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 3.129 seconds)
+**Total running time of the script:** (0 minutes 3.106 seconds)

 .. _sphx_glr_download_auto_examples_plot_count_flops.py:

dev/_sources/auto_examples/plot_darcy_flow.rst.txt (+1, -1)

@@ -163,7 +163,7 @@ Visualizing the data
 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 0.314 seconds)
+**Total running time of the script:** (0 minutes 0.310 seconds)

 .. _sphx_glr_download_auto_examples_plot_darcy_flow.py:

dev/_sources/auto_examples/plot_darcy_flow_spectrum.rst.txt (+1, -1)

@@ -219,7 +219,7 @@ Loading the Navier-Stokes dataset in 128x128 resolution
 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 0.166 seconds)
+**Total running time of the script:** (0 minutes 0.185 seconds)

 .. _sphx_glr_download_auto_examples_plot_darcy_flow_spectrum.py:

0 commit comments