Releases: ArweaveTeam/arweave
Release 2.9.5-alpha3
This is an alpha update and may not be ready for production use. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.
This release includes several bug fixes. It passes all automated tests and has undergone a base level of internal testing, but is not considered production ready. We only recommend upgrading if you believe one of the listed bug fixes will improve your mining experience.
Support for repack-in-place from the replica.2.9 format
This release introduces support for repack-in-place from replica.2.9 to unpacked or to a different replica.2.9 address. In addition, we've made several performance improvements and fixed a number of edge case bugs which may previously have caused some chunks to be skipped by the repack process.
Performance
Due to how replica.2.9 chunks are processed, the parameters for tuning repack-in-place performance have changed. There are several main considerations:
- Repack footprint size: replica.2.9 chunks are grouped into footprints. A full footprint is 1024 chunks distributed evenly across a partition.
- Repack batch size: The repack-in-place process reads some number of chunks, repacks them, and then writes them back to disk. The batch size controls how many contiguous chunks are read at once. Previously a batch size of 10 meant that 10 chunks would be read, repacked, and written. However, in order to handle replica.2.9 data efficiently, the batch size now indicates the number of footprints to process at once. So a batch size of 10 means that 10 footprints will be read, repacked, and written. Since a full footprint is 1024 chunks, the amount of memory required to process a batch size of 10 is now 10,240 chunks, or roughly 2.5 GiB.
- Available RAM: The footprint size and batch size drive how much RAM is required by the repack-in-place process. If you're repacking multiple partitions at once, the RAM requirements can grow quickly.
- Disk IO: If you determine that disk IO is your bottleneck, you'd want to increase the batch size as much as you can, as reading contiguous chunks is generally much faster than reading non-contiguous chunks.
- CPU: However, in some cases you may find that CPU is your bottleneck - this can happen when repacking from a legacy format like spora_2_6, or when repacking many partitions between two replica.2.9 addresses. The saving grace here is that if CPU is your bottleneck, you can reduce your batch size or footprint size to ease off on your memory utilization.
To control all these factors, repack-in-place has 2 config options:
- repack_batch_size: controls the batch size, i.e. the number of footprints processed at once.
- repack_cache_size_mb: sets the total amount of memory to allocate to the repack-in-place process per partition. So if you set repack_cache_size_mb to 2000 and are repacking 4 partitions, you can expect the repack-in-place process to consume roughly 8 GiB of memory.
Note: the node will automatically set the footprint size based on your configured batch and cache sizes - this typically means that it will reduce the footprint size as much as needed. A smaller footprint size will increase your CPU load as it will result in your node generating the same entropy multiple times. For example, if your footprint size is 256, the node will need to generate the same entropy 4 times in order to process all 1024 chunks in the full footprint.
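For illustration, here is a minimal repack-in-place launch sketch using these options. The ./bin/start entry point and the storage_module line are placeholders for your own launch command; only repack_batch_size and repack_cache_size_mb are the options described above:

./bin/start repack_batch_size 10 repack_cache_size_mb 2000 storage_module 5,En2eqsVJARnTVOSh723PBXAKGmKgrGSjQ2YIGwE_ZRI.replica.2.9 ...

With repack_cache_size_mb 2000, each partition being repacked is allotted roughly 2 GiB of repack cache, so repacking 4 partitions at once would consume roughly 8 GiB.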
Debugging
This release also includes a new option on the data-doctor inspect tool that may help with debugging packing issues.
/bin/data-doctor inspect bitmap <data_dir> <storage_module>
Example: /bin/data-doctor inspect bitmap /opt/data 36,En2eqsVJARnTVOSh723PBXAKGmKgrGSjQ2YIGwE_ZRI.replica.2.9
This will generate a bitmap where every pixel represents the packing state of a specific chunk. The bitmap is laid out so that each vertical column of pixels is a complete entropy footprint. Here is an example bitmap:
This bitmap shows the state of one node's partition 5 that has been repacked to replica.2.9. The green pixels are chunks that are in the expected replica.2.9 format, the black pixels are chunks that are missing from the miner's dataset, and the pink pixels are chunks that are too small to be packed (prior to partition ~9, users were allowed to pay for chunks that were smaller than 256 KiB - these chunks are stored unpacked and can't be packed).
New Prometheus metrics
- ar_mempool_add_tx_duration_milliseconds: The duration in milliseconds it took to add a transaction to the mempool.
- reverify_mempool_chunk_duration_milliseconds: The duration in milliseconds it took to reverify a chunk of transactions in the mempool.
- drop_txs_duration_milliseconds: The duration in milliseconds it took to drop a chunk of transactions from the mempool.
- del_from_propagation_queue_duration_milliseconds: The duration in milliseconds it took to remove a transaction from the propagation queue after it was emitted to peers.
- chunk_storage_sync_record_check_duration_milliseconds: The time in milliseconds it took to check that the fetched chunk range is actually registered by the chunk storage.
- fixed_broken_chunk_storage_records: The number of fixed broken chunk storage records detected when reading a range of chunks.
- mining_solution: Replaces the mining_solution_failure and mining_solution_total metrics with a single metric, using labels to differentiate the mining solution state.
- chunks_read: The counter is incremented every time a chunk is read from chunk_storage.
- chunk_read_rate_bytes_per_second: The rate, in bytes per second, at which chunks are read from storage. The 'type' label can be 'raw' or 'repack'.
- chunk_write_rate_bytes_per_second: The rate, in bytes per second, at which chunks are written to storage.
- repack_chunk_states: The count of chunks in each state. The 'type' label can be 'cache' or 'queue'.
- replica_2_9_entropy_generated: The number of bytes of replica.2.9 entropy generated.
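To spot-check the new metrics on a running node, a minimal sketch assuming the node's HTTP API is listening on the default port 1984 (adjust for your configuration):

curl -s http://localhost:1984/metrics | grep chunk_read_rate_bytes_per_second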
Bug fixes and improvements
- Several updates to the mining cache logic. These changes address a number of edge case performance and memory bloat issues that can occur while mining.
- Guidance on setting the mining_cache_size_mb config: for now you can set it to 100x the number of partitions you are mining against. So if you are mining against 64 partitions on your node you would set it to 6400.
- Improve the transaction validation performance; this should reduce the frequency of "desyncs", i.e. nodes should now be able to handle a higher network transaction volume without stalling
- Do not delay ready_for_mining on validator nodes
- Make sure identical tx-status pairs do not cause extra mempool updates
- Cache the owner address once computed for every TX
- Reduce the time it takes for a node to join the network:
- Do not re-download local blocks on join
- Do not re-write written txs on join
- Reduce the per-peer retry budget on join from 10 to 5
- Fix edge case that could occasionally cause a mining pool to reject a replica.2.9 solution.
- Fix edge case crash that occurred when a coordinated miner timed out while fetching partitions from peers
- Fix bug where storage module crossing weave end may cause syncing stall
- Fix bug where crash during peer interval collection may cause syncing stall
- Fix bug where we may miss VDF sessions when setting disable vdf_server_pull
- Fix race condition where we may not detect double-signing
- Optionally fix broken chunk storage records on the fly. Set enable fix_broken_chunk_storage_record to turn the feature on.
Full Changelog: N.2.9.5-alpha2...N.2.9.5-alpha3
Community involvement
A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!
Discord users (alphabetical order):
- BerryCZ
- bigbang
- BloodHunter
- Butcher_
- core_1_
- doesn't stay up late
- dzeto
- edzo
- Evalcast
- EvM
- grumpy.003
- Iba Shinu
- JamsJun
- jimmyjoe7768
- lawso2517
- MaSTeRMinD
- metagravity
- Qwinn
- radion_nizametdinov
- RedMOoN
- sam
- smash
- sumimi
- tashilo
- Vidiot
- wybiacx
Release 2.9.4.1
This release fixes a bug in the mining logic that would cause replica.2.9 hashrate to drop to zero at block height 1642850. We strongly recommend all miners upgrade to this release as soon as possible - block height 1642850 is estimated to arrive at roughly April 4 at 11:30 UTC.
If you are not mining, you do not need to upgrade to this release.
This release is incremental on the 2.9.4 release and does not include any changes from the 2.9.5-alpha1 release.
Full Changelog: N.2.9.4...N.2.9.4.1
Release 2.9.5-alpha2
This is an alpha update and may not be ready for production use. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.
This release includes several bug fixes. It passes all automated tests and has undergone a base level of internal testing, but is not considered production ready. We only recommend upgrading if you believe one of the listed bug fixes will improve your mining experience.
Changes
- Apply the 2.9.4.1 patch to the 2.9.5 branch. More info on discord
- Optimization to speed up the collection of peer intervals when syncing. This can improve syncing performance in some situations. Code changes.
- Fix a bug which could cause syncing to occasionally stall out. Code changes
- Bug fixes to address chunk_not_found and sub_chunk_mismatch errors. Code changes
- Add support for DNS pools (multiple IPs behind a single DNS address). Code changes
- Publish some more protocol values as metrics. Code changes
- Optimize the shutdown process. This should help with, but not fully address, the slow node shutdown issues. Code changes
- Add webhooks for the entire mining solution lifecycle. New solution webhook added with multiple states: solution_rejected, solution_stale, solution_partial, solution_orphaned, solution_accepted, and solution_confirmed. Code changes
- Add metrics to allow tracking mining solutions: mining_solution_failure, mining_solution_success, mining_solution_total. Code changes
- Fix a bug where a VDF client might get pinned to a slow or stalled VDF server. Code changes
Community involvement
A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!
Discord users (alphabetical order):
- BerryCZ
- bigbang
- BloodHunter
- Butcher_
- dlmx
- doesn't stay up late
- edzo
- Iba Shinu
- JF
- lawso2517
- MaSTeRMinD
- MCB
- qq87237850
- Qwinn
- RedMOoN
- smash
- sumimi
- T777
- Thaseus
- Vidiot
- Wednesday
Release 2.9.5-alpha1
This is an alpha update and may not be ready for production use. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.
This release includes several bug fixes. It passes all automated tests and has undergone a base level of internal testing, but is not considered production ready. We only recommend upgrading if you believe one of the listed bug fixes will improve your mining experience.
verify Tool Improvements
This release contains several improvements to the verify tool. Several miners have reported block failures due to invalid or missing chunks. The hope is that the verify tool improvements in this release will either allow those errors to be healed, or provide more information about the issue.
New verify modes
The verify tool can now be launched in log or purge mode. In log mode the tool will log errors but will not flag the chunks for healing. In purge mode all bad chunks will be marked as invalid and flagged to be resynced and repacked.
To launch in log mode specify the verify log flag. To launch in purge mode specify the verify purge flag. Note: verify true is no longer valid and will print an error on launch.
Chunk sampling
The verify tool will now sample 1,000 chunks and do a full unpack and validation of each sampled chunk. This sampling mode is intended to give a statistical measure of how much data might be corrupt. To change the number of chunks sampled you can use the verify_samples option. E.g. verify_samples 500 will have the node sample 500 chunks.
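For illustration, a minimal sketch combining these options, assuming a ./bin/start entry point with your usual data_dir and storage_module options in place of the ellipsis:

./bin/start verify log verify_samples 500 ...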
More invalid scenarios tested
This latest version of the verify tool detects several new types of bad data. The first time you run the verify tool we recommend launching it in log mode and running it on a single partition. This should avoid any surprises due to the more aggressive detection logic. If the results are as you expect, then you can relaunch in purge mode to clean up any bad data. In particular, if you've misnamed your storage_module the verify tool will invalidate all chunks and force a full repack - running in log mode first will allow you to catch this error and rename your storage_module before purging all data.
Bug Fixes
- Fix several issues which could cause a node to "desync". Desyncing occurs when a node gets stuck at one block height and stops advancing.
- Reduce the volume of unnecessary network traffic due to a flood of 404 requests when trying to sync chunks from a node which only serves replica.2.9 data. Note: the benefit of this change will only be seen when most of the nodes in the network upgrade.
- Performance improvements to HTTP handling that should improve performance more generally.
- Add TX polling so that a node will pull missing transactions in addition to receiving them via gossip
Known issues
- Multi-node configurations are affected by an issue with the new entry-point script for arweave. A complete description of the problem, a patch, and a procedure can be found here: https://gist.github.com/humaite/de7ac23ec4975518e092868d4b4312ee
Community involvement
A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!
Discord users (alphabetical order):
- AraAraTime
- BerryCZ
- bigbang
- BloodHunter
- Butcher_
- dlmx
- dzeto
- edzo
- EvM
- Fox Malder
- Iba Shinu
- JF
- jimmyjoe7768
- lawso2517
- MaSTeRMinD
- MCB
- Methistos
- Michael | Artifact
- qq87237850
- Qwinn
- RedMOoN
- smash
- sumimi
- T777
- Thaseus
- Vidiot
- Wednesday
- wybiacx
What's Changed
Full Changelog: N.2.9.4...N.2.9.5-alpha1
Release 2.9.4
This is a minor update. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.
This release includes several bug fixes. We recommend upgrading, but it's not required. All releases 2.9.1 and higher implement the consensus rule changes for the 2.9 hard fork and should be sufficient to participate in the network.
Note: this release fixes a packing bug that affects any storage module that does not start on a partition boundary. If you have previously packed replica.2.9 data in a storage module that does not start on a partition boundary, we recommend discarding the previously packed data and repacking the storage module with the 2.9.4 release. This applies only to storage modules that do not start on a partition boundary; all other storage modules are not impacted.
Example of an impacted storage module:
storage_module 3,1800000000000,addr.replica.2.9
Example of storage modules that are not impacted:
storage_module 10,addr.replica.2.9
storage_module 2,1800000000000,addr.replica.2.9
storage_module 0,3400000000000,addr.replica.2.9
(A custom-sized storage module starts at an offset equal to its index multiplied by its size. The impacted example starts at 3 x 1.8 TB = 5.4 TB, which is not a multiple of the 3.6 TB partition size, while the unaffected examples start on partition boundaries: a full partition for module 10, 2 x 1.8 TB = 3.6 TB, and 0 x 3.4 TB = 0.)
Other bug fixes and improvements:
- Fix a regression that caused GET /tx/id/data to fail
- Fix a regression that could cause a node to get stuck on a single peer while syncing (both sync_from_local_peers_only and syncing from the network)
- Limit the resources used to sync the tip data. This may address some memory issues reported by miners.
- Limit the resources used to gossip new transactions. This may address some memory issues reported by miners.
- Allow the node to heal itself after encountering a not_prepared_yet error. The error has also been downgraded to a warning.
Community involvement
A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!
Discord users (alphabetical order):
- AraAraTime
- bigbang
- BloodHunter
- Butcher_
- dlmx
- dzeto
- Iba Shinu
- JF
- jimmyjoe7768
- lawso2517
- MaSTeRMinD
- MCB
- Methistos
- qq87237850
- Qwinn
- RedMOoN
- sam
- T777
- U genius
- Vidiot
- Wednesday
What's Changed
Full Changelog: N.2.9.3...N.2.9.4
Release 2.9.3
This is a minor update. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.
This is a minor release that fixes a few bugs:
- sync and pack stalling
- ready_for_work error when sync_jobs = 0
- unnecessary entropy generated on storage modules that are smaller than 3.6TB
- remove some overly verbose error logs
What's Changed
Full Changelog: N.2.9.2...N.2.9.3
Release 2.9.2
Arweave 2.9.2
This is a minor update. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.
Bug Fixes / Improvements
- Fix a bug where the node would not sync new data to disks that were 95-99% full
- Fix a bug causing an error message like [error] ar_chunk_copy:do_ready_for_work/2:135 event: worker_not_found, module: ar_chunk_copy, call: ready_for_work, store_id: default
- Fix a bug preventing the node from launching on some old Xeon processors
- Improve the efficiency of sharing newly uploaded data between peers
- Small performance improvement when preparing entropy
- Small performance improvement when syncing from peers
- Add two more checks to the verify tool. These checks will identify some scenarios which resulted in a partition having data packed to two formats. In those cases running the verify tool should flag the incorrectly packed chunks as invalid so that they can be synced and repacked.
Community involvement
A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!
Discord users (alphabetical order):
- AraAraTime
- Butcher_
- dlmx
- dzeto
- JF
- jimmyjoe7768
- lawso2517
- MaSTeRMinD
- Methistos
- Michael | Artifact
- qq87237850
- Qwinn
- RedMOoN
- sam
- some1else
- sumimi
- T777
- U genius
- Vidiot
- Wednesday
What's Changed
Full Changelog: N.2.9.1...N.2.9.2
Release 2.9.1
Arweave 2.9.1
This Arweave node implementation proposes a hard fork that activates at height 1602350, approximately 2025-02-03 14:00 UTC. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem. Additionally, this release was audited by NCC Group.
Note: with 2.9.1, when enabling the randomx_large_pages option you will need to configure 5,000 HugePages rather than the 3,500 required for earlier releases.
Replica 2.9 Format
The primary focus of this release is to complete the implementation, validation, and testing of the Replica 2.9 Format introduced in the previous "early adopter" release: 2.9.0-early-adopter. Those release notes are still a good source of information about the Replica 2.9 Format.
With this 2.9.1 release the Replica 2.9 Format is ready for production use. New and existing miners should consider packing or repacking to the replica.2.9 format.
Note: If you have replica.2.9 data that was previously packed with the 2.9.0-early-adopter release, please delete it before running 2.9.1. There are changes in 2.9.1 which render it incompatible with previously packed replica.2.9 data. spora_2_6 and composite data is unaffected.
Benefits of the Replica 2.9 Format
Arweave 2.9’s format enables:
- Allows miners to read from their drives at a rate of 5 MiB/s (the equivalent of difficulty 10 in Arweave 2.8), without adversely affecting the security of the network. This represents a decrease of 90% from Arweave 2.8, and 97.5% from Arweave 2.7.x. This will allow miners to use the most cost efficient drives in order to participate in the network, while also lowering pressure on disk I/O during mining operations.
- A ~96.9% decrease in the compute necessary to pack Arweave data when compared to 2.8 composite.1 and SPoRA_2.6. This decrease also scales approximately linearly for higher packing difficulties. For example, for miners that would have packed with Arweave 2.8 to the difficulty necessary to reach a 5 MB/s read speed (composite.10), Arweave 2.9 will require ~99.56% less energy and time. This represents an efficiency improvement of 32x against 2.7.x and 2.8 composite.1, and ~229x for composite.10.
Packing Performance
Arweave packing consists of two phases:
- Entropy generation
- Chunk enciphering
In prior packing formats (e.g. spora_2_6 and composite) those phases were merged: for each chunk a small bit of entropy was generated and then the chunk was enciphered. Historically the entropy generation has been the bottleneck and main driver of CPU usage.
With replica.2.9 the phases are separated. Entropy is generated for many chunks, and then that entropy is read and many chunks are enciphered. The entropy generation phase is many times faster than it was for spora_2_6 and composite - in our benchmarks a single node is able to generate entropy for the full weave in ~3 days. The CPU requirements for the enciphering phase are also quite low as enciphering is now a lightweight XOR operation. The end result is that disk IO is now the main bottleneck when packing to replica.2.9.
We have updated the docs to provide guidance on how to approach repacking to replica.2.9: Syncing and Packing Guide
We are working on a follow-up release which will attempt to further optimize the disk IO phase of the packing process.
Changes from the 2.9.0-early-adopter release
- Previously there was a limitation which degraded packing performance for non-contiguous storage modules. This has been addressed. You can now pack singular and non-contiguous storage modules with no impact on packing performance.
- All modes of packing to replica.2.9 are supported, i.e. "sync and pack", "cross-module repack", and "repack-in-place" are all supported. However, you are not yet able to repack from replica.2.9 to any other format.
- Overall packing performance has improved. Further work is needed to streamline the disk IO during the packing process.
- The packing_rate flag is now deprecated and will have no impact. It's been replaced by the packing_workers flag which allows you to set how many concurrent worker threads are used while packing. The default is the number of logical cores in the system.
- The replica_2_9_workers flag controls how many storage modules the node will generate entropy for at once. Only one storage module per physical device will have entropy generated at a time. The default is 8, but the optimal value will vary from system to system.
- We've updated the Metrics Guide with a new Syncing and Packing Grafana dashboard to better visualize the replica.2.9 packing process.
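For illustration, a minimal launch sketch using the new flags. The ./bin/start entry point and the mining address are placeholders; packing_workers and replica_2_9_workers are the flags described above:

./bin/start packing_workers 16 replica_2_9_workers 4 storage_module 0,<your_mining_address>.replica.2.9 ...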
Support for ECDSA Keys
This release introduces support for ECDSA signing keys. Blocks and transactions now support ECDSA signatures and can be signed with ECDSA keys. RSA keys continue to be supported and remain the default key type.
An upcoming arweave-js release will provide more guidance on using ECDSA keys with the Arweave network.
ECDSA support will activate at the 2.9 hard fork (block height 1602350
).
Composite Packing Format Deprecated
The new packing format was discovered as a result of researching an issue (not endangering data, tokens, or consensus) that affects higher difficulty packs of the 2.8 composite scheme. Given this, and the availability of the significantly improved 2.9 packing format, as of block height 1642850 (roughly 2025-04-04 14:00 UTC), data packed to any level of the composite packing format will not produce valid block solutions.
What's Changed
Full Changelog: N.2.8.3...N.2.9.1
Release 2.9.0-early-adopter
Arweave 2.9.0-Early-Adopter Release Notes
This Arweave node implementation proposes a hard fork that activates at height 1602350, approximately 2025-02-03 14:00 UTC. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem. Additionally, this release was audited by NCC Group.
This 2.9.0 release is an early adopter release. If you do not plan to benchmark and test the new data format, you do not need to upgrade for the 2.9 hard fork yet.
Note: with 2.9.0, when enabling the randomx_large_pages option you will need to configure 5,000 HugePages rather than the 3,500 required for earlier releases.
Replica 2.9 Packing Format
The Arweave 2.9.0-early-adopter release introduces a new data preparation (‘packing’) format. Starting with this release you can begin to test out this new format. This format brings significant improvements to all of the core metrics of data preparation.
To understand the details, please read the full paper here: https://github.com/ArweaveTeam/arweave/blob/release/N.2.9.0-early-adopter/papers/Arweave2_9.pdf
Additionally, an audit of this mechanism was performed by NCC group and is available to read here (the comments highlighted in this audit have since been remediated): https://github.com/ArweaveTeam/arweave/blob/release/N.2.9.0-early-adopter/papers/NCC_Group_ForwardResearch_E020578_Report_2024-12-06_v1.0.pdf
Arweave 2.9’s format enables:
- Allows miners to read from their drives at a rate of 5 MiB/s (the equivalent of difficulty 10 in Arweave 2.8), without adversely affecting the security of the network. This represents a decrease of 90% from Arweave 2.8, and 97.5% from Arweave 2.7.x. This will allow miners to use the most cost efficient drives in order to participate in the network, while also lowering pressure on disk I/O during mining operations.
- A ~96.9% decrease in the compute necessary to pack Arweave data when compared to 2.8 composite.1 and SPoRA_2.6. This decrease also scales approximately linearly for higher packing difficulties. For example, for miners that would have packed with Arweave 2.8 to the difficulty necessary to reach a 5 MB/s read speed (composite.10), Arweave 2.9 will require ~99.56% less energy and time. This represents an efficiency improvement of 32x against 2.7.x and 2.8 composite.1, and ~229x for composite.10.
Replica 2.9 Benchmark Tool
If you'd like to benchmark the performance of the new Replica 2.9 packing format on your own machine you can use the new ./bin/benchmark-2.9 tool. It has 2 modes:
- Entropy generation which generates and then discards entropy. This allows you to benchmark the time it takes for your CPU to perform the work component of packing, ignoring any IO-related effects.
- To use the entropy generation benchmark run the tool without using any dir flags.
- Packing which generates entropy, packs some random data, and then writes it to disk. This provides a more complete benchmark of the time it might take your server to pack data. Note: This benchmark does not include unpacking or reading data (and associated disk seek times).
- To use the packing benchmark mode specify one or more output directories using the multi-use dir flag.
Usage: benchmark-2.9 [format replica_2_9|composite|spora_2_6] [threads N] [mib N] [dir path1 dir path2 dir path3 ...]
format: format to pack. replica_2_9, composite.1, composite.10, or spora_2_6. Default: replica_2_9.
threads: number of threads to run. Default: 1.
mib: total amount of data to pack in MiB. Default: 1024.
Will be divided evenly between threads, so the final number may be
lower than specified to ensure balanced threads.
dir: directories to pack data to. If left off, benchmark will just simulate
entropy generation without writing to disk.
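For example, a hypothetical run that packs 4096 MiB across 8 threads to two disks (the directory paths are placeholders):

./bin/benchmark-2.9 format replica_2_9 threads 8 mib 4096 dir /mnt/disk1 dir /mnt/disk2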
Repacking to Replica 2.9
As well as allowing you to run benchmarks, the 2.9.0-early-adopter release also allows you to pack data for the 2.9 format. It has not, however, been fully optimized and tuned for the new entropy distribution scheme. It is included in this build for validation purposes. In our tests, we have observed consistent >=75% reductions in computation requirements (>4x faster packing speeds), but future releases will continue to improve this towards the performance of the benchmarking tool.
To test this functionality run a node with storage modules configured to use the <address>.replica.2.9 packing format. repack_in_place is not yet supported.
Composite Packing Format Deprecated
The new packing format was discovered as a result of researching an issue (not endangering data, tokens, or consensus) that affects higher difficulty packs of the 2.8 composite scheme. Given this, and the availability of the significantly improved 2.9 packing format, as of block height 1642850 (roughly 2025-04-04 14:00 UTC), data packed to any level of the composite packing format will not produce valid block solutions.
Note: This is an "Early Adopter" release. It implements significant new protocol improvements, but is still in validation. This release is intended for members of the community to try out and benchmark the new data preparation mechanism. Unless you are interested in testing these features, you will not need to update your node for 2.9 until shortly before the hard fork height at 1602350 (approximately Feb 3, 2025). As this release is intended for validation purposes, please be aware that there is a possibility that data encoded using its new preparation scheme may need to be repacked before 2.9 activates. The first ‘mainline’ releases for Arweave 2.9 will follow in the coming weeks after community validation has been completed.
Full Changelog: N.2.8.3...N.2.9.0-early-adopter
Release 2.8.3
This is a minor update. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.
Bug fixes
- Fix a performance issue which could cause very low read rates when multiple storage modules were stored on a single disk. The bug had a significant impact on SATA read speeds and hash rates, and noticeable, but smaller, impact on SAS disks.
- Fix a bug which caused the Mining Performance Report to report incorrectly for some miners. Notably: 0s in the Ideal and Data Size columns.
- Fix a bug which could cause the verify tool to get stuck when encountering an invalid_iterator error
- Fix a bug which caused the verify tool to fail to launch with the error reward_history_not_found
- Fix a performance issue which could cause a node to get backed up during periods of high network transaction volume.
- Add the packing_difficulty of a storage module to the /metrics endpoint
Community involvement
A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!
Discord users (alphabetical order):
- bigbang
- BloodHunter
- Butcher_
- dzeto
- edzo
- foozoolsanjj
- heavyarms1912
- JF
- MCB
- Methistos
- Mastermind
- Qwinn
- Thaseus
- Vidiot
- a8_ar
- jimmyjoe7768
- lawso2517
- qq87237850
- smash
- sumimi
- T777
- tashilo
- thekitty
- wybiacx
What's Changed
- Add node introspection options by @shizzard in #651
- Release/performance 2.8 by @JamesPiechota, @humaite, @vird, @ldmberman in #656
Full Changelog: N.2.8.2...N.2.8.3