research proof-of-service #3

@serapath

Description

@todo

  • decide on the encoding/decoding math
  • figure out:
    • Register data => what data do we submit to the chain to verify the merkle root?
    • Submit proof => figure out the correct format of the proofs (chunk & merkle proof? see the sketch after this list)
  • evaluate the libsodium library's suitability for a proof of storage, per @okdistribute & @RangerMauve
  • decide on the exact proof-of-service mechanism
    • how do we send the encoded dataset from the encoder to the seeder?
      • => don't we just have to send the instructions for the encoding?
        • as far as I recall, those instructions can be public
        • so just put them on-chain or in an event
    • how many copies do we select to optimize matters, or do we erasure-encode instead?
    • what do we do if an error happens in any of the steps?
  • make brotli work
    • grab Nina's JS stuff, figure out what's not working, and let y'all know, so we can have our complete life cycle with the on-chain and off-chain stuff working at least on the CLI or as a glorified integration test
    • https://gitter.im/playproject-io/community?at=5e18e9fde0f13b70c967491e
    • let's try to max out quality?
      • we can probably wiggle the window size around as the "random" parameter; we don't particularly care about the actual compression ratio. I think 8 is the highest ratio that is "standard" between implementations.
  • investigate asymmetric encryption alternatives to brotli, like RSA or libsodium and the like, and benchmark them (e.g. see the local datdot-research repo)
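
For the "Submit proof" item above, a minimal sketch of how a (chunk, merkle proof) pair could be checked against a merkle root registered on-chain. This assumes SHA-256 over a plain binary merkle tree; the real tree (e.g. hypercore's) hashes nodes differently, and `verifyChunkProof` plus the proof shape are hypothetical placeholders:

```js
// Hypothetical check of a (chunk, merkle proof) submission against a
// registered merkle root. Assumes SHA-256 over a plain binary tree;
// the real tree (e.g. hypercore's) uses different node hashing.
const crypto = require('crypto')

const sha256 = (buf) => crypto.createHash('sha256').update(buf).digest()

// `proof` is a list of sibling nodes { hash, left } ordered leaf -> root
function verifyChunkProof (chunk, proof, root) {
  let node = sha256(chunk)
  for (const sibling of proof) {
    node = sibling.left
      ? sha256(Buffer.concat([sibling.hash, node]))
      : sha256(Buffer.concat([node, sibling.hash]))
  }
  return node.equals(root)
}
```

With something like this, "Register data" would only need the root on-chain, and a proof submission would be the chunk plus its sibling path.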

  • PoSer (=Proof-of-Service)
  • PDP (=Proof-of-Data-Possession)
  • PoR (=Proof-of-Retrievability)
  • PoSer = Repeated: PDP + PoR (a rough loop is sketched below)
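
Reading "PoSer = Repeated: PDP + PoR" literally, the mechanism could be a loop like this. Everything here is a placeholder (the challenge, reward, and punish functions are injected), since the exact mechanism is still an open @todo item:

```js
// Hypothetical PoSer loop: repeated PDP + PoR challenges against a seeder.
// challengePDP / challengePoR / reward / punish are injected placeholders,
// not actual datdot or substrate APIs.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms))

async function proofOfServiceLoop ({ seeder, chunkCount, challengePDP, challengePoR, reward, punish, intervalMs }) {
  for (;;) {
    // pick an unpredictable chunk so the seeder can't precompute answers
    const chunkIndex = Math.floor(Math.random() * chunkCount)
    const held = await challengePDP(seeder, chunkIndex)            // PDP: does the seeder still hold it?
    const served = held && await challengePoR(seeder, chunkIndex)  // PoR: can it actually be retrieved?
    if (held && served) reward(seeder)
    else punish(seeder)
    await sleep(intervalMs)
  }
}
```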

additional links:

  1. https://storj.io/blog/2018/11/replication-is-bad-for-decentralized-storage-part-1-erasure-codes-for-fun-and-profit/
  2. https://medium.com/@storjproject/why-proof-of-replication-is-bad-for-decentralized-storage-part-2-churn-and-burn-2d7cb8893487
  3. Novel constructions for Proof-of-Replication protocol/research#4
  4. https://protocol.ai/blog/filecoin-proof-of-replication-power-fault-tolerance-research-roadmap/

https://www.opencpu.org/posts/brotli-benchmarks/

the times in the link I shared were in ms, I believe: so something like 35 s to compress 18 kb × 1000 (= 18 mb), and ~200 ms to decompress that file 1000×
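
To sanity-check numbers like that locally, Node's built-in zlib brotli bindings are enough; quality (0-11) and lgwin (the window size, 10-24) are the two knobs mentioned in the @todo above. A rough sketch (the all-'a' sample buffer is just a stand-in, real timings depend on the data):

```js
// Quick local sanity check for brotli timings using Node's built-in
// zlib bindings. Quality (0-11) and lgwin (10-24, the window size)
// are the two knobs discussed above.
const zlib = require('zlib')

const input = Buffer.alloc(18 * 1024, 'a') // ~18 kb of (very compressible) sample data
const opts = {
  params: {
    [zlib.constants.BROTLI_PARAM_QUALITY]: 11,
    [zlib.constants.BROTLI_PARAM_LGWIN]: 22
  }
}

console.time('compress x1000')
let compressed
for (let i = 0; i < 1000; i++) compressed = zlib.brotliCompressSync(input, opts)
console.timeEnd('compress x1000')

console.time('decompress x1000')
for (let i = 0; i < 1000; i++) zlib.brotliDecompressSync(compressed)
console.timeEnd('decompress x1000')
```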

brotli libs:

open questions

  1. what's the min and max time from a validator publishing a block to the last node receiving the new block? (probably long)
  2. time for an honest seeder to submit to substrate? (probably short)
    • time to encode (= compress) the chunks?
  3. time for the next validator to include it in a block?
  4. how many chunks should substrate request from a seeder?

goal: make it infeasible for a seeder to produce encoded chunks just-in-time when challenged; they should be forced to already have the encoded chunks before a challenge arrives
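
One way to reason about that goal, assuming answers to the timing questions above: the challenge deadline has to sit between an honest seeder's read-and-send time and a cheater's encode-then-send time. A toy check (all numbers and names are hypothetical; the 35 ms figure is the per-18kb-chunk compression time implied by the benchmark above):

```js
// Toy model of the challenge window. An honest seeder only has to read
// and send chunks it already stored; a cheater must encode on demand.
// A safe deadline lets the first through and cuts the second off.
function challengeWindowIsSafe ({ encodeMsPerChunk, networkLatencyMs, deadlineMs }) {
  const honestMs = networkLatencyMs                     // read + send stored encoded chunk
  const cheaterMs = encodeMsPerChunk + networkLatencyMs // encode just-in-time, then send
  return deadlineMs > honestMs && deadlineMs < cheaterMs
}

// e.g. ~35 ms to compress one 18 kb chunk, ~5 ms latency (made up),
// then a 20 ms deadline would be safe:
console.log(challengeWindowIsSafe({ encodeMsPerChunk: 35, networkLatencyMs: 5, deadlineMs: 20 })) // true
```

Requesting several chunks per challenge (open question 4) multiplies `encodeMsPerChunk` for the cheater while barely changing the honest path, which widens the safe window.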
