dat protocol WG (#NUMBER)

August 28th, 2019 via IRC (#datprotocol)

Agenda discussion

People

  • noffle
  • rangermauve
  • mafintosh
  • pfrazee

Agenda

  • Catch up with what everyone has been doing
  • Review last meeting/action items

Catch up

  • mafintosh
    • mafintosh: We are just missing one blocker now, which is the cap system release. I think that will happen tomorrow or Friday
    • rangermauve: Is there a branch with progress that we could track?
    • mafintosh: the cap system is a new module / update to hypercore-protocol. I can put a link here when it’s up
    • noffle: what is the cap system? any links, if explanation is too long?
    • mafintosh: the updated capability system, the “know the hypercore key before you can download it” part. The new one makes it so you need to know any hypercore shared on the same stream beforehand, not just the first one. (A rough sketch of the idea follows after this list.)
    • rangermauve: It was so you could handshake with people without having to start with the same key IIRC, right?
    • mafintosh: yes
  • pfrazee
    • pfrazee: Last week Beaker got a tool to track forks of sites and to diff/merge those forks, and this week I've been moving Beaker to use the full "hyperdrive filesystem", where all user data is stored in a private root dat, which will be great for syncing your browser data between devices. So we're really getting down to the last items on the 0.9 todo list. Pretty sure we'll get a private beta started next month
    • rangermauve: Sweet. Private how? Like, invite-only?
    • pfrazee: I'll probably start sharing a build with a few folks. I want to have everything stable before I start doing the public beta
  • rangermauve
    • rangermauve: Last week I finished up hyperswarm-proxy, and this week I added the DatArchive API to the SDK. Now I'm working on getting the new hyperswarm and hyperdrive into the SDK.
  • noffle
    • noffle: hey friends! yeah, lots of hypercore things lately for me :) a few things:
      1. No updates on the hypercore perf issues we've been seeing on cabal (we're at 480 feeds on the public cabal!). There's a comment in hypercore's lib/replicate.js or lib/bitfield.js about using an index for bitfields that could speed up perf a lot; eager to figure out how to advance on that. We also see connections made and then almost immediately lost; not sure why cabal connections seem to struggle to be long-lived.
      2. I put some work into reporting progress on hypercore replication; see https://github.com/noffle/hypercore-progress. It basically works, but there are edge cases that hint at the limits of my hypercore understanding, like how a writable core doesn't seem to send WANTs (since it knows it has its own entries already?). Mapeo priorities kinda have me working elsewhere for now, so not sure when I'll get to pick that up again. Would love to see it in hypercore core. (A simplified progress sketch also follows after this list.)
      3. Been investigating a hang in hypercore replication, see holepunchto/hypercore#223. I hacked in a timeout in mapeo for now. Feeling a bit stuck on my investigation of that; hoping someone who knows more can pick up that work
    • noffle: tl;dr seeking support on 1-3
    • pfrazee: I'm glad you're doing that work; the scale tests and the progress information (and the bug fixing) are all useful
    • noffle: oh, one more thing: this work has got me thinking about how few core dat devs there are who really grok the low-level bits. I wonder if there are opportunities to build up that set of core devs and lighten the bus factor, etc.
    • noffle: I've been thinking about running weekly "office hours", and thought how useful it'd be if experienced folx like maf were available in that way on a regular basis, for knowledge sharing
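
To make the capability discussion above a bit more concrete, here is a minimal, illustrative sketch of the idea: a peer proves it already knows a hypercore's key by sending a keyed hash bound to the connection rather than the key itself, and the other side only replicates that core if the proof checks out. This is not the actual hypercore-protocol implementation; the function names, the per-connection nonce, and the use of HMAC-SHA256 are assumptions for illustration only.

```js
// Illustrative only: proving knowledge of a hypercore key without revealing it.
// The real hypercore-protocol capability system uses its own handshake and primitives.
const crypto = require('crypto')

// Assume both sides derived a per-connection nonce during the handshake.
function capability (connectionNonce, hypercoreKey) {
  // A keyed hash binds the proof to this connection, so it can't be replayed elsewhere.
  return crypto.createHmac('sha256', connectionNonce)
    .update(hypercoreKey)
    .digest()
}

// The responder only replicates a core when the remote capability matches,
// i.e. the remote peer already knew this core's key on this stream.
function verifyCapability (connectionNonce, hypercoreKey, remoteCapability) {
  const expected = capability(connectionNonce, hypercoreKey)
  return expected.length === remoteCapability.length &&
    crypto.timingSafeEqual(expected, remoteCapability)
}
```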
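
And a rough sketch of the kind of replication-progress reporting noffle mentions in item 2, using hypercore's public 'download' event, downloaded() and length. This is a simplification of what hypercore-progress does and glosses over the writable-core and sparse-replication edge cases noffle raises; the storage path and key argument below are placeholders.

```js
// Rough sketch: report block-level download progress for a hypercore we are reading.
// Assumes replication (e.g. via hyperswarm) is wired up elsewhere and omitted here.
const hypercore = require('hypercore')

const remoteKey = Buffer.from(process.argv[2], 'hex') // placeholder: the core's public key
const feed = hypercore('./progress-demo', remoteKey)

// update() waits until feed.length reflects the length announced by a connected peer.
feed.update(() => console.log(`remote feed has ${feed.length} blocks`))

feed.on('download', () => {
  const have = feed.downloaded(0, feed.length) // number of blocks held locally
  console.log(`replication progress: ${have}/${feed.length} blocks`)
})
```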

Summary/Review of Previous Meeting

Meeting Notes

  • Community Organizing / Documentation:
    • pfrazee: one thing we've been discussing is, after dat2 is out, mafintosh could move to a more comms-focused role
    • RangerMauve: What sort of timeline were y'all thinking about for that, BTW? And what would it look like in practice
    • pfrazee: we're thinking that releasing dat2 will inherently require a lot more comms anyway: doc-writing, DEP-writing, handling issues, etc. So the idea is, maf will start doing that to finish up the dat2 release and then just stick with it. Larger code tasks will be given to other people under his guidance/direction (depending on whether they're on the team or in the community)
    • rangermauve: noffle, Regarding low-level knowledge, would getting more people to read "How Dat Works" help? Or is that missing a lot too?
    • noffle: that's an incredibly useful doc. def more visibility. but there's even more detail not in there. like how hypercore decides sync is done, or the edge cases in how the msgs are sent and the bitfield ops that happen under the blanket. so much detail ;)
    • pfrazee: it's also going to need updates for dat2. I think a lot of that detail should probably be in the DEPs
    • noffle: as in already is, or ought to be? i actually haven't dug into those yet
    • pfrazee: both, depending on how recent or complete the DEP is (varies, you know)
    • rangermauve: Yeah, details in the DEPs would be good. I think it'd be good to try to split up that work rather than getting mafintosh to have to do all of it or something though. 😅
    • noffle: def
    • mafintosh: yep!
    • rangermauve: Hmmm, I think in SSB they tried to improve docs by splitting tasks up into small bits and crowdsourcing. Does that sound like something we could try doing?
    • pfrazee: I'd suggest we rehire duncan to update "How dat works" (can have BLL contribute some $) and then we'll have to handle the DEPs the old way, talk about what's needed and then assign authors
    • rangermauve: Yeah, that'd be great! I think Karissa is coming back from vacation next week so it'll be good to figure out how we can fund that
    • mafintosh: i think the most important thing is that dat2 gets to an overall stable point
    • rangermauve: Yeah, so I guess a bunch of stuff will still be in a bit of a limbo until then? Like the protocol issues noffle is facing?
    • mafintosh: sure, lots of the internals will need tweaks, but stabilizing first puts us in a situation where internal docs aren’t a waste, because otherwise they’d get invalidated asap
    • noffle: maybe more and clearer code comments could help with that, so they update as the code does for non-obvious code. there's a LOT of that :D
    • mafintosh: I’d rather write more docs. The code will always be complex; it’s an incredibly complex thing it does. But the flow and data structures are pretty understandable once you know them
    • noffle: all the more reason for good docs, whether comments or external or both
    • noffle: I think the clarity of how the core works & the # of core devs are related & right now there are very few core devs :)
    • RangerMauve: Yeah, maybe after we get this dat2 stuff figured out we should look at how to train up more people in tandem with the docs writing? K, well I guess that'll need to be figured out more at a later date
  • Loosening parsing in dat-dns TXT records
    • rangermauve: another thing to discuss is this DNS thing: dat-ecosystem-archive/DEPs#62
    • pfrazee: ryanramage says he's found some DNS providers don't properly handle multiple TXT records, and therefore he has to bundle multiple values into a single record with some kind of delimiter. The current dat-dns code doesn't allow for that, so he's asking that we loosen the parsing to potentially allow other values to co-exist with the datkey=$KEY record. I think the reasoning is fair but I wanted to check in with everybody before I give it the 👍
    • rangermauve: I guess the main question is which delimiter we should have by default. What do others do in this case? By that I mean, I'm all for loosening the parsing
    • pfrazee: I'm not sure we need to spec a delimiter. We could just look for datkey=[0-9a-f]{64} and use the first match (a small parsing sketch follows after these notes)
    • mafintosh: ya that’s pretty pragmatic
    • rangermauve: Should we ask him to amend the DEP with this logic or make a new one?
    • pfrazee: just amend the DEP I think. okay I replied, will take the lead on that
  • multiwriter
    • rangermauve: So, mafintosh, paul and I have been talking to multiwriter folks for a while, and it'd be nice if you could give a brief overview of what you had in mind for it so we can point people to it
    • pfrazee: let's table that, I'm going to talk with maf about it and put together a doc
    • RangerMauve: Oh ok, fair enough
  • ciphercore
    • rangermauve: Any thoughts about getting ciphercore mixed into the ecosystem within the SDK and Beaker? https://github.com/telamon/ciphercore Basically, encrypted at rest archives. (or hypercores)
    • noffle: makes me think of middleware more generally. I can imagine eg. a cabal that wants to encrypt each channel with different keys. not just one
    • mafintosh: At-rest encryption is extremely complex in practice so would need a thorough review. Also my gut tells me at-rest should be solved by a random-access module
    • pfrazee: I think this isn't just at-rest. it's over the wire as well, goal being you could backup to a peer without giving read access
    • rangermauve: Sorry, yeah that wasn't the right term
    • pfrazee: I'm fairly sure that's something that will be on the dat roadmap at some point
    • mafintosh: prob but too early right now
    • pfrazee: okay so rangermauve I think the answer might be, people can do custom stuff but there's not a fast upstreaming process here because it's a complex update. which I'm +1 on as well, I wouldn't want to bless an encryption solution without it going through some rigour
    • rangermauve: Fair enough. So people should mess around with it at the application layer and we'll figure out how to get it into hyperdrive or whatever later
    • mafintosh: prob something to look at when we do hit that in the future
    • rangermauve: Well, one of the reasons I ask is that cobox is hitting that now with the stuff they're doing in PeerFS
    • mafintosh: given our minimal resources we should be really keen on just focusing on our core experience also
    • pfrazee: I suggest we talk to them about encrypting at the file level first. if that's not a strong enough solution, we can talk them through a short-term solution
    • rangermauve: Fair enough
  • MDNS changes in hyperswarm
    • rangermauve: is anyone interested in talking about the MDNS changes I proposed? hyperswarm/discovery#7
    • pfrazee: I like the idea. minimize multicast as much as possible
    • rangermauve: I was thinking of tackling it sometime after integrating hyperswarm into the SDK
    • pfrazee: I don't see any problem there, mafintosh?
    • rangermauve: Maybe I should write up a DEP for it first? :P
    • mafintosh: there might be some hidden complexities
    • rangermauve: What sort of stuff should I watch out for?
    • mafintosh: since it’s not service discovery but peer discovery, the protocol might fight you. I.e. everyone is a server/client, but you don’t wanna find yourself. That’s pretty easy in the current DNS way
    • rangermauve: Well, I pictured it as service disc for the "hyperswarm DHT" and then going from there. Like, a local-only hyperswarm DHT
    • mafintosh: Ah, it's the DHT one, sorry
    • rangermauve: Yeah yeah. Basically, we reduce MDNS traffic to bootstrapping into a local DHT and then do all lookups against that
    • mafintosh: it’s gonna be a lot more traffic for smaller networks and less for huge ones
    • rangermauve: More traffic for MDNS advertising for every key?
    • mafintosh: yea a lot more
    • rangermauve: I would have thought that querying the DHT for a key would be less overall traffic. Where would it be coming from?
    • mafintosh: announces and DHT pings, since the topology is small for a DHT at that scale
    • rangermauve: Hmm. What would be a good path forward for handling lots of keys on a local network like Ink and Switch did?
    • mafintosh: it’s prob much more efficient to just group DNS records and do a randomised delay to see if another peer responded for you (a sketch of that follows after these notes). But if your local network is massive the DHT is better, massive meaning 1000s of computers
    • rangermauve: Yeah, I think that'll be something we'd run into in the solarpunk future with pirate mesh networks
    • mafintosh: we actually support storing local addrs in the DHT also. That’s prob the way to scale that case, assuming you have internet access
    • rangermauve: K I'll update the issue with that and keep thinking, then. :P
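
For reference, here is a minimal sketch of the loosened dat-dns TXT parsing discussed above: scan each TXT record for datkey= followed by 64 hex characters and use the first match, so other values can co-exist in the same record. The helper name and the use of Node's built-in dns module are assumptions for illustration; this is not the actual dat-dns code.

```js
// Minimal sketch of the loosened TXT parsing: accept a datkey anywhere inside
// a TXT record and use the first match found.
const dns = require('dns').promises

const DAT_KEY_RE = /datkey=([0-9a-f]{64})/i

async function resolveDatKey (hostname) {
  // resolveTxt() returns an array of records, each an array of string chunks.
  const records = await dns.resolveTxt(hostname)
  for (const chunks of records) {
    const match = DAT_KEY_RE.exec(chunks.join(''))
    if (match) return match[1] // first match wins
  }
  return null
}
```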
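
And a rough sketch of the duplicate-answer suppression mafintosh suggests in the mDNS discussion: on a query for a topic, wait a randomized delay and only respond if no other peer has already answered. It uses the multicast-dns module; the record name, peer info, and delay window are placeholders, and real hyperswarm discovery handles much more (filtering out your own answers, announce records, and so on).

```js
// Rough sketch: answer mDNS queries for a topic, but stay quiet if another peer
// already answered during a randomized delay (reduces multicast traffic).
const mdns = require('multicast-dns')()

const TOPIC = 'example-topic.dat.local' // placeholder record name for the topic
const MY_ANSWER = Buffer.from('peer=192.168.1.10:4000') // placeholder peer info

mdns.on('query', (query) => {
  if (!query.questions.some(q => q.name === TOPIC && q.type === 'TXT')) return

  let answered = false
  const onResponse = (response) => {
    // If anyone else responds for this topic, we don't need to.
    if (response.answers.some(a => a.name === TOPIC)) answered = true
  }
  mdns.on('response', onResponse)

  // Wait a random 0-500ms before responding, like mDNS duplicate suppression.
  setTimeout(() => {
    mdns.removeListener('response', onResponse)
    if (!answered) {
      mdns.respond({ answers: [{ name: TOPIC, type: 'TXT', ttl: 300, data: [MY_ANSWER] }] })
    }
  }, Math.floor(Math.random() * 500))
})
```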

Decisions Made

  • dat-DNS got approved

ACTION ITEMS

  • pfrazee: Follow up on dat-DNS changes
  • RangerMauve: update MDNS DHT issue with mafintosh's comments.
  • pfrazee: Talk to mafintosh about multiwriter