Wrapping existing developer tools #296
carsonfarmer started this conversation in Threads
The Thread DID discussion outlines a way in which we can make threads more of an open network of users and services. One of the key goals of this new focus is to enable (and encourage) external tool integration. By this we mean the ability to create and push data to the threads network via existing (external) tools such as databases (e.g., MongoDB, PostgreSQL, Redis), chat and messaging protocols (e.g., Matrix and ActivityPub clients), rich text editors (e.g., Quill, Slate, CodeMirror), etc.
Motivations
If the goal of a decentralized network is putting ownership back into the hands of users, then it is critical that users (be they end users or developers) are able to create content using their tools of choice. Why should a developer looking to leverage the threads network in their workflow have to migrate to a new database implementation when there are already lots of great databases out there? Wouldn’t it be more useful if that developer could just use the right database for the job, and then simply have that database sync to the decentralized web automatically? We think so, and this is part of what we are exploring with this document.
As soon as you start to shift from the concept of “threads the database” to “threads the data syncing network”, all sorts of new usage patterns begin to emerge. A particularly important one is local-first. The concept of offline-first or local-first software isn’t a new one. There are some great discussions (and summaries) of the ideas out there already. User ownership of data, interoperability, collaboration, and offline capability are all ideals we appreciate.
Remote sync
When thinking about the above constraints (plus a whole lot of technical considerations), it becomes useful to consider the existing threads protocol as a robust syncing mechanism for event sourced data. This is actually a core design of the original threads specification, and provides a useful jumping off point.
If we separate the job of data creation from data syncing, we find that the event sourcing (ES) and command query responsibility segregation (CQRS) patterns provide a useful framework for designing external tool integrations. CQRS is a design pattern whereby reads and writes are separated into different models, using commands to write data, and queries to read data.
With this pattern in mind, we simply need a) a way to extract events from external tools that is reasonably standardized, b) a transport mechanism to distribute said events, and c) a way to ingest these standardized events into external tools. Enter threads the protocol.
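To make the a/b/c split above concrete, here is a minimal sketch of the three pieces as they might fit together. Every name here (`ToolEvent`, `EventTransport`, `LocalTransport`) is a hypothetical stand-in, not an actual threads API; the in-memory transport simply plays the role the threads protocol would play as the real transport mechanism.

```typescript
// (a) A standardized event envelope extracted from an external tool.
interface ToolEvent {
  id: string;        // unique event id
  entity: string;    // e.g., the table/collection the event applies to
  op: "create" | "update" | "delete";
  payload: unknown;  // tool-specific change data
  timestamp: number;
}

// (b) A transport: anything that can carry standardized events to peers.
interface EventTransport {
  publish(threadId: string, events: ToolEvent[]): void;
  subscribe(threadId: string, handler: (e: ToolEvent) => void): void;
}

// In-memory stand-in for the threads network, for illustration only.
class LocalTransport implements EventTransport {
  private handlers = new Map<string, ((e: ToolEvent) => void)[]>();
  publish(threadId: string, events: ToolEvent[]): void {
    for (const h of this.handlers.get(threadId) ?? []) {
      events.forEach(h);
    }
  }
  subscribe(threadId: string, handler: (e: ToolEvent) => void): void {
    const list = this.handlers.get(threadId) ?? [];
    list.push(handler);
    this.handlers.set(threadId, list);
  }
}

// (c) Ingestion: apply incoming events to a local key-value "database".
const db = new Map<string, unknown>();
const transport = new LocalTransport();
transport.subscribe("thread-1", (e) => {
  if (e.op === "delete") db.delete(`${e.entity}/${e.id}`);
  else db.set(`${e.entity}/${e.id}`, e.payload);
});

transport.publish("thread-1", [
  { id: "u1", entity: "users", op: "create", payload: { name: "ada" }, timestamp: 1 },
]);
console.log(db.get("users/u1")); // { name: 'ada' }
```

The key design point is that neither end needs to know anything about the other tool: the database wrapper only needs to emit `ToolEvent`s, and the ingester only needs to consume them.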
Middleware
Imagine a set of software middlewares that intercept database calls, chat protocol messages, etc, and provide tooling to push, pull, rebase, and stash these local changes with remote thread peers via the threads network.
This basic design is purposefully modular, and is organized roughly into three components: the “local” component (a database wrapper, chat server bot, etc), the “middleware” (extraction and ingestion hooks), and the communication with a “network” (threads). Ideally, the “local” component is minimal, and provides only a light wrapper around a local database. In theory, almost any local database will eventually work. The middleware is similarly lightweight: a small set of sub-modules for hooking into (for example) a local database to extract the things we need to communicate with the network.
That brings us to the “network” component. This is where the magic of syncing to the decentralized web happens. Essentially, we track changes on the local database, and provide low level APIs to push, pull, and resolve any conflicts that arise when syncing with the network. In general, the remote is considered the “source of truth” in any real conflicts. In this sense, one could think of external tool integration as more of a federated system of peers, rather than pure peer-to-peer. As such, each local “network” module might connect to one thread peer at a time, possibly via a trusted relationship, and relies on that daemon for network operations and syncing.
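A remote-wins merge of the kind described above could look roughly like the following sketch. `LogEntry` and `resolve` are invented names for illustration; real thread logs are hash-linked event logs rather than the simple versioned key-value pairs used here.

```typescript
// Illustrative local log entry: a key, a value, and a per-key version.
interface LogEntry {
  key: string;
  value: string;
  version: number; // monotonically increasing per key
}

// Remote-wins merge: the remote is the source of truth in any real
// conflict (same version, different value), and also wins when newer.
function resolve(local: LogEntry[], remote: LogEntry[]): LogEntry[] {
  const merged = new Map<string, LogEntry>();
  for (const e of local) merged.set(e.key, e);
  for (const e of remote) {
    const mine = merged.get(e.key);
    if (!mine || e.version >= mine.version) merged.set(e.key, e);
  }
  return [...merged.values()];
}

const local: LogEntry[] = [
  { key: "title", value: "draft", version: 2 },
  { key: "body", value: "hello", version: 1 },
];
const remote: LogEntry[] = [
  { key: "title", value: "final", version: 2 }, // real conflict: remote wins
];
const merged = resolve(local, remote);
console.log(merged.find((e) => e.key === "title")?.value); // final
```

Note that entries only the local side has (here, `body`) survive untouched; the remote only overrides where the two sides actually disagree.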
Services
If you combine the above concepts with the concept of services, you can immediately see how traditionally "centralized" database APIs can be exposed via threads services, and then consumed across the network as thread peers exchange and sync data. Peers can advertise their available services via the "services" DID field (see the Thread DID discussion for details). Couple service provision and remote database access with the robust authentication and globally identifiable assets afforded by thread DIDs, and we now have a very clear path to onboarding web2 developers (and data) to the decentralized web.
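As a rough illustration, a peer's DID document carrying such a field might look like the fragment below. The exact field names, service types, and endpoints are assumptions for the sake of the example, not part of any finalized Thread DID spec.

```json
{
  "id": "did:thread:example123",
  "services": [
    {
      "id": "did:thread:example123#postgres",
      "type": "PostgreSQLService",
      "serviceEndpoint": "postgres://peer.example.com:5432"
    }
  ]
}
```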
Example
Often, web developers use database connection URIs on the backend to interact with centralized databases. An example of this might be a PostgreSQL database that stores user information. In these cases, developers need to run a server, manage the database, and specify the access control mechanisms required to limit database operations. In the proposed thread services scenario, developers may be able to "push" some, or all, of these components to the network.
In one scenario, peers may advertise a PostgreSQL service directly in their DID, allowing developers (users) to replace an existing database connection URI with one specified by the peer. This can be conceptualized as a "database as a service". In this case, service discovery would look something like a user providing a query to the registry, with detailed specifications including throughput, replication, etc. This could be in response to a CI test (short-term service), to increase storage capacity (mid-term service), or even as part of an archiving ETL (long-term service with payment via FIL).
In a similar scenario, a developer may be running their own local database, and wish to query the network for peers that are willing to "sync" their database events to a thread. Once synced to the network, the originating peer could consume these database events in their user-facing dapp(s), on other backend services, or even flush these events to Filecoin via an additional service peer.
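In the first scenario, the developer-facing change could be as small as resolving a connection URI from the peer's DID document instead of hard-coding one. The sketch below is hypothetical throughout: `databaseUri`, the `PostgreSQLService` type, and the endpoints are all invented names used to show the shape of the swap.

```typescript
// Minimal shapes for a DID document advertising services (illustrative).
interface Service {
  id: string;
  type: string;
  serviceEndpoint: string;
}
interface DidDocument {
  id: string;
  services: Service[];
}

// Pick an advertised database endpoint, falling back to a local URI
// when the peer offers no matching service.
function databaseUri(doc: DidDocument, type: string, fallback: string): string {
  const svc = doc.services.find((s) => s.type === type);
  return svc ? svc.serviceEndpoint : fallback;
}

const peerDoc: DidDocument = {
  id: "did:thread:example123",
  services: [
    {
      id: "did:thread:example123#postgres",
      type: "PostgreSQLService",
      serviceEndpoint: "postgres://peer.example.com:5432/app",
    },
  ],
};

// Drop-in replacement for a centralized connection string:
const uri = databaseUri(peerDoc, "PostgreSQLService", "postgres://localhost:5432/app");
console.log(uri); // postgres://peer.example.com:5432/app
```

The rest of the developer's stack (ORM, query code, migrations) would be unchanged; only the connection string now points at a peer-provided service.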