Description
I'd like to understand the scalability of the decision to write access controls along with every resource. I'm in pretty deep waters here and my intuition may be completely off, but at the very least we should probably provide some implementation advice, so here goes:
If we write an access policy to all resources, I assume we need to block all access until all of them have been written; if not, we might have a situation where a controller writes an access policy only to see it take effect much later. So we have to make sure this operation can be done quickly.
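To make sure I'm reading the proposal right, here is a minimal sketch of the naive propagation I have in mind. The names (`Resource`, `propagatePolicy`, `writeAcl`, the `.acl` suffix) are purely mine, for illustration, not taken from the spec or any server:

```typescript
// Rough sketch of the propagation I'm worried about; all names here are
// made up for illustration, not taken from any actual Solid server.
interface Resource {
  uri: string;
  children: Resource[];
}

// One ACL write per resource in the subtree, so changing a policy costs
// O(number of resources underneath), and reads arguably have to be held
// back until the walk finishes, or some of them will see the old policy.
async function propagatePolicy(
  root: Resource,
  policy: string,
  writeAcl: (uri: string, policy: string) => Promise<void>,
): Promise<number> {
  let written = 0;
  const stack: Resource[] = [root];
  while (stack.length > 0) {
    const res = stack.pop()!;
    await writeAcl(`${res.uri}.acl`, policy); // ".acl" naming is just an assumption
    written++;
    stack.push(...res.children);
  }
  return written;
}
```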
Intuitively, a URI space is represented by a trie, and searching a trie of n resources has an average time complexity of O(log n) and a worst case of O(n). Since we know nothing about how people will organize their URI space, I think we should assume the worst case.
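Just to make that intuition concrete, here is a toy trie over URI path segments, again entirely my own sketch and not anything from the spec:

```typescript
// Toy trie over URI path segments, only to make the worst-case intuition concrete.
class TrieNode {
  children = new Map<string, TrieNode>();
  isResource = false;
}

class UriTrie {
  root = new TrieNode();

  insert(uri: string): void {
    let node = this.root;
    for (const segment of uri.split('/').filter(Boolean)) {
      if (!node.children.has(segment)) node.children.set(segment, new TrieNode());
      node = node.children.get(segment)!;
    }
    node.isResource = true;
  }

  // Lookup walks one node per path segment. If people nest resources one level
  // per resource (/a/b/c/... n segments deep), a single lookup touches roughly
  // n nodes, which is the O(n) worst case I have in mind; a reasonably balanced
  // hierarchy stays much closer to O(log n).
  lookup(uri: string): boolean {
    let node = this.root;
    for (const segment of uri.split('/').filter(Boolean)) {
      const next = node.children.get(segment);
      if (!next) return false;
      node = next;
    }
    return node.isResource;
  }
}
```

With a wide, shallow pod layout the lookup stays cheap; with a pathologically deep one it grows with the number of resources, which is why I don't think we can rely on the average case.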
My thinking, furthermore, is that n is far from constant: I certainly hope that Solid takes the lion's share of an exponential future, so n(t) ~ exp(t). If the trie assumption is correct, then as the data grows, write speeds will get exponentially slower in the worst case, or linearly slower in the average case. In addition, for every resource there's an insert into a set, but I suppose that's a constant-time operation these days.
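Spelling that step out, under my own assumption that a write is dominated by the trie search: in the worst case T_write(t) ∝ n(t) ~ exp(t), i.e. exponentially slower over time, while in the average case T_write(t) ∝ log n(t) ~ t, i.e. only linearly slower.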
I suppose that volatile memory is getting exponentially faster, and SSDs seem to be going that way too, but HDDs aren't; their performance has only improved linearly. And then there's price/performance, which I haven't been following too closely.
It worried me to read that every change of permissions would have to be propagated this way; could anyone please comfort me on this one?