Queue-it’s online queue system works seamlessly alongside CDNs, without conflicts, and preserves all CDN functionality. Queue-it can be triggered on the entire site or just at page level, and you can set up one or more queues that are triggered per page. This allows you to queue end-users only on dynamic pages, like cart / check-out pages, and still serve the CDN’s static and semi-static cached pages without a queue. You can therefore apply a queue only for a particular sale, so that only the end-users seeking that sale are placed in line, while your other end-users browse the rest of your website without any disruption. When using Queue-it during an end-user overload, end-users are redirected away from the selected page and placed in a first-come, first-served queue. After waiting in line, end-users are redirected back to the page once their turn comes up. The queue provides informational metrics such as estimated waiting time and expected time of arrival back to your website, ensuring online fairness for all of your end-users. The subsequent user journey is exactly the same as it would be if Queue-it were not implemented, and still includes the benefits delivered by the CDN.

Hey - interesting, I hadn’t seen that repo yet. I haven’t read the code in depth, so I don’t know if it would provide horizontal scalability in the sense of adding capacity (e.g. “add more servers to handle more concurrent docs/users”). It does provide fault-tolerance/high-availability (HA) benefits: if one WS server goes down, the other(s) can continue to accept updates. Now redis-server is the single point of failure unless you use redis-cluster or something similar… but still, this is all about HA rather than perf/max concurrency.

For the library’s usage of redis pubsub, it appears to provide similar functionality to Redis – Hocuspocus: since each message is replicated to every attached Yjs websocket server and there is no sharding/routing, all the servers should see the same or similar CPU and RAM usage, and the library does not appear to provide any sort of performance scalability. I also don’t yet understand the purpose of the redis queues in that library: given how the queue key is constructed in getDocUpdatesKey (src/redis.ts), it looks like the queue key depends only on the doc ID and does not vary per server, so I don’t see how this would provide a sort of backpressure etc… yeah, I don’t really get what this library is trying to do with the queues.

It is probably easier to start by spawning a y-websocket server on a single VM instance until you’ve run some load testing to understand where the bottleneck is for your particular app’s usage patterns. You can vertically scale up a single server quite a lot. If I were designing a system to scale out to accept a large number of concurrent users, I’d probably do some kind of sharding/routing system; I don’t know of a Yjs websocket library offhand that does this, although there may be one. There are also the serverless approaches, e.g. Y-serverless (AWS Lambda + DynamoDB as a provider) - I haven’t read those very carefully to understand what tradeoffs they bring. Hope this helps - good luck with your application!

Regarding the purpose of the redis queues: they provide temporary storage for the most recent updates of a document, so that when a new user joins a document which is being actively edited by other users, the new user gets the freshest state of the document (as persisting updates in the db can take some time). Regarding the use of redis pubsub, the sharding happens by documentId… I can go deeper into this if needed, but you are welcome to go through the code. Essentially, all servers that are powering the same collaborative session need to have their own copy of the document under collaborative editing, and these copies need to be kept in sync via pubsub. This redis pubsub can easily be replaced with something like GCP Cloud Pub/Sub if you have enough scale that Cloud Pub/Sub will give you good latencies.

Regarding “this is all about HA rather than perf/max concurrency”… not really… a very simplified version of how horizontal scaling happens is that the browser sends a websocket request, and ECS (or whatever you are using) routes that request to a node that has less load. Suppose you have 10000k-ish collaborative editing sessions going on across 100-ish documents, and consider document #56 out of those 100 documents.
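The fan-out pattern described in the thread (every server keeps its own copy of a document, kept in sync by publishing updates on a per-document channel) can be sketched with a toy in-memory broker standing in for Redis pub/sub. The `Broker` and `DocServer` names are illustrative, not the library’s actual API:

```typescript
// Toy stand-in for Redis pub/sub: one channel per documentId.
type Handler = (update: string) => void;

class Broker {
  private channels = new Map<string, Handler[]>();
  subscribe(channel: string, handler: Handler): void {
    const list = this.channels.get(channel) ?? [];
    list.push(handler);
    this.channels.set(channel, list);
  }
  publish(channel: string, update: string): void {
    // Fan-out: every subscriber (i.e. every attached server) gets the update.
    for (const h of this.channels.get(channel) ?? []) h(update);
  }
}

// Each server holds its own copy of every doc it serves and applies
// whatever arrives on that doc's channel.
class DocServer {
  docs = new Map<string, string[]>(); // docId -> applied updates
  constructor(private broker: Broker) {}
  join(docId: string): void {
    this.docs.set(docId, []);
    this.broker.subscribe(docId, (u) => this.docs.get(docId)!.push(u));
  }
  edit(docId: string, update: string): void {
    this.broker.publish(docId, update);
  }
}

const broker = new Broker();
const a = new DocServer(broker);
const b = new DocServer(broker);
a.join("doc-56");
b.join("doc-56");
a.edit("doc-56", "insert 'hi' at 0");
console.log(b.docs.get("doc-56")); // both servers' copies now contain the update
```

Swapping `Broker` for Redis (or GCP Cloud Pub/Sub, as the reply suggests) changes only the transport; the per-document channel and per-server document copy stay the same.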
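The stated purpose of the redis queues, buffering a document’s most recent updates so a late joiner sees edits that have not yet been persisted to the database, can be sketched with a plain array standing in for a per-document Redis list (think `RPUSH`/`LRANGE`). Function names here are made up for the sketch:

```typescript
// Per-document buffer of recent updates, a stand-in for a Redis list.
const recentUpdates = new Map<string, string[]>();

function recordUpdate(docId: string, update: string): void {
  const log = recentUpdates.get(docId) ?? [];
  log.push(update); // analogous to RPUSH on a doc-keyed list
  recentUpdates.set(docId, log);
}

// A newly joining client loads the persisted state from the db,
// then replays the buffered updates that the db hasn't caught up on.
function catchUp(docId: string, persisted: string[]): string[] {
  const buffered = recentUpdates.get(docId) ?? []; // analogous to LRANGE 0 -1
  return [...persisted, ...buffered];
}

recordUpdate("doc-56", "u1");
recordUpdate("doc-56", "u2");
console.log(catchUp("doc-56", ["u0"])); // ["u0", "u1", "u2"]
```

A real implementation would also trim the buffer once updates are safely persisted; this sketch only shows why the buffer makes the new user’s first view fresh.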
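The reply’s remark that “the sharding happens by documentId” can be illustrated with a stable hash that maps each document to one server, so all traffic for a given doc lands on the same shard. The hash and server names are assumptions for the sketch, not taken from the library:

```typescript
// Simple stable string hash (djb2-style), kept unsigned with >>> 0.
function hashDocId(docId: string): number {
  let h = 0;
  for (const ch of docId) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h;
}

// Pick the shard that owns a document's channel/traffic.
function shardFor(docId: string, servers: string[]): string {
  return servers[hashDocId(docId) % servers.length];
}

const servers = ["ws-0", "ws-1", "ws-2"];
// The same document always maps to the same server, while different
// documents spread across the fleet.
console.log(shardFor("doc-56", servers));
console.log(shardFor("doc-56", servers) === shardFor("doc-56", servers)); // true
```

Note the trade-off the thread circles around: modulo hashing reshuffles documents when the server list changes, which is why production systems often use consistent hashing or an external router (e.g. the load balancer ECS fronts) instead.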