- LIVE UPDATING DATA -
Even with the best intentions, it's likely that you've created at least one tangled mess with GraphQL. In theory it should simplify things, but in practice the real world is always harder.
Ideally, a GraphQL server should present a schematic representation of data and its relations, and enable a client to make a single request for exactly the data it needs (and no more). In practice, data is not static, so subscriptions and repeated re-queries are necessary. The schema changes too, and as the nesting of relations grows, so does the payload.
Eventually you face a choice: either the nesting grows and you accept the hit on payload size, or you break up relations and fire multiple queries. As long as the first option (Over Querying) remains doable you are fine, but at some point the load on both client and server becomes problematic.
Somebody toucha my spaghet
Breaking up relations server-side means repairing those relations client-side. That seems acceptable, but if you stop and think about it, you are creating a Tightly Coupled system. Not only must the client understand the data, it must now also understand the relations between the data. This greatly increases complexity.
To make matters worse, the client must not only understand the relations and repair them client-side, it must also subscribe to every update of each relevant event. That means it has to know what those relevant events are.
This should be the server's job, but now the client is burdened with understanding the entire landscape behind the GraphQL server.
When starting this quest, Redux is usually introduced, and the basis for a nice spaghetti is prepared. Toss in some more subscriptions, add partial querying, garnish with optimistic rendering, and the tangled mess is complete.
Take for example the design below. It represents a basic flow where you query Posts from the server, each of which includes an Author. Two subscriptions, for updates on Posts and on Authors, are created for when data changes. When new data arrives at the server, it informs the client via the relevant subscription, which in turn re-executes the query.
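This flow could be sketched with documents along the following lines. Note that the field and operation names (`posts`, `postUpdated`, `authorUpdated`) are illustrative placeholders, not from a real schema:

```python
# Hypothetical GraphQL documents for the flow described above.
# The client fetches Posts with their nested Author in one query...
POSTS_QUERY = """
query {
  posts {
    id
    title
    author { id name }
  }
}
"""

# ...and then needs TWO subscriptions, because updates can originate
# from either entity behind the server.
POST_UPDATED_SUBSCRIPTION = """
subscription {
  postUpdated { id title }
}
"""

AUTHOR_UPDATED_SUBSCRIPTION = """
subscription {
  authorUpdated { id name }
}
"""
```

Whenever either subscription fires, the naive approach is to re-run `POSTS_QUERY` in full.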
Imagine an update to an Author. It could be that re-executing the entire Posts query is doable, but that would be massive Over Querying, since the posts didn't change, just some nested data. Or imagine the updated Author arrives in the body of the subscription (or via a separate Author query); now the client must update all of its local Post data itself. Now imagine that Authors also have nested data of their own. That's already a lot of work, and this is just a simple example; it grows into a complex product quite rapidly. So it's either High Complexity or Over Querying... or is it?
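To make the "High Complexity" half of that trade-off concrete, here is a minimal sketch (plain Python dictionaries, no real GraphQL client) of the client-side bookkeeping just described: when an Author update arrives over a subscription, the client itself must find and patch every cached Post that embeds that author.

```python
# A toy client-side cache holding the result of a Posts query,
# with Authors nested inside each Post.
cache = {
    "posts": [
        {"id": "p1", "title": "Hello", "author": {"id": "a1", "name": "Ada"}},
        {"id": "p2", "title": "World", "author": {"id": "a1", "name": "Ada"}},
    ]
}

def on_author_updated(cache, author):
    """Repair the Post -> Author relation in the local cache.

    The client has to know that Posts embed Authors to do this at all:
    this is exactly the tight coupling the article warns about.
    """
    for post in cache["posts"]:
        if post["author"]["id"] == author["id"]:
            post["author"].update(author)

# An update arrives via the Author subscription...
on_author_updated(cache, {"id": "a1", "name": "Ada Lovelace"})
# ...and every cached Post referencing that Author is patched by hand.
```

And this is the easy case: add one more level of nesting, or optimistic rendering, and the patching logic multiplies.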
Amazing improved flow - Live Subscriptions
No, there is a better way, a new way of doing things. Why settle for High Complexity, or even partial Over Querying? Why not just receive the data that has actually changed? Let the GraphQL server work out all the resolving of complex relations, and just send the diff between the new state and the current state.
With only minimal impact on an existing client/server setup, Live Subscriptions can work out the diff and send the smallest possible payload with all relations intact. No more relation management in the client, no more Over Querying whatsoever. Even when just one field of an entity changes, Live Subscriptions will update only that one field, without sending the entire entity.
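The core idea can be sketched in a few lines. This is a simplified toy diff over nested dictionaries, under the assumption that real implementations typically emit JSON-Patch-style operations rather than raw dicts:

```python
def diff(old, new):
    """Return only the fields of `new` that differ from `old`."""
    patch = {}
    for key, value in new.items():
        if key not in old:
            patch[key] = value  # newly added field
        elif isinstance(value, dict) and isinstance(old[key], dict):
            nested = diff(old[key], value)  # recurse into nested entities
            if nested:
                patch[key] = nested
        elif old[key] != value:
            patch[key] = value  # changed field
    return patch

previous = {"id": "p1", "title": "Hello", "author": {"id": "a1", "name": "Ada"}}
current = {"id": "p1", "title": "Hello", "author": {"id": "a1", "name": "Ada Lovelace"}}

# Only the single changed field goes over the wire:
print(diff(previous, current))  # {'author': {'name': 'Ada Lovelace'}}
```

The server resolves the full query result as usual, but the client only ever receives that tiny patch, with the relation (which Post the author belongs to) preserved by the shape of the diff.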
Sure, there are some considerations, so let's address them:
Is this completely revolutionary?
No. Live updates have been a thing for a while, just not in GraphQL. Take Firebase, for example, where they have been the default from the very beginning.
Why isn't this in the core framework?
Well, GraphQL solves quite a few issues that traditional REST has, but it isn't perfect.
Is it production tested?
Yes! It is. In fact, it is operational in Mission Critical Applications right now. It's robust and ready for safe use.
It all sounds too good to be true?
Yeah, it does, doesn't it? Read on in this blog for implementation details and further considerations. But it really is as good as it sounds.