This is one area where self-hosted solidly wins over anything service-oriented: you are in control, to the point that you, and not some company somewhere, make the call on when a product is no longer viable. The decoupling we had with licensed software that you run on infrastructure you control is pretty close to ideal for the customer: you give the provider of the software a one-time payment and after that it is literally your problem. At the same time, that is exactly what enabled SaaS: companies do not like one-shot income on something that might generate recurring revenue, and end users do not want to deal with the hassle of installing, maintaining, backing up, securing and updating all this software.
Even so, every time I sign up for some SaaS component I am very much aware that I'm giving up something precious in terms of control, and every time I read an announcement like this one (or the Inbox one the other day), I'm happy that we do our best to use as few outside services as possible.
SaaS can get a product out fast. But yes, unless one has some kind of special contract with a SaaS company guaranteeing the continuity of the service for as long as one needs it, never rely on SaaS for too long, unless the solution is open source. For all we know, Firebase could be gone in two or three years...
This is why I really like now.sh. They've put a ton of effort into fitting the patterns already in use for self deployments, with only a tiny bit of overhead on that for env vars and domain configuration (~10 short lines of configs for one of my current projects).
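For context, a minimal `now.json` along these lines covers most of that overhead; the project name, alias, and secret names here are made up for illustration, not taken from any real project:

```json
{
  "name": "my-app",
  "alias": "myapp.example.com",
  "env": {
    "DATABASE_URL": "@my-app-database-url",
    "API_KEY": "@my-app-api-key"
  }
}
```

The `@`-prefixed values refer to secrets stored with the platform rather than committed to the repo, which is what keeps the config this short.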
Dammit, not Inbox. I've been using it since the beginning. You know, this is the kind of stuff that prevents me from diving too deep into their tech and products. I love Flutter but I'm scared about its fate.
Flutter, Dart, Fuchsia: these are some great technologies, but they might end up in the bin some day.
In the end, I tell myself, I'm just a user. Imagine the state of the developers who spent years working on these products when this happens, across the entire industry. It's just that I expected better from Google.
it's hard to gauge opportunity cost, since architectures fade in and out of popularity and patterns succeed where the frameworks that introduced them fail.
I doubt Rob Pike and Ken Thompson would give up the experience of writing Plan 9; its lessons persist in Go and other systems they've designed. Google's tech stack was pioneered by systems programmers who built their careers writing for supercomputers and were then forced to target commodity hardware.
Patterns have a much longer shelf-life than their originating products.
>it's hard to gauge opportunity cost, since architectures fade in and out of popularity and patterns succeed where the frameworks that introduced them fail.
It might be hard but it's also necessary. That's true for many other things that are hard.
What would the opposite be?
Having people adopt any new technology and lose several years of their lives studying a doomed fad, without evaluating its long-term potential and opportunity cost, because that's "hard" and "architectures fade in and out of popularity"?
Instead, we should encourage people to do the hard thing and evaluate the things they intend to study, and their opportunity cost, before they devote any substantial time to them.
Some will still get it wrong and invest in the wrong tech, but overall doing due diligence before jumping in will be better than just blindly going for whatever catches their fancy or is hyped.
>I doubt Rob Pike and Ken Thompson would give up the experience of writing Plan 9; its lessons persist in Go and other systems they've designed.
That was their job and they were paid for it. And Plan 9 was also their own creation and original research.
That's not the same as some third party devoting themselves to a new technology just because it looks cool, it's hyped at the moment, etc.
This is why we need truly distributed web apps rather than monolithic, tied-to-a-domain apps. Doing this right is a hard problem, and many current solutions are still essentially centralized despite relying on certain aspects of P2P.
Open standards are superior to the complexity of distributed web apps. I can port my Gmail account to Fastmail with a few clicks. There's no lock-in with my email data. I can put my mail anywhere! That should be the goal for all hosted data: portability, so you're never beholden to a SaaS provider.
But this is also starting to not be true. Gmail is very slowly adding features that aren't in SMTP/IMAP, and one day they might just break away and have their own closed email. This happened on multiple occasions with XMPP: at one point you could write from ICQ to Hangouts to Facebook Messenger, but after these companies gathered a critical mass of users they turned off federation. This is one of the main reasons people use Facebook Messenger (almost everyone has it), and it could happen to Gmail quite easily. Almost everyone had Gmail at some point, so this idea is entirely plausible. Many people would then keep a Gmail account just to communicate with people who don't have email.
I hate to beat the dead horse as I often do on HN, but the support of open standards requires constant vigilance. If Gmail begins to "wall the garden off", you must move to a more open provider (such as Fastmail, but any provider supporting standards-compliant email works).
Firebase is revolutionary. If you haven't used it: it lets you remove your reliance on all the server-side logic you currently maintain. It is a huge philosophical change, but it is the perfect complement to serverless architectures.
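To illustrate that philosophical change: access control that would otherwise live in your server code moves into declarative security rules evaluated by the platform. A sketch in Firestore's rules language might look like the following (the `notes` collection and its `owner` field are hypothetical, invented for this example):

```
service cloud.firestore {
  match /databases/{database}/documents {
    // Hypothetical collection: each note stores its owner's auth uid
    match /notes/{noteId} {
      allow read, write: if request.auth != null
                         && request.auth.uid == resource.data.owner;
    }
  }
}
```

Clients then talk to the database directly, and the rules, not a middle tier, decide who can read or write what.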
This is smart by Google. AWS AppSync provides much of the same functionality but gives you the benefit (if you already know GraphQL) of GraphQL. Or it forces you to learn GraphQL, which is also the downside. Forcing Fabric customers into the Firebase ecosystem makes it look like this was a cheap acquisition from Twitter after all.
The huge win with committing to either platform is that you simply focus on your data and forget about the challenging problem of syncing that data across all the platforms. Multi-client sync is a big, big, big challenge, and no one does it well on their own.
I’ve found it’s only good versus your own stack for small prototypes and MVPs, and that the debt builds up faster than with anything else I’ve ever used (the cost too!), but YMMV.
I use the analytics heavily though; the built-in network reliability and performance analytics you get for one line of code have literally saved my company millions of dollars. Mentally it was hard to ditch direct GA, but it’s really so great.
We follow this exact same architectural approach at hasura.io as well. We are an open-source realtime GraphQL engine on Postgres, and just last week we announced event triggers on Postgres, which let you call webhooks/serverless functions any time there is a change in your database. (I work at Hasura)
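The receiving end of such a trigger is just an HTTP handler that unpacks the change event. A minimal sketch, assuming a payload shape along the lines of Hasura's event triggers (`event.op`, `event.data.old`/`new`, `table.schema`/`name`; the exact field names here are my approximation, check the docs before relying on them):

```typescript
// Approximate shape of a database-change event delivered to a webhook.
interface ChangeEvent {
  event: {
    op: "INSERT" | "UPDATE" | "DELETE";
    data: {
      old: Record<string, unknown> | null; // previous row (null on INSERT)
      new: Record<string, unknown> | null; // current row (null on DELETE)
    };
  };
  table: { schema: string; name: string };
}

// Pure handler logic: pick the relevant row image and describe the change.
// In a real function you'd fan out to email, search indexing, etc.
function describeChange(e: ChangeEvent): string {
  const row = e.event.op === "DELETE" ? e.event.data.old : e.event.data.new;
  return `${e.event.op} on ${e.table.schema}.${e.table.name}: ${JSON.stringify(row)}`;
}
```

Keeping the handler a pure function of the payload makes it trivial to unit test without a database or a queue in the loop.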
One additional feature of AppSync is that you're not tied to any particular database implementation; you can use any database (or data source for that matter, including existing REST APIs and Lambda functions).
Just a reminder that if you critically depend on an IaaS/PaaS/SaaS, you should be able to replace it trivially, sometimes as quickly as overnight. Having the service be completely open source so you can self-host generally helps, as do open protocols and standards.
Especially important when it's a Google service given their historical speed of deprecation and abruptness in announcement.
I think OP means you should write your interfaces with SaaS such that changing to a different provider is relatively easy (just rewrite a little client code or something to fit the new provider to your abstractions).
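That pattern is just a thin interface between your application and the vendor SDK. A minimal sketch (the `BlobStore` interface and all names here are illustrative, not from any particular SDK):

```typescript
// A thin abstraction over a hosted object store, so swapping providers
// (S3, Cloud Storage, your own disk) means writing one small adapter.
interface BlobStore {
  put(key: string, data: string): Promise<void>;
  get(key: string): Promise<string | undefined>;
}

// In-memory adapter: the "good enough" fallback you could swap in overnight.
class MemoryStore implements BlobStore {
  private blobs = new Map<string, string>();
  async put(key: string, data: string): Promise<void> {
    this.blobs.set(key, data);
  }
  async get(key: string): Promise<string | undefined> {
    return this.blobs.get(key);
  }
}

// Application code depends only on BlobStore, never on a vendor SDK,
// so a provider change touches one adapter, not the whole codebase.
async function saveProfile(store: BlobStore, userId: string, json: string) {
  await store.put(`profiles/${userId}`, json);
}
```

An S3 or GCS adapter would implement the same two methods against the vendor client, and nothing above this seam needs to change.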
I guess the nuance of my statement would be, trivially replaceable doesn't mean build a better product.
Good examples of what I consider trivially replaceable would be services like Pingdom, DNS, S3, Cloud SQL since you can either easily build a "good enough" version, switch to another provider, or deploy your own from source.
Good examples of services that are very hard to switch off of are things like Cloud Firestore or AWS Lambda.
Vernor Vinge has a character who comes out of cryo and discovers that code he wrote 500 years ago is still in production. It’s just under layers and layers of abstraction. He proceeds to pwn a bunch of hardware to aid a counter-conspiracy.
Both are great reads. A Fire Upon the Deep had some cooler concepts, but narrative-wise I prefer A Deepness in the Sky. The emotional response I had to the memory-control slavery aspects made my blood boil.