Fighting vendor lock-in and designing testable serverless apps

(vacationtracker.io)

144 points | by slobodan_ 1823 days ago

20 comments

  • stickfigure 1823 days ago
    This is what I call fake work.

    This company builds a team vacation tracker. As long as it's reliable, their customers could not give a rat's ass whether it runs in AWS, Google, or Joebob's House Of Ill Compute. Every engineer hour spent creating abstraction layers and docker containers and whiz-bang multiplatform plumbing is an hour that could have been spent working on something your users actually care about.

    Switching clouds is a cost optimization. You have to be an enormous company (or in a resource-intensive domain) before this is more productive than building features. It means you've moved out of the growth phase and into the "how do I milk what I've got" phase.

    Vendor lock-in is something that engineers care about because they like playing with tech. It's cool switching platforms or databases or whatnot because it really feels like hard, sophisticated work! But it doesn't move the needle.

    • theknarf 1823 days ago
      > Due to a change in how they report data usage, our monthly costs for Firebase, a SaaS provided by Google, has increased from $25 a month to what is now moving towards the $2,000 mark — with no changes to our actual data use. This change was made without warning.

      ref https://news.ycombinator.com/item?id=14356409

      Vendor lock-in can make or break startups if you're not careful enough.

      • gambler 1823 days ago
        >Vendor lock-in can make or break startups if you're not careful enough.

        And it's often worse for bigger companies. Startups can often just rewrite their crap. You can't do that overnight with several years' worth of legacy systems if they rely on vendor-specific features.

      • dev_dull 1823 days ago
        I imagine the ratio to be something like 1,000,000,000:1 for deaths from premature optimization vs deaths from vendor-lock-in-related changes.
        • hinkley 1823 days ago
          It's the Precautionary Principle.

          The odds may be low, but there are vendor lock-in scenarios that sink a company fast. Having experienced that once, you don't want that sense of powerlessness again.

          The part where I agree with the sentiment in this chain is where people try to avoid this by putting in big abstraction layers of custom crap.

          It's possible to use Law of Demeter principles to keep the vendor details of a lot of subsystems from leaking across your entire app. But that requires architectural skills that are de-emphasized in the Ship Every Week world we live in.

          (I'm not implying causality there, or maybe the causality is in the other direction. These things are hard to learn and harder to teach and if you can't get one thing you try something else.)
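
          To make the Demeter point concrete, a minimal sketch (Python; module and names are my invention) of confining a vendor SDK to a single module so its details can't leak across the app:

              # storage.py - the only module in the app that imports the vendor SDK
              import boto3

              _s3 = boto3.client("s3")
              _BUCKET = "example-app-data"  # hypothetical bucket name

              def save_document(key: str, data: bytes) -> None:
                  # callers pass and receive plain types; no boto3 client, response
                  # dict, or vendor exception ever crosses this boundary
                  _s3.put_object(Bucket=_BUCKET, Key=key, Body=data)

              def load_document(key: str) -> bytes:
                  return _s3.get_object(Bucket=_BUCKET, Key=key)["Body"].read()

          Swapping vendors then means rewriting one file, not chasing SDK calls across the whole codebase.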

      • stickfigure 1823 days ago
        Perhaps this is the "exception that proves the rule"? That situation was unusual enough that one of the Firebase founders reached out and gave them a bunch of credits and helped them work through the issue.

        There's certainly a continuum here. The most minimal risk mitigation strategy is simply to use your own domain (as opposed to the vendor's domain), and that requires nearly zero effort. Unfortunately the team in that article made that mistake, and they're pretty honest about admitting it.

        Assuming you don't make rookie mistakes at the beginning, you can always move to another platform - it's just a question of how much work it will take. Is it better to put that work in up front for the 0.01% chance you'll need it? I think not.

        • fauigerzigerk 1823 days ago
          >Perhaps this is the "exception that proves the rule"?

          The recent 12x Google Maps API price hike is another such "exception that proves the rule", apparently.

        • dragonwriter 1823 days ago
          > Is it better to put that work in up front for the 0.01% chance you'll need it? I think not.

          Over a sufficiently long time horizon, the chance is much greater than 0.01% and the longer you go without doing the work, the more it costs to do the work.

          Sure, it's more a problem that bites you when you've become a big enterprise, but it bites hard, and some of us work in big enterprises on systems that could reasonably be operational for generations going forward (and perhaps have been, going backward).

          • stickfigure 1823 days ago
            I would happily accept that problem if skipping the extra work enhances my chance of becoming a big enterprise in the first place.
            • dragonwriter 1823 days ago
              > I would happily accept that problem if skipping the extra work enhances my chance of becoming a big enterprise in the first place.

              And that's the right choice in some cases. But the added up-front effort can be a lot less than the added reengineering effort, and growing into a big enterprise doesn't mean you are immune to disruption by more agile startups.

              And, of course, those who are already in big enterprises have different concerns (while premature adoption of the mitigations at issue can be engineering resume padding in startups, deferring it with the hope that it won't be a crisis until it's someone else's problem in exchange for short-term metrics can just as easily be management resume padding in enterprises.)

              • JamesBarney 1823 days ago
                But how much less work is it to do it up front, and then to keep maintaining and testing a multi-provider platform?

                I don't think it's 100x or even 20x less work.

      • musingsole 1823 days ago
        Vendor lock-in at the level of Google, Microsoft or Amazon is not the same as becoming overly reliant on a single small factory down the road providing a critical sensor.
        • regularfry 1823 days ago
          No, it's worse. You're more likely to find investment to buy the small factory if you have to than to buy Google.
      • organsnyder 1823 days ago
        That's a cost increase of $23,700 per year. Certainly not trivial (especially for a startup), but that's 1/5 of the cost of a junior engineer in a low cost-of-living market.
        • rblatz 1823 days ago
          In low cost of living markets jr engineers are making much less than $118,500 in fact they make about half that in median markets.
          • JamesBarney 1823 days ago
            But the all-in cost in terms of administrators, managers, healthcare, mentoring and employment taxes raises that quite a bit, probably closer to the 120k number than the 60k number.
      • scarface74 1823 days ago
        Honestly, any business that bases its infrastructure on Google deserves what it gets. When have you ever read ridiculous price hikes by AWS or Azure?
      • rorykoehler 1823 days ago
        Has more to do with Google than vendor lock-in. They're always up to this kind of backdoor shadiness.
      • JOnAgain 1823 days ago
        If $25/month breaks your startup ....
        • robrtsql 1823 days ago
          I think it's the other $1975 that is breaking their startup.

          Depending on your perspective, that might also be a small amount, but let's not misrepresent the situation.

          • maxxxxx 1823 days ago
            The question is how much you can shave off the $1,975 by going to another vendor.
            • acct1771 1823 days ago
              The question should be how much you can save by not tying an anchor you don't control the weight of around your neck in the first place.

              The anchor in the metaphor is using a large tech conglomerate without being able to quickly pivot away from it at a moment's notice.

              • spookthesunset 1823 days ago
                > The question should be how much you can save by not tying an anchor you don't control the weight of around your neck in the first place.

                How much did it cost to engineer your product to not "tie an anchor"?

                My guess is, you didn't "save" anything trying to build out several layers of abstraction to deal with a hypothetical risk of getting screwed by "large tech conglomerates".

                • acct1771 1822 days ago
                  Send me an email, subject "Does your product have technical debt?" in 5 years.

                  Spoiler: Answer will be "no".

        • pault 1823 days ago
          Did you even read the comment? A change in Firebase pricing increased the cost by nearly two orders of magnitude. I don't know how it affected the upper tiers, but if you imagine a larger startup with much higher traffic, an unexpected price increase like that could break the company.
    • spookthesunset 1823 days ago
      The funny thing about "vendor lock in" that some engineers never consider is that by avoiding "vendor lock in", you run a very real risk of getting locked into your own pile of crap instead. I've seen so many examples of homebrew database abstraction layers, homebrew javascript frameworks, homebrew reporting systems, homebrew cloud stacks (if you are lucky... there is also the rack-and-stack crowd), homebrew build systems, etc. All of them have just as high, or higher, switching costs than any paid vendor. All of them are unsupported, largely crappy rip-offs of what you could buy.

      I've seen companies get far more burned by these homebrew monsters than I've ever seen them get burned by "vendor lock in".

      • debt 1823 days ago
        Furthermore, in my experience the usually poorly documented homebrew stuff is almost always worse than whatever crappy legacy but well-documented vendor they're locked into.

        If I see some custom framework it’s usually an instant facepalm.

    • dexen 1823 days ago
      To push back a bit, without going into the abstract of portability:

      - performance is a feature

      - reliability is a feature

      - security is a feature

      - and lastly, cost-effectiveness is a feature

      If the app is to be used across a wide range of regions, you want geographically close datacenters for low latencies. If the app is to be used for mission-critical purposes, and contains sensitive data, you want high assurance and privacy[0]. And lastly, the cheaper the app is to run and maintain - and that includes not just the infrastructure bill, but also the availability of engineers who know the platform well[1] - the better you can develop the same app within the given budget & time.

      [0] there are some legal compliance requirements for handling medical data

      [1] one of the reasons Windows apps are rather cheap and plentiful - there's wide availability of engineers who know the platform well. Likewise for certain popular web stacks.

      • jimmychangas 1823 days ago
        Better yet, the things you listed are qualities. They encompass every feature in the product. I think the parent is generalizing too much with the assumption that only mature companies should spend time on optimization. Early-stage startups can sometimes make mistakes, and optimizing early on can help them save the precious resources needed to survive another six months.
        • dexen 1823 days ago
          >They encompass every feature in the product

          That's not a given. It's plenty common to have the most often used features (say, sub-pages, or key processes) optimized the best, with the rarely used ones lingering at low optimization. User behavior is non-linear; small changes around pivotal values can yield great results. Over-optimizing on the other hand yields diminishing returns. Making even just your home page snappy can greatly improve conversions (or whatever is your success metric).

          Other than that, you are right that a bit of optimization here or there can save a lot of costs. I distinctly remember a certain popular phpBB forum that, while quickly growing its userbase, somehow managed to keep improving performance on rather measly hardware - and without much horizontal scaling! - getting genuine praise from its users in return, while also keeping cost & complexity in check.

          • jimmychangas 1823 days ago
            You are right, thank you for your reply. The "user behavior is non-linear" and "optimization yields diminishing returns" points are powerful insights.
      • JamesBarney 1823 days ago
        But I feel like it's way easier to hire for Azure, AWS, or Google alone than for all three, or for an in-house cross-provider solution.
    • gambler 1823 days ago
      >Vendor lock-in is something that engineers care about because they like playing with tech.

      Bad engineers care about vendor lock-in, because they want to pad their resumes with abstraction tools.

      Good engineers care about vendor lock-in, because all cloud providers have limits you do not control. Your architecture can become insanely expensive or simply stop working if something you do grows beyond one of those limits. In those cases you can either re-architect the whole thing or simply switch vendors.

      This is one of the hidden costs of "cloud" platforms everyone suffers from but no one talks about.

      • scarface74 1823 days ago
        If we grow so large that we hit the limits of what AWS can handle, I think we will have enough investor money flowing in to be able to afford rearchitecting.
        • kensey 1823 days ago
          You can easily grow so large that you hit a limit of what a given cloud provider will allow (e.g. API calls or IOPS) at a price that fits into your budget for a task, even if you're not actually especially large. (AWS will give you all the IOPS you can ever use... as long as you have the money to pay.) In that case, being able to pivot to another cloud provider quickly can save your burn rate.

          (Flashback to my ITIL instructor in 2008 talking about Service Catalogs and how they are meant to include even discontinued products. "Will Microsoft support Windows 3.1 for you today? YES! Bring money.")

          • scarface74 1823 days ago
            And you think you can host on prem and support your use cases without any budgetary constraints?
            • regularfry 1823 days ago
              Is "on prem" the same as "another cloud provider"? Or is it irrelevant to the discussion?
              • scarface74 1823 days ago
                If you need resources, either you’re going to have to pay a cloud provider or run them in your own data center or at a colo.
        • regularfry 1823 days ago
          You might not be able to afford the time.
          • scarface74 1823 days ago
            So what are the chances that your business is going to grow so large that AWS or Azure can't handle the requirements? Is your business likely to grow large enough to need more from AWS than Netflix or even Apple does?
            • regularfry 1823 days ago
              My business? Close to zero. But:

              > If we grow so large that we hit the limits of what AWS can handle, I think we will have enough investor money flowing in to be able to afford rearchitecting.

              Apparently you think you're in danger of that happening, so what makes you think you can buy time with money?

              • scarface74 1823 days ago
                I don’t think I will. I’m not the one posting about the hypothetical boogeymen that a business I work for is going to grow so large that a cloud provider can’t handle it. As if somehow that same business could build out a data center that could scale better than AWS or Azure.
                • regularfry 1822 days ago
                  > I’m not the one posting about the hypothetical boogeymen that a business I work for is going to grow so large that a cloud provider can’t handle it

                  That's literally what you said in the post I quoted.

                  > As if somehow that same business could build out a data center that could scale better than AWS or Azure.

                  You're the only one straw-manning this with on-prem. The whole context of the discussion is around being able to move between cloud providers. Why are you bringing a point into the discussion nobody else thinks relevant?

                  • scarface74 1822 days ago
                    This is the original post I was responding to:

                    > You can easily grow so large that you hit a limit of what a given cloud provider will allow (e.g. API calls or IOPS) at a price that fits into your budget for a task, even if you're not actually especially large.

                    My response was: what are the chances that a company will grow so large that it outgrows the resources of a cloud provider?

                    And there are really only three cloud providers worth discussing - AWS, Azure and Google. Besides Google (and if you based your business around them, you get what you deserve), what cloud provider has raised prices on a whim? Which provider’s costs are so out of line with the others that it is worth the switching costs? Besides, if you’re that large, you’re paying negotiated prices, not published prices.

                    In fact one post was specifically about building out specialized infrastructure at a colo.

                    https://news.ycombinator.com/item?id=19731913

                    If you’re so large and you are going through the costs and risks to move off of a cloud provider, why put yourself in the same position again by migrating to another provider?

            • gambler 1823 days ago
              > So what are the chances that your business is going to grow so large that AWS or Azure can't handle the requirements?

              You naively assume that hosting something at AWS or Azure means all of their resources are at your disposal and subject to reasonable pricing. That is not the case. Cloud services are not designed for you. They are designed to make money for whoever actually owns them. They have their own architecture, assumptions and target use cases.

              In reality, the chances that a large company will eventually hit some limitation of AWS or Azure are nearly 100%. The only question is whether it will be something that can be worked around.

              One problem with "the cloud" is that a lot of the limits are non-obvious, and if your use case significantly deviates from whatever the provider had in mind, there might not be an incremental solution.

              Another problem is that linear resource pricing is not always sustainable if you're experiencing exponential growth.

              • scarface74 1823 days ago
                > You naively assume that hosting something at AWS or Azure means all of their resources are at your disposal and subject to reasonable pricing

                So if I’m “naively” assuming that you should be able to cite a real world case where a company needed more resources than AWS or Azure is willing to provide.

                “Reasonable pricing” for any supplier that a company uses is a price where the company can pay their suppliers, do something to add value, and make a profit.

                > Cloud services are not designed for you. They are designed to make money for whoever actually owns them. They have their own architecture, assumptions and use cases.

                Guess what? All of the suppliers you use for your business are in business to make money. And if your use case doesn’t fit within their managed services offerings, your escape hatch is to bring up a VM and install whatever you need on it.

                > In reality, the chances that a large company will eventually hit some limitation of AWS or Azure are nearly 100%

                Where are the case studies?

                > One problem with "the cloud" is that a lot of the limits are non-obvious, and if your use case significantly deviates from whatever the provider had in mind, there might not be an incremental solution.

                Every time I log on to AWS and go to the support page, I see our account-based limits. The non-account-specific limits are publicized. Which specific limits are you referring to?

                As far as working around unknown limits of off-the-shelf managed services - that’s why companies hire programmers.

                > Another problem is that linear resource pricing is not always sustainable if you're experiencing exponential growth.

                And if your marginal revenue is not exceeding your marginal costs, that says more about your business model than anything else.

                • gambler 1823 days ago
                  >So if I’m “naively” assuming that you should be able to cite a real world case where a company needed more resources than AWS or Azure is willing to provide.

                  https://firstround.com/review/the-three-infrastructure-mista...

                  • scarface74 1823 days ago
                    Well. I can tell you that the article is demonstrably wrong.

                    > It makes it easy to do things like user identity. Authentication. Queueing. Email. Notifications. Seamless databases. These are all lightweight services that can save you a lot of time, but only if you’re using AWS. The magic (for Amazon) is that they deter people from migrating despite mounting costs for storage and bandwidth.

                    All of those lightweight services have publicly accessible endpoints that can be used in conjunction with self managed infrastructure either over the public internet, a VPN or a direct connect.

                    But the contention of yours that I quoted was that AWS/Azure “wasn’t willing to provide it”. Yet you posted an article where the cost of providing it simply wasn’t conducive to that company’s business model.

                    Then I could always post about a little streaming service you might have heard of called Netflix, which has all of its infrastructure besides its caching servers (which are colocated at ISPs) on AWS, and the reason they decided to move to AWS.

                    https://www.se-radio.net/2014/12/episode-216-adrian-cockcrof...

                    But really what does it say about a company that is growing fast unprofitably? Were they profitable when they were spending $20K a month?

    • schnable 1823 days ago
      I agree somewhat, but avoiding vendor lock-in is also something often pushed by engineering and business leadership, due to previous experiences of being badly burned by lock-in.

      For FaaS in particular, if designed and factored correctly, I think the cost to move clouds can be minimized without building multi-cloud capability up front.

    • JMTQp8lwXL 1823 days ago
      It can become a real problem if you pick a niche PaaS that, say, gets bought out and closes up shop real quick. Probably wouldn't happen if you picked AWS, but I wouldn't consider this "fake work" in all contexts.

      You have to evaluate the probability of a risk occurring and its impact on your business. Small-time service providers get bought out and close shop left and right.

    • rorykoehler 1823 days ago
      Drove me crazy in the past when I got pressured by the CFO to look into cutting infra costs that weren't even that high, while the company was facing much greater technical challenges (which were prerequisites to even thinking about optimising infra costs). Always wanting me to kick the tyres with this provider or that provider. An unbelievable waste of time and resources to try to save a few grand here and there.
    • closeparen 1823 days ago
      Vendor lock-in is something the business starts to care about very deeply once the vendor is in a position to exploit it with pricing.

      Think of it like a backup, only instead of a hard disk failure you are insuring against a vendor’s market power.

    • cryptonector 1822 days ago
      Managing cost is work, real work.
    • slothario 1823 days ago
      It's fake work until your web application reaches a point where server loads take your web API down, and endpoints frequently take well over 10 seconds to respond.

      Yes, I worked at a place where both of those things were true.

      Switching some services out to Azure Functions saved my company like $10k/mo, too.

      • dexen 1823 days ago
        This is a fine point. And yet both FB and Twitter managed to go through long periods of servers being overloaded and fail-prone, to the point of frequent outages - cue the Fail Whale[1] - relatively unscathed, and emerge victorious.

        In the end it's about taking calculated risks, and concentrating on where your effort and capital are most effective.

        Virtually nobody thinks of the Fail Whale as a real risk nowadays.

        [1] https://pbs.twimg.com/media/DorDnbAU4AARTc3.jpg

        • lugg 1823 days ago
          Survivorship bias.

          I'd also posit that there isn't much competition at the top.

  • jwiley 1823 days ago
    I like the idea of multi-cloud very much, kudos to the author for pushing for it.

    One crucial issue that isn't addressed by a hexagonal architecture, however, is data egress charges.

    Data egress is the gravity of the cloud services world, and it's an insurmountable barrier for many services. The moment you start planning a multi-cloud deploy and looking at shifting services that are tightly coupled, or that require data replication (databases, logs, storage) between clouds, you go from low- or no-cost data charges to paying egress on -both sides- of every transaction.

    I know first-hand that these costs can quickly outweigh any disaster recovery / eggs-in-one-basket argument you might put forward. It even dominates a "this cloud provider will probably be our competitor in 2 years, or is our competitor right now" argument, and my guess is data egress has strongly contributed to companies like Netflix continuing to use AWS.

    • regularfry 1823 days ago
      Once you're big enough you can start talking about having a direct interconnect to the cloud provider, which helps.
  • giancarlostoro 1823 days ago
    I find it funny because Microsoft's serverless infrastructure source code is on GitHub. It may not be as pretty as the real Azure service, but they let you take the damn thing with you. Not sure about other cloud providers. How the tables have changed. Hell, I convinced my old boss to selectively use Microsoft's Azure Functions because they were open source and we needed a slightly longer-running process that wasn't part of our typical web service.
    • imglorp 1823 days ago
      I agree - they've done a bang-up job pivoting the company towards their cloud services.

      That said, they spent so many decades driving desktop OS sales that plenty of ruts and lane barriers remain. Those things were designed for lock-in in terms of APIs and file formats, with no interest in portability. Now they're eroding, but still enough to be a major irritant. E.g., good luck using Teams to have an actual meeting cross-platform. E.g., dotnet has made progress, but portability is still painful.

    • kylek 1823 days ago
      I didn't realize this was a thing and had to look it up.[0][1] This seems pretty great if you were considering Lambda but had concerns over portability.

      [0] https://github.com/Azure/azure-functions-core-tools

      [1] https://medium.com/@raduvunvulea/how-to-run-azure-functions-...

    • skohan 1823 days ago
      But how practical is it for anyone to really spin up cloud infrastructure based on those github repos? I'm skeptical that it's much more than a PR move.
      • giancarlostoro 1823 days ago
        Based on the GitHub issues for Azure Functions, I want to say people are using it. The way Azure Functions is implemented, you literally just need to be able to run on IIS - and that's IF you even need IIS; I've not investigated since I initially implemented our serverless functions two years back. Azure Functions is an extension of / implemented on top of WebJobs, iirc, which is pretty much usable by anybody with IIS or their own Windows Server.

        Also, the build system for Azure is open source, and I'm pretty confident it works. They share the core of the product, which gets the Azure UI slapped on top of it - that's the part you won't see, but you can spot the same UI from the GitHub project within Azure if you really dig in. I've seen the "crappy UI" after looking further into Azure.

    • quantguy11959 1823 days ago
      How is Azure Functions still not lock-in? You can’t do anything with what they’ve open sourced.
  • hota_mazi 1823 days ago
    Vendors have already come up with a way to fight the vendor lock-in dodging that this article recommends: data egress charges.

    Whenever you try to take your data out of one cloud into another, you're going to be charged heavily, and these costs will likely exceed the costs of accepting vendor lock-in and keeping your data housed in one location.

    The only realistic way to fight vendor lock-in is to keep your code as isolated from proprietary APIs as possible (e.g. deploy containers, or use interfaces to isolate proprietary calls in your app).

    • scarface74 1823 days ago
      And while you’re adding the extra complexity just to avoid lock-in, you’re not adding features that can acquire customers or get your existing customers to pay more.

      No, your CTO is no more going to uproot your entire infrastructure from AWS/Azure because you promised that it will be “seamless” than they are going to replace their six figure Oracle installation with Postgres because you used the repository pattern.

  • nailer 1823 days ago
    > This leads us to my favorite architecture for serverless apps: hexagonal architecture, alternatively called ports and adapters. As its creator, Alistair Cockburn, explains, the hexagonal architecture allows an application to equally be driven by users, programs, automated test or batch scripts, and to be developed and tested in isolation from its eventual run-time devices and databases.

    https://vacationtracker.io/wp-content/uploads/2019/04/hexago...

    This seems like a new buzzword for an existing (but still very good) thing, abstraction layers.
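
    In code it's nothing more exotic than an interface the core depends on, plus swappable implementations - a rough Python sketch (names are mine, not from the article):

        from typing import Protocol

        class VacationStore(Protocol):      # the "port"
            def days_taken(self, user_id: str) -> int: ...

        def days_remaining(store: VacationStore, user_id: str, allowance: int = 25) -> int:
            # core logic depends only on the port, so it can be developed and
            # tested with no cloud account, queue, or database in sight
            return allowance - store.days_taken(user_id)

        class InMemoryStore:                # a test "adapter"; a DynamoDB or
            def __init__(self, data):       # Postgres adapter would mirror it
                self._data = data
            def days_taken(self, user_id):
                return self._data.get(user_id, 0)

        assert days_remaining(InMemoryStore({"alice": 5}), "alice") == 20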

    The original article (http://alistair.cockburn.us/Hexagonal+architecture) is down though, so it's entirely possible it was written in the late 90s, when these practices started becoming widespread, particularly with Java and the Gang of Four (https://en.wikipedia.org/wiki/Design_Patterns).

    Edit: dzone says 'Hexagonal Architecture' was from 2005. https://dzone.com/articles/hexagonal-architecture-what-is-it... Make of that what you will.

    • cgarvis 1823 days ago
      His page has been down for years, unfortunately. It felt like it was starting to gain some traction in the ruby community years ago. Uncle Bob talks about it in his Lost Years talk; I think that turned into Clean Architecture. Gary Bernhardt refers to it as "functional core, imperative shell".
    • adzicg 1823 days ago
      Hexagonal arch is a very old name, another popular name is “ports and adapters”. Alistair’s article on the C2 Wiki suggests 2005 as the origin of the name (http://wiki.c2.com/?PortsAndAdaptersArchitecture), but that the pattern was identified in the 90s.

      btw, wayback machine has a copy from 2009 of Alistair’s page: https://web.archive.org/web/20090122225311/http://alistair.c...

      • nailer 1823 days ago
        Yep, that's mentioned in the comment you're replying to. 2005 is still a long time after the Adapter pattern became popular https://en.wikipedia.org/wiki/Adapter_pattern.

        I definitely do think one should avoid service provider lock-in by using abstraction, I'm just not going to use the awkward-sounding name coined by someone who doesn't seem to be adding much to the concept.

  • alexkavon 1823 days ago
    IMO, deciding to go “serverless” or not usually ends up with about the same amount of work as far as writing code and configuration goes.

    Things like “serverless” or Firebase data stores or even HTML hybrid app frameworks, for example, are designed more for simple proof-of-concept apps. As soon as you begin any sort of serious pipeline development, deep configuration for special cases, scaling, etc., you’ll find these easy-to-use services and ideas limit your control overall. Vendor lock-in is just a byproduct of “configuration-free” systems, which are themselves a side effect of people not being willing to rtfm.

    • ryanmarsh 1823 days ago
      Serverless guy here. You’re on the right track. I’d just add that it’s more about ops burden than anything else.

      iRobot is a $3bn market cap company with 23 million robots in the wild. Their entire IT estate for managing those 23 million robots, including all the communication and backend systems, is $15k a month. They can see their costs down to the function - they can see where every cent is spent - and they do this with 10 engineers, 8 of whom are in development and 2 in ops.

      shrug

      • alexkavon 1823 days ago
        Until a rearchitecture is needed. Then you’ll either need all your developers in ops or vice versa.

        (Assuming your response implies iRobot uses serverless)

    • nilkn 1823 days ago
      I view these serverless cloud offerings as more of a business/staffing decision than an engineering one. It may not be less work overall, but it tends to outsource traditional ops work and replace it with work that ordinary developers can do. Instead of having an extensive in-house ops staff to support your development team, you can have a much smaller ops staff and grow your development team more quickly.
  • yani 1823 days ago
    Is vendor lock-in really an issue that needs to be solved? I am wondering if anyone has ever had to switch vendors in practice. It sounds like over-optimizing for something that might not even be in the requirements. I remember a few years ago when infrastructures were designed to handle 1m+ requests per second long before market validation. Keep it simple and focus on getting your app to market.
    • linuxftw 1823 days ago
      > Is vendor lock-in really an issue that needs to be solved?

      I don't think so, not at this point. Workloads are more portable today than they ever were, as long as you're not using XaaS from your cloud provider.

      Compute has become entirely commoditized. I think in the next 10 years or so, with lower power/instruction and ubiquitous 1G internet links, it will become cheap enough to start running workloads in-house again.

  • janpot 1823 days ago
    For me, my main problem with "vendor lock-in" is not the switching cost. I'm not planning to switch vendor any time soon. For me it's more about the black-box nature of their solution. It's usually proprietary, closed-source solutions that put you at their complete mercy when things go wrong. It's not debuggable, I can't run it on my laptop, I can't look inside how it works, I can't make changes to it, I can't run my own, slightly modified version of it, etc...
    • skohan 1823 days ago
      Yeah, I am currently working on a project which leans pretty heavily on AWS components, and it can be very time-consuming when something doesn't work the way it should. The paid support isn't that great either: a lot of the time you end up chatting with someone who provides the same links you just googled, which didn't solve your problem, and more often than not the end result is a ticket being created somewhere, to be addressed in who-knows-how-long, if ever.
    • scarface74 1823 days ago
      Out of all the open source code you use, how much of it have you ever inspected and not treated as a “black box”?

      Most of AWS’s proprietary services have a way that you can simulate running them on your laptop.

      • janpot 1823 days ago
        Many.

        100% of the time I've used one of those local versions, I've run into incompatibilities with the real service.

        • scarface74 1823 days ago
          Which ones?
          • janpot 1822 days ago
            Not sure what you mean here, but OS projects that come to mind that we haven't treated as a black box at some point are kubernetes, docker, node, chromium, rabbitmq, postgres, and every JavaScript module we use. Two local services that come to mind that we have used, and that didn't work like the AWS equivalent, are dynalite and fake-s3.
            • scarface74 1822 days ago
              I mean have you actually inspected the source code and/or made changes to suit your needs for infrastructure level resources where there is a managed equivalent?
  • village-idiot 1823 days ago
    I still just don’t understand why I want serverless, especially as my main request-serving mechanism. Every investigation I’ve done into it revealed a lot of issues around function management and latency that always seemed harder to deal with than just writing a server in <language> and deploying it on ECS or Fargate.
    • sgtcodfish 1823 days ago
      My experience with serverless (mostly AWS Lambda) is that I've found 3 major use cases where it's been a very successful choice:

      1. as a cron-style job (e.g. download a file every hour and put it in S3, or connect to a DB and do some smaller processing task)

      2. as a responder to (or processor of) cloud-based events (e.g. receiving from a stream, reacting to an instance shutdown notification or an alarm)

      3. as a backend for a small REST API (especially for heavily cacheable APIs)

      For all 3 cases, assuming the task isn't hugely inappropriate and you've got a bit of infrastructure-as-code lying around which can be repurposed, serverless has led to a massive time saving for me for several tasks, for very little money and with basically no maintenance effort required.
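
      For a flavour of case 1, the whole thing can be roughly this small (a Python sketch; the URL, bucket name and the hourly CloudWatch/EventBridge trigger are invented for illustration):

          import urllib.request

          import boto3

          s3 = boto3.client("s3")

          def handler(event, context):
              # runs once an hour off a schedule rule; nothing to patch,
              # monitor or pay for between invocations
              with urllib.request.urlopen("https://example.com/feed.csv") as resp:
                  body = resp.read()
              s3.put_object(Bucket="example-hourly-feeds", Key="feed.csv", Body=body)
              return {"bytes_stored": len(body)}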

      There's definitely a tendency towards smaller tasks, though. Ultimately serverless necessarily means giving up control of your infrastructure and removing a lot of customization or specialization options; that means that at a certain scale or level of complexity, it just isn't an appropriate choice either for cost or performance reasons - but that's fine, it doesn't have to solve all problems. It has its niche, and it's quite easy to go from a quick Lambda to a container-based or VM-based alternative.

      • yani 1823 days ago
        Downloading files is something I would not use serverless for, due to the timeout limit - 900s.
        • sgtcodfish 1823 days ago
          As others have said, if you're gonna be reaching that kind of timeout, the use case would come under this caveat I mentioned:

          > assuming the task isn't hugely inappropriate

          We have a couple of cases where reasonably small files (< 100MB but it'd work with larger) need to be downloaded from one place and placed in another, potentially with an ETag check to prevent redundant uploads/downloads. Lambda is perfect for that.

        • mcintyre1994 1823 days ago
          If you're downloading a file that takes anywhere near that long every hour then it's probably a bad choice for cost reasons too. Most files aren't big enough to take 15 minutes to download though.
        • bmm6o 1823 days ago
          If you're at the point where you're worried about the timeout, you should also be worried about the disk space available to your Lambda. The file should be stored in and served from S3.
          • scarface74 1823 days ago
            You can stream a file directly from a source to S3.
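
            For example, with boto3's upload_fileobj (a sketch; the URL and bucket are made up), the HTTP response is consumed as a file-like object and uploaded in chunks, so neither disk nor memory has to hold the whole file:

                import urllib.request

                import boto3

                def handler(event, context):
                    s3 = boto3.client("s3")
                    # the response object is file-like; upload_fileobj streams
                    # it to S3 in parts rather than buffering it anywhere
                    with urllib.request.urlopen("https://example.com/big-file.bin") as resp:
                        s3.upload_fileobj(resp, "example-bucket", "big-file.bin")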
      • slothario 1823 days ago
        I switched a company's web API with background tasks to a serverless architecture. In some ways it's kind of magical, because it means we just need a few web API boxes and a bunch of services which scale automatically.

        However, debugging is a major PITA. If I could go back and do it again, I would be less zealous and only move truly computationally-intensive services to Azure Functions. It turned out one major service I moved to functions could have simply been fixed to use the ORM correctly, and if I had done that it wouldn't have significantly increased the server load.

        Also, do not move a task where ordering matters to a function fed by a queue.

        Just some thoughts from a mid-level programmer who was given the keys to the kingdom to re-architect everything.

    • notoverthere 1823 days ago
      Serverless backends tend to work quite well when paired with a Single Page Application on the frontend – e.g. Vue.js or React. That way your frontend can be served from a static host – e.g. S3 or GitHub Pages – almost instantly. And so the perceived performance of your application isn't harmed (as much as you'd think) by the latency of your backend, since other aspects of the application's interface can load and continue to be responsive.
      • janfoeh 1823 days ago
        To me, that does not seem to describe anything specific to "serverless"?

        You've always been free to serve your static assets in any way you like, so I'm unclear as to how the way the backend is architected comes into play here.

        • james-mcelwain 1823 days ago
          I think they're suggesting that bad latency from a serverless architecture is mitigated by a SPA, since the application can render and appear functional while it's fetching data.

          Still, not sure how damage control for poor performance is a "pro" of serverless design.

        • jerf 1823 days ago
          Well, there's very little (if anything) that "serverless" can do that other techniques can't accomplish. It's about costs & benefits, not whether or not you can do something.

          Though I tend to agree I'm yet to hear a really compelling description of why I should move very much into it. Some of this may be because I tend to write in a style that makes it fairly easy to mix & match bundles of application functionality anyhow, so to me adding some tiny function to a running app isn't that big a deal. (I don't use Erlang directly, but Erlang is where I learned this from.) If you're in an environment where deploying a single new REST handler or some recurring service is much harder, though, I could see where it comes in handy for certain things.

      • village-idiot 1823 days ago
        I guess. I fail to see how this is better than sticking an API server onto Heroku or similar, especially given the engineering hours spent will easily dwarf any potential hosting cost differences.
      • abacadaba 1823 days ago
        Not sure if any of the serverless offerings are 100% there yet, but I have a hard time not seeing this as the future for most things.

        I just want to quickly deploy code in a friction- and maintenance-free manner with zero compromises on scalability, latency, flexibility or reliability - what's the problem?

  • fuball63 1823 days ago
    I created bigcgi.com largely in reaction to vendor lock-in. I use the CGI standard and open-source the whole platform. It allows serverless apps to run locally, with or without the platform, because any valid CGI binary/script (which communicates via stdin, env vars, and stdout) should run on the platform.
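
    For anyone who's forgotten how thin that contract is, a complete CGI "function" can be as small as this (Python sketch) and will run under any CGI-capable server:

        #!/usr/bin/env python3
        # the entire interface: the request arrives via environment variables
        # (plus stdin for POST bodies) and the response leaves via stdout
        import os
        import sys

        name = os.environ.get("QUERY_STRING") or "world"
        body = f"hello, {name}\n"
        sys.stdout.write("Content-Type: text/plain\r\n\r\n")
        sys.stdout.write(body)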
    • dexen 1823 days ago
      Please don't take the following as a personal slight.

      Not sure if it's a joke or a long-term project... because the descriptions, while perfectly true, feel a bit like snark and sneer. "CGI: We are the original pranksters", right as it may be, doesn't exactly read like an enterprise sales pitch.

      Other than that - and the single-tier, tight RAM provision effectively precluding any hosted language like PHP - I applaud the initiative.

      • fuball63 1823 days ago
        No offense taken. I see it as an exercise of tech minimalism and exploring the boundaries of how much "obsolete" tech can really achieve.

        I'm not really interested in enterprise customers (too much pressure and liability). I envision it for students, side projects, and small operations.

        It is also currently in invite-only beta while I continue my experiments with the platform. I have been meaning to work on the homepage; I do not want it to seem snarky.

        Thanks for checking out the site, I appreciate the feedback!

  • eeZah7Ux 1823 days ago
    "fighting vendor lock-in" by surrendering your data to one multinational out of 4? Oh the sad, sad irony.
  • newaccoutnas 1823 days ago
    I was hoping to see the Serverless framework, and possibly the OpenFaaS plugin that's in incubation - https://github.com/openfaas-incubator/serverless-openfaas

    Big bad vendor lock-in is fine in some cases - perhaps for smaller shops who don't need multi-cloud or the time investment - but if it's greenfield, then perhaps the above is worth considering (if not now, then when it matures).

    • nailer 1823 days ago
      Architect Serverless (https://arc.codes) is currently AWS-only, but it also avoids adding anything that's specific to AWS, so in the future your .arc file (which defines all your API gateways, lambdas, queues, etc.) will work on Azure etc.
  • quantguy11959 1823 days ago
    There are a few vendors built on top of multiple cloud vendors; however, they themselves become the lock-in. Zeit and Joyent are good examples of this.
  • ryanmarsh 1823 days ago
    Vacation Tracker hasn’t raised much money. What a way to waste what little they’ve raised. I feel like someone should tell their investors.
  • leerob 1823 days ago
    To prevent vendor lock-in and still go serverless, you could look into using a service like Now. Under the hood, it switches between AWS, GCP, and Azure based on whichever edge is closest.

    https://zeit.co/now

    • penagwin 1823 days ago
      Does Now support self-hosting? If not then you just vendor locked yourself to Zeit :P
  • chapium 1823 days ago
    In summary:

    If you depend on cloud services, a good idea would be to write tests when planning the system so you can easily switch vendors later if the business relationship sours. Poster thinks "hexagonal architecture" is flexible enough to do this.

  • t0astbread 1822 days ago
    What if your main concern isn't switching cost but rather giving your users the liberty to run their own instance of the service on a provider of their choice?
  • js4ever 1823 days ago
    Website is down, not serverless I guess :)
    • slobodan_ 1823 days ago
      It's up again. It's WP, and not serverless unfortunately haha