
  • mxstbr 1038 days ago
    Hey HN,

    When I built my last startup, Spectrum, we spent months building custom caching for our GraphQL API from scratch. It never worked well enough to alleviate our scaling troubles, as we could only cache data for unauthenticated users since we had no invalidation. (Since we open sourced it all before GitHub acquired us, you can even read through my terrible code[0].)

    When Tim told me he had built a prototype of a CDN specifically for caching GraphQL query results with proper invalidation, my first thought was: "Finally!" Not only had he made it possible to cache POST requests (which GraphQL requests usually are), he had made it possible to purge cached query results per specific GraphQL object. For example, when a user edits their name the API can call a purgeUser(id: $ID) mutation and any cached query result that contains that user's data is invalidated.
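The purging mechanism described above can be pictured as a cache where every stored query result is tagged with the objects it contains. A minimal sketch (illustrative names only, not GraphCDN's actual implementation):

```typescript
// Sketch of tag-based cache invalidation: each cached query result is
// stored together with tags like "User:5" for every object it contains.
type Tag = string;

class TaggedCache {
  private entries = new Map<string, { result: unknown; tags: Set<Tag> }>();

  // Store a query result together with the objects it contains.
  set(queryKey: string, result: unknown, tags: Tag[]): void {
    this.entries.set(queryKey, { result, tags: new Set(tags) });
  }

  get(queryKey: string): unknown | undefined {
    return this.entries.get(queryKey)?.result;
  }

  // Drop every cached query result that contains the tagged object.
  purge(tag: Tag): void {
    for (const [key, entry] of this.entries) {
      if (entry.tags.has(tag)) this.entries.delete(key);
    }
  }
}
```

Here `purge("User:5")` plays the role of the `purgeUser(id: $ID)` mutation: every cached result tagged with that user is invalidated, while unrelated results survive.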

    GraphCDN is based on Fastly Compute@Edge under the hood, which is really the main reason we were able to spin this up so quickly. Huge shoutout to the folks building that!

    We'll be around all day to answer any questions you have about GraphCDN — ask us anything!

    [0]: https://github.com/withspectrum/spectrum/blob/alpha/api/apol...

    • latch 1037 days ago
      > We spent months building custom caching for our GraphQL API from scratch

      Could you not abandon GraphQL? Returning non-customized responses, while taking more bandwidth, is much more cache friendly for the client, proxies and particularly servers - where you can do simple yet powerful things (e.g. version-based invalidation and pre-generated payloads).

      Like, I went to https://spectrum.chat/explore and clicked on "Tech". It uses GraphQL to load the communities.

      Why not just hit /v1/communities?category=tech

      which would loosely translate into one of:

      -- if you want to serialize the object on each get

      select * from communities where tags @> array['tech']

      -- if you can look up the communities in a cache by id

      select id from communities where tags @> array['tech']

      -- if you have a high read / write and can just pre-serialize the payload and then glue together the response

      select summary from communities where tags @> array['tech']

      • timsuchanek 1037 days ago
        I think it depends on what you want as your cache invalidation policy. Simple TTL-based cache invalidation wouldn't necessarily need GraphCDN. However, no matter whether it's REST, GraphQL or any other protocol, making sure content is properly invalidated on the edge when it changes is not trivial.

        If I understand correctly, you're suggesting to cache on a data-layer level (below the app in the stack) instead of what we do - above the app in the stack - just in front of the client.

        That is also a valid approach. It needs more custom code in your application and has the disadvantage that your origin will still be hit on every request, while we fully cache on the edge - cached queries won't hit your server anymore, which takes load off the origin and gives the minimum latency possible.

        • latch 1037 days ago
          I'm suggesting that you can cache both at the data layer and "above the app". I was just highlighting one particularly powerful pattern available if you're having trouble scaling (like my parent said).

          > making sure, that when content changed, to invalidate it properly on the edge is not trivial

          The trivial way to do this is simply not to invalidate. Include a version in the cache key. When content changes, the version increases, and clients get new versions. (I think this is sometimes called lazy cache invalidation). It works well with LRU caches. It's still a hit for the top-level keys, but you avoid all the heavy data load and rendering.

          Granted, that doesn't solve every case, but I'm not sure it's fair to say that it'll take more custom code when the initial approach took "months building custom caching" that never really worked. Also, I think you'll find your origin under less load due to having fewer variants.
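The versioning idea above can be sketched like this (illustrative, with an in-memory map standing in for a real LRU cache):

```typescript
// Lazy ("version-based") cache invalidation: the version is part of the
// cache key, so a write never deletes anything - it just makes readers
// look up a new key, and stale entries age out of the cache naturally.
const versions = new Map<string, number>(); // entity -> current version
const cache = new Map<string, string>();    // versioned key -> payload

function keyFor(entity: string): string {
  return `${entity}:v${versions.get(entity) ?? 0}`;
}

function read(entity: string, render: () => string): string {
  const key = keyFor(entity);
  let payload = cache.get(key);
  if (payload === undefined) {
    payload = render(); // heavy data load / rendering only on a miss
    cache.set(key, payload);
  }
  return payload;
}

function write(entity: string): void {
  // Bump the version instead of purging; old keys are never read again.
  versions.set(entity, (versions.get(entity) ?? 0) + 1);
}
```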

          • timsuchanek 1037 days ago
            That makes sense! I like the versioning approach - it of course requires apps to have a version for their entities, but if they do, that's indeed a great way to leverage it. And yes, that also sounds like a solution that wouldn't take months to build.
      • ako 1037 days ago
        Or use something like OData, which solves the same problems as GraphQL (returning complete and tailored trees of objects in one go) but uses regular GET operations, so it's still compatible with normal HTTP caching?
    • ivanvanderbyl 1037 days ago
      Max & Tim, thanks for building this. This is something I recently spent 2 weeks trying to solve, then more pressing issues came up so we threw more CPUs at the problem. So from that I can appreciate how complex this is to solve, and the UX is awesome, so cool that you can edit the cache keys in the UI.

      Do you/will you handle caching individual request layers? For example if I need part of my query to always be fresh, but some expensive part can be cached for hours, is this possible? And somewhat related, what about keying on operation variables?

      • mxstbr 1037 days ago
        Thank you for the nice words, glad to hear you like GraphCDN.

        We don't currently cache partial queries / request layers. For now, I would recommend splitting the query into two requests — one that loads the uncached data and one for the cached data.

        We're definitely thinking about this exact use case though, stay tuned!

        • graphql 1037 days ago
          Ah yes, the devil in the details.

          Partial query caching appears to be the unicorn we all would like to chase, capture, and ultimately study.

          GraphQL is powerful, But with great power, comes great responsibility, and it is too easy to dig your own grave if blindly jumping in with it.

          I wonder if we can learn a thing or two, or just flat out steal, some of the concepts used in flagship relational databases. For example, how Postgres has a query optimizer tucked away secretly under the hood that attempts to alter queries to be more efficient.

    • 0xy 1038 days ago
      This is a super interesting project and thanks for posting it.

      I'm just wondering why you decided on Fastly over Cloudflare Workers or AWS Lambda Edge for this?

      • timsuchanek 1037 days ago
        Good question! These are the main reasons:

        Fastly has much faster cache purging - it can purge any content globally in about 150ms. Purging is one of THE crucial parts that make GraphCDN work. With such fast purging, we can deliver read-after-write consistency. In our tests with Cloudflare, purging took a few seconds.

        Additionally, in our tests, Fastly was in general just a bit faster.

        While we're also JavaScript and V8 fans, our edge layer on Fastly is written in Rust - which is a pleasure to use, especially for a product like ours. Fastly runs it as WASM on the edge - they can load the worker in about 40 microseconds.

        Fastly Compute@Edge is still in limited availability, but if you get a chance, I highly recommend checking it out!

        • kentonv 1037 days ago
          > it can purge any content globally in about 150ms

          Hmm, how can that be? Light in a straight-line fiber optic cable would take 200ms to traverse a great circle around the world. Add some bends, relays, routers, not to mention servers, and it'll only get longer...

          Are you sure the content is really being purged globally in that time, and not just from your local point of presence?

          (Disclosure: I'm an engineer on Cloudflare Workers. I have no idea how Fastly does purges. Just trying to understand what you mean here...)

          • timsuchanek 1033 days ago
            What do you mean by great circle? The circumference of the earth is 40,075,000 m. The speed of light is 299,792,458 m/s. So we're talking about 40,075,000 / 299,792,458 = 0.1337 s -> 134ms to get around the earth once. As the earth is an ellipsoid that is mostly round, you can assume that from any point A to B you'll just need to travel half of that, so we're at 67ms at light speed.

            Looking at the 150ms now, which is about 2.3 times the 67ms, this kinda seems reasonable.

            And yes, this is global purging - I just reconfirmed it with someone from Fastly.

            • kentonv 1022 days ago
              That's the speed of light in a vacuum. The speed of light in fiber optic is slower.

              Travelling half of the distance is not good enough, because if you haven't received _confirmation_ of the purge, then you can't trust that it really happened. There could be network errors, etc.

              Sorry but it's not physically possible to do a global purge in 150ms.

        • the_mitsuhiko 1037 days ago
          > we can deliver Read-after-write consistency

          Based on the comment from Max below, that's not read-after-write consistency - it's eventually consistent, just ideally within ~150ms. What would be the consequences of bypassing the cache for read-after-write - does that break your model in any way? (I guess at the very least you would lose your statistics feature.)

          • timsuchanek 1037 days ago
            In all mutations you run through GraphCDN, we wait until the purging is done (as you said, ideally ~150ms). That means that by the time the mutation result comes back to the client, all other queries will get the new content.

            You can bypass the cache if you want to, but it's not necessary. It would just be like a cache MISS, so that would totally work.

            • the_mitsuhiko 1037 days ago
              I suppose that means writes slow down with the number of dependencies / dependent caches? How does one estimate the added cost of that write?
              • timsuchanek 1037 days ago
                Well, that depends on the purging characteristics of Fastly - this blog post is quite nice to read https://www.fastly.com/blog/building-fast-and-reliable-purgi...

                We have a limit - you can only "tag" up to 1k items within one query. If a query result contains more items than that, we can't purge all of them.

                Due to the nature of the broadcast protocol of Fastly's purging implementation, I assume that it scales quite well - they've had really big customers using this for a few years already.

                • the_mitsuhiko 1037 days ago
                  There is a big difference between using Fastly's cache purging for simple cache invalidation without consistency requirements and trying to do something like what you're building.

                  We're using Fastly ourselves and I was not aware until today (and in fact I'm taking your word for it, since I can't find it in the docs) that Fastly provides consistency guarantees for purges.

        • gnz00 1037 days ago
          Man, we had this exact use case and a similar solution designed but we weren't able to get access to Fastly's new edge compute.
          • mxstbr 1037 days ago
            Now you can just use GraphCDN and don't have to build it yourself! ;)
            • gnz00 1037 days ago
              This was at a fairly large enterprise that utilized Fastly, and has since switched to Akamai. We had many meetings with our AM and their architects to get access to their compute engine over multiple years but never could get them to open the doors. If I could migrate our GraphQL services to a new domain, I'd certainly try it out!
    • pbowyer 1037 days ago
      Good work!

      What was it about this problem that made you need Compute@Edge rather than the standard Fastly/Varnish VCL-backed caching? If you tried that route, how far were you able to get with Varnish before needing the new Compute@Edge product?

      • timsuchanek 1037 days ago
        Good question pbowyer! We actually do a bunch of GraphQL parsing / calculations in the Compute@Edge layer. So without that, we would not be able to provide our current features.

        We also support a feature we call "scopes" - we support both cookies and headers in the "Vary" header, so you can, for example, cache user-specific data only for the user with a specific "Authorization" header.

        In order to support scopes with cookies, we needed Compute@Edge and could not make that work with VCL on its own. But believe me - we tried. Our first version was actually mostly built on top of VCL.
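Conceptually, a "scope" folds the varying header into the cache key, so user-specific responses are cached, but only for the matching user. A hypothetical sketch (not GraphCDN's actual key format):

```typescript
// Illustrative: a "scoped" cache key varies per a chosen header
// (e.g. Authorization), so each user gets their own cached copy,
// while unscoped queries share one public key.
function scopedCacheKey(
  query: string,
  headers: Record<string, string>,
  varyHeader?: string,
): string {
  const value = varyHeader ? headers[varyHeader.toLowerCase()] ?? "" : "";
  return `${query}|${varyHeader ?? "public"}=${value}`;
}
```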

      • gnz00 1037 days ago
        It's been a while, but IIRC you can't inspect the body of a POST request using Varnish. Either that, or any real compute in Varnish gets super gnarly. I also think Fastly has a distributed K/V store for state that you can access if you're using their new compute.
        • timsuchanek 1037 days ago
          That is correct - you need to first transform the POST request into something Fastly can cache. They don't have a K/V store yet besides the edge dictionaries with a max 8kb value size, but I've heard something is coming...
          • gnz00 1037 days ago
            Ah cool, thanks for the update. I haven't looked at their stuff in over a year. Congrats on the launch, looks really slick.
    • agmontpetit 1037 days ago
      Fastly has trouble caching POST requests because Clustering [0] doesn't work with POST requests (AFAIK there is no workaround).

      Does Compute@Edge have this limitation?

      [0]: https://developer.fastly.com/learning/vcl/clustering/

      • timsuchanek 1037 days ago
        Under the hood, we're turning the POST requests into GET requests, so that limitation shouldn't affect us. As far as I know, the underlying C@E behaves like the one used in a VCL service.
        • hermanradtke 1037 days ago
          What happens when the query string gets too long?
          • timsuchanek 1037 days ago
            The question that had to be asked :D First of all, with the current query string size limit (8kb), you can already send quite big queries, which should suffice for many use cases. The introspection query - already quite huge, for reference - is 1kb. For those who need more, splitting the values into `Vary` headers is an option to increase the limit further. At some point - around 50kb - there is a hard limit. If you want to send an even bigger query than that, we have persisted queries on our roadmap: then you'd just send a hash.
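One plausible shape of such a POST-to-GET transformation (illustrative only - the actual encoding GraphCDN uses isn't specified here) is to move the query and variables into the URL, which is why query string size limits come into play:

```typescript
// Illustrative sketch: turn a GraphQL POST body into a cacheable GET URL.
// The whole query then lives in the query string, so its encoded length
// counts against the size limits discussed above.
function toGetUrl(endpoint: string, query: string, variables: object): string {
  const params = new URLSearchParams({
    query,
    variables: JSON.stringify(variables),
  });
  return `${endpoint}?${params.toString()}`;
}

const url = toGetUrl(
  "https://example.com/graphql",
  "query($id: ID!) { user(id: $id) { name } }",
  { id: "5" },
);
// url.length is what would be checked against the ~8kb / ~50kb limits.
```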
    • eurasiantiger 1037 days ago
      How does the invalidation work if data changes independently in the backend, i.e. without using mutations?
    • nivertech 1037 days ago
      How are GraphQL subscriptions handled?

      Are they supported?

      Do they count as a single or multiple requests for pricing purposes?

      • timsuchanek 1037 days ago
        As of now, GraphCDN only caches queries. Subscriptions are just passed through to the origin - a normal WS connection is established. One WS connection counts as one request.
    • cpursley 1037 days ago
      Really interesting. Can this work with Hasura?
      • timsuchanek 1037 days ago
        Yes, it does! We have more and more people who want a more powerful cache setup than what Hasura offers, or who are just self-hosting Hasura. Both self-hosted and Hasura Cloud work!
        • cpursley 1037 days ago
          Shut up and take my money! This will save me a bunch of time from my original plan of rolling a distributed Elixir proxy caching cluster to put in front of Hasura.

          I'd love to see a write-up/blog post for working specifically with Hasura and their auth system. It would probably be helpful for SEO purposes as well.

          • mxstbr 1037 days ago
            Absolutely, we'll work on that. Let us know if you have any questions or run into any trouble at support@graphcdn.io!
  • timsuchanek 1038 days ago
    Hey HN,

    It's a pleasure to share this announcement with you! In all the GraphQL projects I worked on, it was always a pain to get the caching and security right.

    Instead of all of you spending time building your own caching and security solutions, you can check out GraphCDN! It has powerful caching with invalidation in 150ms all around the planet, and powerful analytics showing you, on a per-query level, how fast your queries are.

    We're super grateful to be able to announce this today - ask us anything!

  • andrewingram 1037 days ago
    I asked Max in private, but I'll ask again here for visibility.

    How do you handle smart invalidation? Or more specifically, how do I trust that you're handling smart invalidation _correctly_? Looking at the site, it indicates that calling a mutation like `editUser(id: 5)` would presumably invalidate the User type record with ID 5.

    But how do you actually do this reliably? There's nothing in the spec that would indicate the argument ID maps to a record of a certain type with an id field of the same value. Max indicated that you make assumptions based on the return type of the mutation, e.g. editUser has a return type of User, therefore you can infer the relationship. This might be _generally_ true, but it's not 100% reliably true. Additionally, my mutations _never_ just return the naked entity type like this, there's always a wrapping payload type (philosophically, the mutation payload should contain pointers to all the parts of the graph that _may_ have changed as a result of the operation). Editing a User doesn't _just_ edit a User, the effects on the graph can propagate far and wide. Another point here is that my mutations are rarely just CRUD operations, but more CQRS in nature, they're built to support a specific system capability rather than allowing generic write operations.

    The problem of smart invalidation seems to have the exact same shape as smart store invalidation/updates after a mutation in the client. Even after all these years, Relay only does fairly superficial automatic updates, you nearly always have to use custom updaters (or client-side directives in some cases) to get the client-side store back in sync after anything but the most trivial mutations.

    • mxstbr 1037 days ago
      Great questions Andy!

      We do support wrapping payload types of any kind because we invalidate _all_ objects returned from a mutation. For example, if you run a mutation like `editUser { user { id posts { id } } }`, we will invalidate any cached query result that contains that user and any cached query result that contains any of those posts!
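That behaviour can be sketched as a recursive walk over the mutation result that collects a tag for every identifiable object (assuming, hypothetically, that `__typename` and `id` fields are present in the result, as GraphQL clients commonly request):

```typescript
// Illustrative sketch: collect every identifiable object in a mutation
// result, so each one's tag can then be purged from the cache.
function collectTags(node: unknown, tags: Set<string> = new Set()): Set<string> {
  if (Array.isArray(node)) {
    for (const item of node) collectTags(item, tags);
  } else if (node !== null && typeof node === "object") {
    const obj = node as Record<string, unknown>;
    if (typeof obj.__typename === "string" && obj.id !== undefined) {
      tags.add(`${obj.__typename}:${obj.id}`); // e.g. "User:5"
    }
    for (const value of Object.values(obj)) collectTags(value, tags);
  }
  return tags;
}
```

Run against a wrapped payload like `{ editUser: { user: { __typename: "User", id: 5, posts: [...] } } }`, this collects the user and every returned post, regardless of how deeply the payload wraps them.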

      You are right that smart invalidation can never be 100% reliable, which is why we have the Purging API to manually purge records you know changed from your backend. I think most customers are going to use the manual Purging API, however we also have a bunch of customers with use cases for whom the smart invalidation suffices.

      • jorams 1037 days ago
        I guess that means a mutation like `editUser { user { name } }` would not trigger smart invalidation, since there is no ID to work with?
        • mxstbr 1037 days ago
          That's correct, however we also allow defining custom "Key fields", which is how you can tell us which fields are unique and should be invalidated by (e.g. "User.email").
      • nivertech 1037 days ago
        Would UUIDs or Relay-style global IDs help to avoid manual purging here?
        • mxstbr 1037 days ago
          Any kind of unique ID works, whether UUIDs, global IDs, incremented IDs, etc.
    • habibur 1037 days ago
      Rather, the API client calls the invalidation methods on the server to invalidate the cache. The server doesn't do it all automatically.

      As far as I have understood.

      • timsuchanek 1037 days ago
        That sounds correct! These are the ways of purging we support today:

        1. Automatic purging through mutations (if we can detect the entities)
        2. A purging API (GraphQL) - you can e.g. call `purgeUser(id: 5)`
        3. The usual max-age + SWR based invalidation

        You can check out some more examples here: https://graphcdn.io/docs/cache-purging

  • doteka 1037 days ago
    Pardon me for asking a perhaps naive question, but how is this not a solution to a self-inflicted problem? Using a normal restful HTTP api has none of these issues. What is the big problem that GraphQL solves, and is it really serious enough to reinvent existing infrastructure for it?
    • timsuchanek 1037 days ago
      That is not a naive question but a very good one. Traditionally, many people assume that GraphQL itself is hard to cache while REST isn't, because REST can leverage HTTP's power while GraphQL, with its POST requests, can't.

      However, that is missing the point. What we need to talk about first is how you want to invalidate your cache. Do you want to set a TTL of 60 seconds? That might work for certain apps - both in REST and GraphQL - but many apps can't afford stale content for that long.

      You'll need cache invalidation when content changes. That on its own is a hard problem, no matter whether it's REST, GraphQL or any other protocol. And it is one of the main reasons we built GraphCDN: making it easy to purge the cache when relevant content has changed. How? We give you a purging API (also GraphQL), and additionally GraphQL has the concept of mutations. Once you run a mutation through GraphCDN, it'll detect the relevant entities involved and purge the cache accordingly.

      So - yes, on the surface caching GraphQL might seem harder - but we're not just solving the "I can't cache POST requests" problem; we give you powerful cache purging, which is only possible due to the well-defined structure of a typed GraphQL schema.

      Because of that, we're actually thinking of providing REST "connectors" one day - turning REST into GraphQL, so you can have one unified interface that is easy to cache and invalidate.

      • pier25 1037 days ago
        > You'll need cache invalidation when content changes

        Or you can use Vercel's stale-while-revalidate which will update the cache periodically while (temporarily) serving stale responses.

        • timsuchanek 1037 days ago
          Yes, which btw we also offer. However, it's just one way to invalidate. The first request after the max-age expires will still get stale content, even if new content is refetched within the SWR time frame - which is not acceptable in certain applications.

          Most of our customers even use both together to reduce the likelihood of stale content.
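The trade-off being discussed follows from standard stale-while-revalidate semantics (RFC 5861), which an edge cache applies roughly like this:

```typescript
// Standard stale-while-revalidate decision: within max-age the cached
// copy is fresh; within the SWR window the stale copy is served
// immediately while a background refetch runs (this is the request that
// can still see stale content); after that it's a full miss.
type Decision = "fresh" | "stale-while-revalidate" | "miss";

function decide(ageSeconds: number, maxAge: number, swr: number): Decision {
  if (ageSeconds <= maxAge) return "fresh";
  if (ageSeconds <= maxAge + swr) return "stale-while-revalidate";
  return "miss";
}
```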

    • philplckthun 1037 days ago
      There are solutions that simply turn GraphQL requests into (traditional) CDN-cacheable requests. Usually this is done using (Automatic) Persisted Queries, where a query - provided it's not a mutation - is not only sent as a GET request but is also referenced by a hash rather than the entire query string.

      This has a couple of limitations that you'd also expect from a CDN cache for REST requests. However, I believe the interesting part about GraphCDN is that it can do more to look at the exact queries and mutations that are run to invalidate queries more precisely.

      So, it's likely worth saying that it's not that CDN caching GraphQL is hard, but getting invalidation and a high cache hit rate (just as with REST APIs) is hard.

    • pm90 1037 days ago
      Having worked on a few backend teams, I'd say GraphQL solves the problem of decoupling the frontend from the backend. In theory, a well-designed REST API should solve this. In practice, frontend teams want all sorts of metadata, want to change the shape of the data they're fetching, etc. GraphQL lets them do this without waiting on the backend team to make changes to the API.

      The frontend teams may not want direct access to a backend team's database. But they do want the backend to be flexible, and GraphQL allows for that.

    • coffeedoughnuts 1037 days ago
      > Using a normal restful HTTP api has none of these issues

      I don't think a normal HTTP API provides anything to help with cache invalidation after a write operation, does it?

      • simplyinfinity 1037 days ago
        https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ET...

        ETags can be used to check and invalidate content.

        • timsuchanek 1037 days ago
          Yes, but as long as a cached version is still returned at the edge (while the real content might have changed), the ETag won't help. It can only save you from unnecessarily downloading content you already have - in case it didn't change.
          • simplyinfinity 1037 days ago
            And when the content is updated, you return a different ETag and the content is refreshed. Your system should have either a distributed cache, a way for the edge to detect cache invalidations, or a way for you to manually invalidate the cache on the edge.
            • timsuchanek 1037 days ago
              Exactly - that is what we provide, including ETag support btw.
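For reference, the standard ETag revalidation flow discussed in this sub-thread looks roughly like this (plain HTTP semantics, nothing vendor-specific):

```typescript
// Standard ETag revalidation: the client sends If-None-Match with its
// cached tag; the holder of the authoritative content answers 304 if
// unchanged, or 200 with the new body and a new tag otherwise. As noted
// above, this saves the download but not the round trip - and it only
// helps if the node you reach actually knows the current tag.
function revalidate(
  clientEtag: string | undefined,
  currentEtag: string,
  body: string,
) {
  if (clientEtag === currentEtag) {
    return { status: 304, body: "" };              // client's copy is fresh
  }
  return { status: 200, body, etag: currentEtag }; // new content + new tag
}
```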
      • doteka 1037 days ago
        As other commenters have mentioned, it in fact has functionality specifically for this purpose. That is part of the underlying reason for my question: it often feels like the people pushing for GraphQL adoption are not aware of what HTTP is capable of out of the box.
      • timsuchanek 1037 days ago
        +1
    • pier25 1037 days ago
      This is my impression of the whole GraphQL thing as well.

      It solves some problems on the querying side (that not everyone has). OTOH implementing it server-side is a major pain unless you rely on third party stuff like Hasura or GraphCDN.

      Personally I'll keep using REST as the default for the foreseeable future, and only use GraphQL when the problems it solves are more painful than the problems it introduces.

      • handrous 1037 days ago
        > It solves some problems on the querying side (that not everyone has). OTOH implementing it server-side is a major pain unless you rely on third party stuff like Hasura or GraphCDN.

        I'm not even sure "major pain" suffices to describe it. It's such a mine-field to implement that if it's tractable and non-insane to do so for one's project, then the surface area of one's API must have been so tiny that using GraphQL was entirely unwarranted in the first place.

        [EDIT] I take that back: it can be fairly easy if your dataset is 100% public, read-only, and you don't care whatsoever about performance or limiting abuse.

  • pistoriusp 1038 days ago
    This is such a great idea!

    There are a lot of plays in this space that try to move the database or serverless functions closer to the end users. But in all likelihood, if you're already building a single-page app, the static content is already on a CDN and close to your customers, so this gives you a very easy way to increase performance dramatically without having to modify your infrastructure.

    Awesome!

    • timsuchanek 1038 days ago
      Thanks Peter! Yes, we also think there is a lot of exciting stuff happening in the space. However, we built GraphCDN because we think it's a very practical approach _today_. I think it will take a couple of years until edge compute becomes mainstream - you need your whole database etc. on the edge, otherwise it doesn't make sense.

      So we're happy about this level of abstraction, because any app can use it _today_!

  • hcentelles 1037 days ago
    I've been waiting for a product like this for some time now; I think there is a huge (not yet served) market for it. I tried to implement something using Cloudflare Workers, but failed. I also tried to use Apollo Cloud through an Apollo Federation server in front of my (non-Apollo-Server) API - failed too.

    Some questions:

    How does it compare with Apollo Cloud in terms of feature set?

    My GraphQL server load is about 20 requests/s on average. At first the pricing looked a little intimidating, but running the numbers it comes out to about $500/m - is that right? Hopefully it will offset some of my origin server costs.

    What counts as a request? Just requests coming from the "outside", or also calls to purge, for example?

    I’ll be trying GraphCDN soon, maybe even today.

    Good luck

    • timsuchanek 1037 days ago
      Thanks @hcentelles, that's great to hear and gives us validation that there is a need!

      Compared to Apollo Cloud: we're mostly focused on the caching part right now and have a different architecture in terms of where we sit in your stack. Apollo runs a sidecar next to your application; we are a proxy in front of your API.

      When it comes to the analytics part - which Apollo calls metrics - I think Apollo gives you field-level information, while we for now just have query-level information. However, we are fully server-agnostic: you don't need to use Apollo Server, any GraphQL API works. You just need to switch the URL in your clients. We even have customers just using the analytics part for now and disabling the caching in the beginning.

      For the pricing: that is correct - you'd have about 50 million requests a month, so $500. However, the pricing is not set in stone and we're happy to give you an early discount. Just contact us at support@graphcdn.io.

      Right now only outside requests count as requests, whether cached or not. Purging calls might also count in the future.

    • 0xy 1037 days ago
      Apollo Cloud pricing is absurd at the enterprise level. I worked at two large companies who inquired and both balked.

      It was cheaper to build our own solution with plugins than it was to use their solution.

      • timsuchanek 1037 days ago
        Interesting to know. For enterprises we even lower the price per million requests, since the volume is much higher and the enterprise already pays enough in total.
  • kenrose 1037 days ago
    Two hard problems solved:

    1. Cache invalidation
    2. Decent project name

    Joking aside, this is great. Traditional CDNs + GraphQL always felt like an impedance mismatch.

    • timsuchanek 1037 days ago
      Thanks a lot kenrose! We promise not to dare tackle the 3rd big problem in computer science... 2 are enough for now. And indeed - current CDNs (as powerful and great as they are) are not at all equipped to deal with GraphQL.
  • sergiotapia 1038 days ago
    Do you have support for HIPAA/SOC2 companies? We have PII/PHI and would love to use this.
    • mxstbr 1038 days ago
      We don't currently have the certifications, but they are high up on our roadmap! Send me an email about this to max@graphcdn.io and I will ping you once we are there.
  • joshenders 1037 days ago
    Critical feedback: This seems like a product with an exceptionally shallow moat. What is stopping Cloudflare and Fastly from cloning and burying?
    • timsuchanek 1037 days ago
      At first sight, the moat indeed looks shallow. We have great contacts at Cloudflare and Fastly and have already exchanged thoughts with them about exactly this.

      One of them actually recently looked into building their own solution. However, they realized that in order to create something really valuable, you need at least a couple of months of dedicated engineering effort from people who really understand GraphQL.

      Our automatic and powerful cache invalidation - currently only possible with Fastly - plus a whole GraphQL-specific analytics solution and a security suite are not something you can quickly clone.

      Anyone can, of course, but you need a highly GraphQL-specific product with many workflows - like CLI workflows to upload your GraphQL schema - and it requires quite a bit of thinking and engineering to make that work.

      • joshenders 1036 days ago
        Let me start off by saying, I wish you guys the best and hope my cynicism isn’t taken the wrong way here but reading between the lines… my sense is that there aren’t enough $NET or $FSLY customers asking for this and both companies aren’t interested in staffing a team to dedicate to GraphQL as a result but ARE willing to let you and your specialized team prototype it on their behalf. Not a bad dovetail.

        IMO if you get traction, you would be wise to take an early acquisition offer from Fastly or Cf. I was an early double digits engineer at Cloudflare and I can tell you, you REALLY don’t want to build a CDN to try to compete. Love them to death but look at ImgIX as a case study of how not to bizdev this same business model.

        Also, don’t entertain the idea that you can compete with a DIY VM-based “Cloud CDN”, it’s really not comparable. Look at the many dead “mobile first” CDN startups of 2014-2018, as case studies.

        Lastly, if you guys create a caching query planner on the edge, you might have a head start on a completely unmatched (afaik) product: an intelligent GraphQL "GSLB". As a customer, if I can move my user data geographically closer to my users, that's a big win for many reasons.

        Best of luck!

  • 0xy 1037 days ago
    I don't see any mention of support for subscriptions on your website. Is that currently supported or is it on the roadmap?

    Also to seed an idea for you, it'd be great if you were somehow able to provide subscriptions dynamically based upon queries and mutations being performed.

    Acting as the middleman, you can see the freshest data and so therefore know when something updates. If I could hook that up to my existing GraphQL API and not have to worry about eventing and subscription services for every single object that would be a huge value add for me.

    • mxstbr 1037 days ago
      We support subscriptions, meaning we pass "Upgrade" requests through to your origin, so they will keep working exactly as before!

      "Automatic subscriptions" is definitely something we've considered offering, but isn't on the immediate roadmap for now as we want to focus on the "peace of mind" first before expanding from there.

  • tcmb 1038 days ago
    Sounds cool! I'd be interested in the caching and analytics part, but without the CDN, i.e. for an internal GraphQL server. Is there a way to do that?
    • mxstbr 1038 days ago
      We are based on Fastly Compute@Edge, so we can't currently offer an on-prem solution unfortunately.
  • nojvek 1037 days ago
    I've always felt that there should be a cloud-hosted object database that is queryable via GraphQL. It should offer auth and access rules like Firebase.

    Really don't want to maintain a server. I love Firebase for that reason, although querying it is a pain sometimes. Would love to make GraphQL queries so I can fetch multiple things in a single call.

    • unraveller 1031 days ago
      Deepr looks promising: https://github.com/deeprjs/deepr. You'd have to combine it with Fauna for auth/RBAC etc.

      The parallel "||" calls (push or pull) are great, and being able to switch off a call by adding "?" after the function name means all your calls during dev can live in a single text page and you just toggle them on/off at will.

  • benjamoon 1038 days ago
    Looks really good, well done! Just fyi, I noticed my phone (new iPhone) got really hot when looking at your site. I’ve had this before on sites and it’s usually a bug or a really tight loop somewhere that’s causing high cpu usage (maybe the animations). Sometimes it’s not high enough to notice on a laptop or full pc, but it’s enough to warm up a phone.
    • timsuchanek 1037 days ago
      Thanks a lot, benjamoon! It's probably the main animation on the landing page. We'll look into it; maybe we'll just disable the animation on mobile.
    • deergomoo 1037 days ago
      Safari has a long-standing bug where animated SVGs chew up CPU cycles like they're going out of fashion. Happens on desktop too.
  • blorenz 1037 days ago
    GraphCDN seemingly is an innovative service I'll be checking out, though I just wanted to comment that the app is a beautiful example of the use of Tailwind. I was a little taken aback because I was presuming styled-components but was delighted to see the well-executed use of one of my favorite libraries!
    • timsuchanek 1037 days ago
      Thanks blorenz! When working with Max, I was surprised how little he cared about which library we used to get the job done. In fact, he's also a big fan of Tailwind. [0]

      To be fair, we use the Tailwind system, but with a lot of customizations on top.

      [0] https://mxstbr.com/thoughts/tailwind/

  • nojvek 1037 days ago
    Q: Is the CDN cache consistent? I.e., if I delete or change an object, does that immediately reflect across all the POPs across the globe?

    If so, what read/write latencies can we expect per object?

  • brotzky 1037 days ago
    This looks really awesome. Congrats to the founders on shipping.
  • HaD_XIII 1037 days ago
    Can't wait to see which additional use cases you'll add for GraphQL users in the future for additional peace of mind!
  • the_mitsuhiko 1037 days ago
    How does it deal with not-yet-propagated purges, or are you always at risk of reading stale data for a short period?
    • mxstbr 1037 days ago
      We have plans to investigate adding global strong consistency in the future, but we/Fastly don't currently support that!

      The purge takes ~150ms in the same datacenter that the mutation passes through; how long it takes globally depends on Fastly's bimodal multicast system. They've written a fascinating article about it that I'd highly recommend reading: https://www.fastly.com/blog/building-fast-and-reliable-purgi...
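For context, the launch post's purgeUser(id: $ID) example translates into an ordinary GraphQL mutation sent as a POST body. A minimal sketch of building that payload (the purgeUser name follows the example in the launch post; the exact mutations generated from your schema, and the endpoint you send this to, are assumptions here):

```python
import json

def purge_user_payload(user_id: str) -> bytes:
    """Build a GraphQL POST body that purges one user's cached query results.

    Hypothetical sketch: after the origin updates a user, the API calls a
    purge mutation so every cached query result containing that user's data
    is invalidated.
    """
    payload = {
        "query": "mutation PurgeUser($id: ID!) { purgeUser(id: $id) }",
        "variables": {"id": user_id},
    }
    # The bytes returned here would be POSTed to the purging endpoint with
    # Content-Type: application/json.
    return json.dumps(payload).encode("utf-8")
```

Note that between the mutation completing and the purge propagating to all POPs, a distant POP can still serve the stale result, which is exactly the window the question above is about.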

  • michaelmior 1037 days ago
    @mxstbr Looks cool! Small typo

    > our 58 data centers worlwide

    *worldwide

    • timsuchanek 1037 days ago
      Thanks Michael, we just fixed it! In a few min it'll be deployed.