Ask HN: Do you use automated tools to create APIs, or do you code them manually?

Do you use some sort of framework/tool for creating the APIs needed for your product/service/application/etc., for example LoopBack (https://loopback.io/), or do you code them by hand?

165 points | by highhedgehog 1761 days ago

35 comments

  • sunir 1761 days ago
    APIs are interfaces (it’s right in the name!) and should never be directly tied to implementation because:

    1. The interfaces must remain stable for the outside world that relies on them.

    2. They select what underlying resources and functionality are accessible to outside users, and what is hidden. A lot of your internal implementation is either a mess, “temporary”, insecure, or intentionally internal.

    3. They control access to the internal application through authentication, authorization, security, and translating data in both directions.

    4. When the internal representation changes, they map the new implementation to the old interface to ensure the system remains reliable for API consumers (see the sketch below).

    5. They offer migration paths when change is necessary

    That being said...

    Auto API generators are really useful for internal systems where you control the underlying system, the API, and all systems relying on the API.

    They are also useful to build an initial API that you plan to fork.
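
    A toy sketch of point 4 above (Python, with illustrative names, not the commenter's code): the internal representation changes while a thin mapping layer keeps the published interface stable.

    ```python
    # Hypothetical example: the internal store now returns split name fields,
    # but the public API keeps serving the old shape its consumers rely on.
    class UserStore:  # new internal implementation
        def fetch(self, user_id: int) -> dict:
            return {"id": user_id, "given_name": "Ada", "family_name": "Lovelace"}

    class PublicUserAPI:  # stable external interface
        def __init__(self, store: UserStore):
            self._store = store

        def get_user(self, user_id: int) -> dict:
            user = self._store.fetch(user_id)
            # translate the new internal shape back into the old contract
            return {"id": user["id"], "name": f"{user['given_name']} {user['family_name']}"}

    print(PublicUserAPI(UserStore()).get_user(1))  # {'id': 1, 'name': 'Ada Lovelace'}
    ```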

    • 013a 1761 days ago
      Yeah I agree with your stance, but not the conclusion. The way that gRPC (and many other systems) handle this is beautiful and the way all APIs should be built: your API is a specification, not code, so you start with the spec (SDL), then generate the adapters your implementation needs to plug into it.

      This elevates changes to the API itself into something explicit; you can easily write automated systems that detect changes to the SDL files (see the sketch below). Or, the way companies like Namely [1] do it, keep those SDLs inside a separate repo, then publish the adapter libraries on private npm/etc to be consumed by your implementation.

      [1] https://medium.com/namely-labs/how-we-build-grpc-services-at...
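
      A hedged sketch of the "detect changes to the SDL files" idea (not Namely's actual tooling; the branch name and api/ path are assumptions): a CI step that fails whenever a .proto file changes, so API changes always get an explicit review.

      ```python
      # Hypothetical CI check: flag pull requests that touch API definition files.
      import subprocess
      import sys

      changed = subprocess.run(
          ["git", "diff", "--name-only", "origin/main...HEAD"],  # base branch is illustrative
          capture_output=True, text=True, check=True,
      ).stdout.splitlines()

      api_changes = [f for f in changed if f.startswith("api/") and f.endswith(".proto")]
      if api_changes:
          print("API definitions changed; these need explicit review:")
          print("\n".join(api_changes))
          sys.exit(1)
      ```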

      • remote_phone 1761 days ago
        This has been around since the early 90s or earlier. ONC RPC did this: you define the interface file and it generates the client and server stubs for you.

        NFS is based on this, as are other services. Conceptually it’s exactly the same, with some underlying differences.

        • majewsky 1761 days ago
          Going even further back in history, ASN.1 is also like this. It's a description language for data structures, and there are separate representations that can be derived from them. It's sort of like JSON, JSON Schema and Protobuf in one.

          TLS certs are encoded in ASN.1 DER, for instance, and LDAP messages are encoded in ASN.1 BER.

      • sunir 1761 days ago
        I was reading the OP's question as generating the API from the implementation. Your point about generating the implementation (i.e. a Proxy interface) from the API spec is right on.

        Thanks for the Namely case study as well. It was timely reading. :)

    • bsaul 1761 days ago
      Completely agree.

      I like to design systems in this order:

      1. My data model (database schema), because it gives you good questions to ask about the business side of your problem and lets you go very deep just by asking « for each a, how many b's can we get ».

      2. My external API, because it requires you to « dumb down » your problem and see which parts of your model you want to expose, and how.

      3. Only then do I start coding my business processes, and bind them to the model on one end and the API on the other.

      If I were to use a code generation tool, it would need to generate both the DB and the API stubs, together with the correct information exposure. I’m not aware of any tool that lets you do that.

      • encima 1761 days ago
        OpenAPI and RAML both have some pretty cool tools to generate both the DB and the API stubs.

        For OpenAPI, Connexion by Zalando is one of the best implementations I have used. You just need to write the logic and provide the API spec.
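
        For anyone curious what that looks like, a minimal Connexion sketch (file and operation names are illustrative, and details differ a bit between Connexion versions): the OpenAPI document drives routing and validation, and you only write the handlers it references.

        ```python
        # Minimal Connexion app: openapi.yaml is a hypothetical spec whose
        # operationId resolves to the get_pet function below.
        import connexion

        def get_pet(pet_id):
            # only the business logic lives here; validation comes from the spec
            return {"id": pet_id, "name": "Rex"}

        app = connexion.App(__name__, specification_dir=".")
        app.add_api("openapi.yaml")

        if __name__ == "__main__":
            app.run(port=8080)
        ```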

        • heavenlyblue 1761 days ago
          So I assume you do most of the serialization and deserialization of objects by hand in Connexion?
          • encima 1760 days ago
            Mostly, yeah. It is not perfect but we have a helper library we have written that handles most CRUD operations for objects and Connexion does the validation.
    • bottled_poe 1761 days ago
      Respectfully, I somewhat disagree. APIs are indeed interfaces which should be seen as specifications and should not change. The problem is assuming that the API specification would be generated from the API implementation. The dependency is pointing the wrong way.
      • sunir 1761 days ago
        I think we agree; if the API spec generated an implementation (a Proxy actually), it is much more stable, and the API endpoint can be an off-the-shelf library that adapts as protocols improve (e.g. XML -> JSON -> whatever)
  • bluGill 1761 days ago
    This is not an XOR question. Both are valid for different APIs. I start with the problem and design the right solution for it.

    Often I have a simple problem where I can write a simple, clean API quickly by hand. Generation is a negative there: generated APIs tend to be complex and hard for the user to read.

    Sometimes my requirements need something that a tool does better. For example protobuf gives me an efficient over the wire API that can be used in multiple languages: I'll let protobuf generate those APIs as I can't do better by hand (though we can debate which tool is better for ages).

    Sometimes I have a complex situation where I'll write my own generator. For example I once made a unit system generator for C++: it was able to multiply light-years by seconds and convert to miles/fortnight - no way would a handwritten API support all the code needed for that but with generation it was automatic (why you would want to do the above is an exercise for the reader). The API was easier to understand than boost's unit system (APIs are about compromises so I won't claim mine is better)
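
    Not the parent's generator, just a toy Python illustration of why hand-writing every unit combination explodes: once dimensions are tracked as exponents, products like light-years × seconds fall out for free, which is the kind of combinatorial surface a generator covers and a hand-written API realistically can't.

    ```python
    # Toy dimensional analysis: track exponents of metres and seconds.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Quantity:
        value: float
        metres: int = 0
        seconds: int = 0

        def __mul__(self, other: "Quantity") -> "Quantity":
            return Quantity(self.value * other.value,
                            self.metres + other.metres,
                            self.seconds + other.seconds)

    LIGHT_YEAR = Quantity(9.4607e15, metres=1)  # one light-year, expressed in metres
    SECOND = Quantity(1.0, seconds=1)

    print(LIGHT_YEAR * SECOND)  # Quantity(value=9.4607e+15, metres=1, seconds=1)
    ```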

  • Just1689 1761 days ago
    In a few projects where we had a specific experiment we needed insights from, we ran PostgREST [1].

    Basically you create your tables and run PostgREST. Bam! You have an HTTP interface/API for your database. We would then create light wrappers around those that took on specific responsibilities - security, audit, etc. The wrapped APIs are what we exposed publicly.

    This may not sound all that helpful but it made the bit we implemented unbelievably tiny. As a plus, we found that a Java application that exposes an endpoint and calls an endpoint is fast to start / stop because it doesn't mess around with DB connection pools.

    [1] http://postgrest.org/en/v5.2/
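
    A rough sketch of the "light wrapper" idea in Python (URLs, table name, and the auth header are assumptions; PostgREST itself needs no code): the wrapper adds its own security/audit concerns and proxies everything else straight through.

    ```python
    # Hypothetical thin wrapper in front of a PostgREST instance on localhost:3000.
    import requests
    from flask import Flask, abort, jsonify, request

    app = Flask(__name__)
    POSTGREST_URL = "http://localhost:3000"

    @app.route("/api/items")
    def list_items():
        if request.headers.get("X-Api-Key") != "secret":  # stand-in auth check
            abort(401)
        # audit hook would go here (who asked for what, and when)
        resp = requests.get(f"{POSTGREST_URL}/items", params=request.args)
        return jsonify(resp.json()), resp.status_code

    if __name__ == "__main__":
        app.run(port=8000)
    ```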

    • krueger71 1760 days ago
      I've also used PostgREST successfully. It was first meant as a tool for rapid prototyping, but it worked so well that we kept it. We ended up separating the database into a schema called api, consisting only of views of the core domain tables available in the data schema. PostgREST exposed the api-schema only. This way we could model a stable interface and vary the low-level details of the domain tables. Writes were handled by instead-of triggers on the views. So far this has been the quickest way of building an API that I know of.
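
      A hedged sketch of that api/data split as a Python migration script (schema, table, and column names are illustrative; the trigger syntax assumes PostgreSQL 11+). PostgREST is then pointed at the api schema only.

      ```python
      # Hypothetical DDL: expose a view in "api" over the internal "data" schema,
      # and make it writable with an INSTEAD OF trigger.
      import psycopg2

      DDL = """
      CREATE SCHEMA IF NOT EXISTS api;

      CREATE VIEW api.items AS
          SELECT id, name FROM data.items;

      CREATE FUNCTION api.items_insert() RETURNS trigger AS $$
      BEGIN
          INSERT INTO data.items (name) VALUES (NEW.name);
          RETURN NEW;
      END;
      $$ LANGUAGE plpgsql;

      CREATE TRIGGER items_insert
          INSTEAD OF INSERT ON api.items
          FOR EACH ROW EXECUTE FUNCTION api.items_insert();
      """

      with psycopg2.connect("dbname=app") as conn:  # connection string is a placeholder
          with conn.cursor() as cur:
              cur.execute(DDL)
      ```
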
    • royjacobs 1761 days ago
      How do you handle versioning? If you add a new required field to one of the tables (perhaps even a field that doesn't have a default value), how do you make sure the consumers of your old API keep working?
      • steve-chavez 1760 days ago
        You can handle versioning with PostgreSQL schemas. You can have v1, v2, etc. These usually contain views and stored procedures.
    • nprateem 1761 days ago
      That sounds like a good balance. I've looked at postgrest in the past but the thought of writing my auth logic in SQL and relying on row-level perms made me sweat too much.
  • bpizzi 1760 days ago
    At work (enterprise stuff) we've grown tired of duplicating thousands of lines of boring CRUD stuff and turned to code generation. Which is so much better.

    The workflow now is:

    - think really hard for 10 minutes about the business problem,

    - describe it into our meta language (typed structs, UML-like, really simple),

    - instantly click'n'build a whole set of API endpoints down to SQL create/alter/drop statements, along with full up to date documentation,

    - get excited to be able to deliver so much stuff to customer in no time,

    - aaand finally receive a requirement update ('the last one I promise') and send I-love-you letters back in time to our old-selves for such a nice malleable framework (which I dubbed The Platform).

  • pier25 1761 days ago
    For the last few years we've done everything by hand using Node or Go.

    If I was to start a new API today I'd use Hasura. It automatically creates a GraphQL schema/API from a Postgres database. It's an amazing tool.

    https://hasura.io/

    • llamataboot 1761 days ago
      Can you do a full write-up about this tool?

      Looks really interesting to easily layer a graphQL API on top of a Rails app with a few serverless functions...

      • pier25 1761 days ago
        A "full write up" seems a bit intimidating... :)

        I'll expand a bit on my previous comment.

        So the idea is that Hasura is a stateless layer on top of Postgres that generates all the necessary GraphQL schema/queries/mutations/real-time subscriptions for doing CRUD based on the Postgres schema. If you change the tables (either via the Hasura admin or some migration system) it all adapts automatically, as you'd expect. It can use a remote Postgres DB; you don't need to run the API and DB on the same machine.

        Performance is fantastic. Hasura is very efficient in terms of speed and memory consumption. Even with a free Heroku dyno you should get thousands of reqs/s.

        On top of direct data from tables you can also read Postgres views. Essentially you can read a custom SQL query from GraphQL.

        Hasura can also integrate external GraphQL schemas via a mechanism it calls "stitching". The idea is that you can point remote GraphQL schemas to Hasura (on top of the current one from Postgres) and it will serve as a gateway of sorts between all your GraphQL clients and servers.

        Hasura does not include authentication, but it's very easy to integrate with your current system or with services like Auth0 via JWT.

        Hasura also includes a powerful fine grained role-based authorization system.

        Whenever anything happens you can configure Hasura to call a URL (webhook) to do something. Maybe a REST endpoint or a cloud function. This is usually the way to integrate server side logic.

        The only problem we've found is integrating Hasura with our current authorization system. Our users have multiple roles and we have no way of deciding which is the current role. Hasura requires a single role to be passed to its authorization system on the request headers. This is something that is being worked on AFAIK.

        Their youtube channel has lots of little videos showcasing all the functionality.

        https://www.youtube.com/channel/UCZo1ciR8pZvdD3Wxp9aSNhQ/vid...
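
        For a sense of what consuming the generated API looks like, here is a hedged sketch of querying a Hasura endpoint from Python with requests (the URL path, header names, and table are illustrative and vary by Hasura version and auth setup):

        ```python
        # Hypothetical query against Hasura's GraphQL endpoint.
        import requests

        HASURA_URL = "http://localhost:8080/v1/graphql"

        query = """
        query RecentArticles {
          articles(limit: 5, order_by: {created_at: desc}) {
            id
            title
          }
        }
        """

        resp = requests.post(
            HASURA_URL,
            json={"query": query},
            headers={
                "Authorization": "Bearer <jwt>",  # or an admin secret in development
                "x-hasura-role": "reader",        # the single role mentioned above
            },
        )
        print(resp.json())
        ```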

    • goddtriffin 1761 days ago
      I've created a few projects using the Node/Express ecosystem and have so far loved it. I'm starting to branch out and learn Go now. Can you discuss/compare your experience working in both ecosystems?
      • pier25 1760 days ago
        We switched from a small Node Hapi monolith to Go at the end of 2017. The main reason to switch was that we wanted types, to get more explicit code. We considered a number of options (TypeScript, .NET Core, Dart, etc) and ended up picking Go because it's a nice, simple language and the performance is great compared to Node.

        We reduced memory usage by 80% over Node. We never had a performance bottleneck with Node either but it feels nice to be running on the smallest Heroku dyno and knowing you won't need much more for at least a couple of years.

        As for the developer experience we vastly prefer Go over JavaScript. It's more tedious at times but there is no more ambiguity. We love that we barely need any dependencies. Moving from JS to Go was extremely easy as all devs in our team are polyglots and Go is pretty simple. I don't know how easy it would be for a JS only dev, but I imagine it wouldn't be too hard.

        When using NPM/Node/JavaScript it seems there are always hidden dangers, probably more in the front end than when doing backend Node. With Go there are no surprises, everything feels solid and predictable.

        After about 2 years with Go we are still happy with the decision.

        • goddtriffin 1760 days ago
          Thanks for the detailed response! I was definitely going to ask 'why not TypeScript?' if your issue was mainly types, but you beat me to it! I reached similar conclusions regarding the benefits you witnessed switching to Go; it's nice seeing them spelled out.
  • DSotnikov 1761 days ago
    We use OpenAPI to define APIs. There is an extension with new template generation, intellisense, snippets, etc for VS Code: https://marketplace.visualstudio.com/items?itemName=42Crunch...
  • escanda 1761 days ago
    Nobody remembers SOAP anymore, it seems, haha. It's funny, but all those new documentation and code generators for REST were largely invented for SOAP long before.

    It doesn't make sense to send SOAP messages to browsers, but I cringe every time I find myself with a vaguely documented REST API when integrating systems.

    • hacker_123 1761 days ago
      I similarly cringe at vaguely documented APIs, but being a young developer, my experience with REST has been better. For instance, I've consumed a SOAP API where the WSDL specification was primarily a method named "Magic" that accepted a string "Method" and six string-typed parameters, "Parameter1" through "Parameter6".

      I think the key is to pick a documentation tool that the team will actually use.

  • t0astbread 1761 days ago
    Disclaimer: This is not based on real world knowledge. (To be honest I have practically no "real world knowledge".)

    That being said, I just finished a school project where we (our class) were divided into small teams and we had to implement small RESTful web apps. My team chose to kick it off by grabbing two people from the front- and backend teams and writing an API specification by hand. It was a breeze and we were done in a few hours. After that, front- and backend (almost) never had to interact with each other again until the end of the project, when we had to stick the two things together.

    This probably isn't applicable to real-world cases where the requirements are ever-changing and everyone's a full-stack dev (or you don't have a team at all), but I found this sort of separation quite useful for this project. (It kept team sizes manageable, different kinds of devs were in separate teams, and we didn't have to wrestle with any tooling that would halt the whole project.)

    I see no problem with generating client/server boilerplate from spec though (like Swagger does, I think).

    • t0astbread 1761 days ago
      This sort of philosophy could be useful when designing a public-facing API though. In that case you need a well-formed implementation-unaware API documentation and mapping it out upfront by hand could save you lots of trouble.
  • streetcat1 1761 days ago
    Use gRPC. With one definition file you can generate:

    1) Client code in various languages.

    2) Server code in Go, Python, or Node.js.

    3) Swagger docs.

    4) A REST interface if you want one.

    5) Gorm definitions if you use golang gorm.

  • vyshane 1761 days ago
    We've been using gRPC and Protocol Buffers for the last couple of years. We write APIs using the Protobuf interface definition language, then generate client libraries and server side interfaces. Then it's a matter of implementing the server by filling in the blanks.
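
    As a hedged illustration of the "filling in the blanks" step, here is a Python sketch assuming a hypothetical users.proto compiled with grpcio-tools into users_pb2 / users_pb2_grpc (all names are made up):

    ```python
    # Only the method bodies are hand-written; transport, serialization, and the
    # service interface all come from the generated code.
    from concurrent import futures

    import grpc
    import users_pb2
    import users_pb2_grpc

    class UserService(users_pb2_grpc.UserServiceServicer):
        def GetUser(self, request, context):
            return users_pb2.User(id=request.id, name="Ada")

    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    users_pb2_grpc.add_UserServiceServicer_to_server(UserService(), server)
    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()
    ```
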
    • kminehart 1761 days ago
      I love protobuf for this reason. Personally I've opted for Twirp instead of gRPC, as gRPC has a lot of baggage, and streaming is really not necessary for me.

      We've had to drop-in-replace, or add a validation or access layer service for something, and using protobuf has made this super easy. Anything interacting with that service is none the wiser.

      • vyshane 1761 days ago
        gRPC has been solid for us on the JVM, and streaming has been great when consuming from Apache Flink jobs, integrating with message queues, receiving push notifications and so on. For async work it's useful to have more than just request/response.

        I've been playing with the FoundationDB Record Layer for a personal project of mine, and with this setup I can generate not only the API implementation, but also the models used by the persistence layer:

        Protobuf (Messages) -> gRPC -> Scala/Monix -> Protobuf (Models) -> FoundationDB

        • praneshp 1761 days ago
          > Protobuf (Models)

          Sounds really cool! Is this something that comes out of the box or generated by your own plugins?

          • vyshane 1761 days ago
            FoundationDB Record Layer uses protocol buffers out of the box. They leverage the fact that you can evolve protobuf messages in a sane way. That's their equivalent of doing database schema migrations.
    • stunt 1761 days ago
      We've used Apache Thrift for the same reason on some projects.
    • praneshp 1761 days ago
      Are you writing internal APIs or exposing some to external developers also? Are those external developers able to start from JSON and make a request?
      • vyshane 1761 days ago
        Both. And if your external clients would rather consume a JSON/REST API, it's easy to derive that from a gRPC API. You can do it right there in your protobuf definition. It's actually easier to do it that way than to deal with OpenAPI's wall of YAML.
  • fimdomeio 1761 days ago
    (Very, very small team.) We have some handmade scripts in place to generate basic CRUD endpoints; generated files are then adjusted to the specific needs, but it goes a long way in keeping things organized and consistent with very little effort.
  • cwilby 1761 days ago
    In my case I like the end product to be code. I use snippets/generators to create components (models/controllers/middleware) then modify as needed.

    Having used LoopBack before: it's a quick way to get an API up and running, but I personally struggle with injecting logic into endpoints/writing custom endpoints.

    If the code's "all there", I know where to look. If I have to intercept hooks it adds an extra layer when searching.

    In summary, LoopBack has been great for creating APIs where all I care about is CRUD, but for larger projects I stick with snippets/generators so I can extend more easily later.

  • steve_taylor 1761 days ago
    Lately, I’ve been getting back into Spring Boot. Spring Data REST automates a lot of the CRUD endpoints, with easy enough configuration and customization. I’ve been declaratively securing it all with Spring Security.
    • vbsteven 1761 days ago
      I prefer to code one level deeper and I mostly use plain Spring MVC controllers. That way I can still have spring security for the endpoints but it keeps the endpoints more decoupled from the repositories.

      I typically have a repository generated by Spring Data, a small service layer with business logic on top of those and then an MVC controller that only talks to the service layer, never the repositories.

      Each controller also has its own DTO class(es) for request bodies and responses and a small converter between DTO and entity. Kotlin extension methods make it easy to add the toDto() method onto the entity so a typical controller will fetch the entity from the service and return entity.toDto().

      Kotlin, Spring Boot and Spring Data are amazingly well suited for this.

      • victor106 1760 days ago
          Spring Framework, and Spring Boot in particular, have made enormous progress in recent times, and combined with the performance of the JVM it's one of the best ecosystems to do this in.

          Also, you could use projections in place of DTOs.

      • steve_taylor 1760 days ago
        I was doing things manually too, even security!

        You don't really need DTOs because you can use projections and set a default projection to be used when that entity type is returned in a collection. Any entity fields that should never be exposed can be annotated with @JsonIgnore. And then if you need endpoints that aren't CRUD, you can build those the usual way.

        • vbsteven 1759 days ago
          I’ll check out the projections as they seem interesting and I don’t know them very well.
  • meddlepal 1761 days ago
    For personal projects I'll hand code them (usually) because I like thinking about API design and API UX.

    For professional stuff... it really depends. I like gRPC, but codegen needs team buy-in... It can quickly make a fast development loop hurt if done poorly. Doubly so if IDEs are involved for some users and the IDE is constantly updating its caches of types and interfaces. I've just seen it turn into a hot, frustrating mess very quickly.

  • fwouts 1761 days ago
    We tried writing OpenAPI docs to implement a contract-first development workflow, with the idea that backend & frontend/mobile engineers would agree on the API interface by discussing OpenAPI changes in a pull request, and only then start implementing it (on the backend side) and using it (on the client side).

    This didn't pan out well, because it turns out OpenAPI isn't very easy to read, especially when you're reviewing a diff in a pull request. We didn't get the engagement we were looking for in pull requests.

    We've since invested in building a simpler, human-friendly API description language based on TypeScript, which exports to OpenAPI 3. It's still early, but we've got a lot of positive feedback and quick adoption across the company (50 engineers).

    You can check it out at https://github.com/airtasker/spot. Feel free to send us feedback in GitHub issues or by replying to this comment :)

  • citrusx 1761 days ago
    I prefer to write them by hand. Most APIs, to start, don't have a lot to them. They tend to grow in scope over time. So, it's pretty easy to just throw together your initial idea, and incrementally grow it from there.

    I might think differently if confronted with a huge API surface area to build off the bat, but I haven't run into that yet.

  • ChrisMarshallNY 1761 days ago
    Manually. However, I have had the luxury of implementing relatively small APIs. If I was doing something like the Google APIs, I'd probably consider automation. That said, I'd probably want to write the automation, myself, as I'm an inveterate control freak.
  • avinium 1760 days ago
    For HTTP APIs, I'm a full convert to OpenAPI - write your API document by hand, then code-gen the client/server stubs.

    It requires a small investment upfront, but will pay huge dividends once your project is rolling. You have a single source of truth for publicly exposed endpoints and model descriptions (your API document), and you can instantly regenerate certain key components (e.g. model binding, new routes, etc) whenever that document changes.

    I actually contributed the F#/Giraffe generator to the OpenAPI generator project, which you can find at https://github.com/OpenAPITools/openapi-generator

  • abetlen 1760 days ago
    Yeah, a couple of years ago we switched from using vanilla Flask to Connexion, which lets you describe your API through an OpenAPI spec. Connexion handles routing and request validation, and our developers can just import the YAML into Postman for testing, as well as use Redoc for generating pretty documentation sites. Overall the biggest pain point, as others have mentioned, is writing and maintaining the spec. OpenAPI's structure can take some time to get used to, and maintaining the whole API in one file is a little tough, but it's not unmanageable with code folding and good schema definitions.
  • andreasklinger 1761 days ago
    Imho it matters less.

    What's important is that you have rigorous testing around your API.

    APIs are essentially external contracts people build against. You don't want to break this contract.

    Make sure it:

    - never changes unless you know about it

    - updates the documentation whenever it changes

  • mschuster91 1761 days ago
    I used Silex for a long time and, when it got deprecated, moved over to Symfony's MicroKernel (https://symfony.com/doc/current/configuration/micro_kernel_t...). Tiny enough to get started with in a matter of minutes, and when your project grows bigger you can easily refactor either the whole project or just parts of it to the "standard" Symfony architecture.
  • SergeAx 1760 days ago
    I've tried two approaches.

    1) Write code, generate Swagger/OpenAPI from it. Works pretty well with big frameworks like Spring for Java or Symfony for PHP. Drawback: it is too easy to change the API, which tends to break backwards compatibility too often.

    2) Write Swagger/OpenAPI, generate code stubs from it. Works well enough with Go and TypeScript. Tends to keep client-server contracts stable. Drawback: server code is overly complicated and needs an extra layer of DTOs to convert from domain terms to API models.

    edit: the 2nd approach is also good for automated testing.

  • bestouff 1761 days ago
    I'm using a custom protocol on top of MQTT. I have a big CSV file with all the topics/payload types/etc. specified, which is then used to generate a common library for our software services. Thanks to Rust's nice code generation capabilities, I have several types (many enums) which automatically serialize/deserialize from/to MQTT messages, checks included. Really cute.
    • jph 1760 days ago
      Nice! Can you say more about how you're able to do the Rust aspects of code gen and checks?
  • znpy 1761 days ago
    It’s interesting that no one mentioned CORBA... if anyone has success/horror stories to share about CORBA, I’d gladly listen.
    • shmooth 1759 days ago
      blast from the past.

      anyone still using CORBA? or implementing new projects with CORBA?

  • pavelevst 1760 days ago
    I write API code manually and use a testing tool that also generates an OpenAPI file and saves it in git. This keeps the API docs always up to date and gives a history of changes to the actual API via git. (Stack: Rails, RSpec, and a gem for OpenAPI.)
  • scardine 1760 days ago
    I use Django REST Framework, which may or may not be an automated tool depending on the definition you are using - but DRF makes APIs very declarative and I love it (batteries included).
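
    As a small sketch of how declarative that gets (a hypothetical Book model and the standard DRF pattern, not the commenter's code): a serializer, a ModelViewSet, and one router.register call yield full CRUD endpoints.

    ```python
    # Pieces that would normally live in models.py / serializers.py / views.py /
    # urls.py of a hypothetical Django app.
    from django.db import models
    from rest_framework import routers, serializers, viewsets

    class Book(models.Model):
        title = models.CharField(max_length=200)

    class BookSerializer(serializers.ModelSerializer):
        class Meta:
            model = Book
            fields = ["id", "title"]

    class BookViewSet(viewsets.ModelViewSet):
        queryset = Book.objects.all()
        serializer_class = BookSerializer

    router = routers.DefaultRouter()
    router.register(r"books", BookViewSet)
    # in urls.py: urlpatterns = [path("api/", include(router.urls))]
    ```
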
  • graycat 1761 days ago
    Okay, I understand some APIs:

    (i) TCP/IP

    (ii) HTTP

    (iii) ASN.1

    (iv) SQL

    (v) The key-value session state store I wrote for my Web site (cheap, simple, quick, dirty version of Redis).

    Etc.

    Now, how can the design and programming of such APIs be "automated"????

  • bjacobt 1760 days ago
    I use feathers [1] and like it a lot.

    [1] https://feathersjs.com/

  • kkarakk 1761 days ago
    Most languages have a library that takes a JSON structure from a file and creates an API, e.g. json-server on Node.js. I just use that initially until the "need" for the DB becomes clear, i.e. what data I need to interact with. After that it's custom all the way - it's more malleable, I find.
  • Mayeul 1760 days ago
    Yes, I use the Springfox implementation for Java for both the server API and the client API (mainly for automated tests).
  • brianzelip 1761 days ago
    What’s an example of automating API endpoints in Node.js? I always just whip up an Express.js MVC by hand.
  • llamataboot 1761 days ago
    I code them by hand, but I like to automate as much of the documentation generation as possible...
  • wheelerwj 1761 days ago
    It depends on the stage of the project, I think.

    I think early stage and MVP projects are almost always written by hand.

    • clavalle 1761 days ago
      I feel the opposite: automated tools are really useful for smallish POC type things -- MVPs and early stage work, but fail when things reach a certain level of complexity.