on travis for a repository that builds 14,479 objects, 67 libraries, and 456 static executables, 284 of which are test executables which are run too. If I want to run all the test binaries on freebsd, openbsd, netbsd, rhel7, rhel5, xnu, win7, and win10 too, it takes 15 additional seconds. On a real PC, building and testing everything from scratch takes 34 seconds instead of two minutes.
I don't see how anyone can give you useful information without knowing more about the pipeline and the projects, since everyone's pipelines and projects work differently. (I do web dev work, so my pipelines are relatively simple; I can imagine that a game dev team creating Windows/Mac/Linux builds might have multi-hour pipelines, though.)
Anyway as the question is "How Long Is Your CI Process", here we go!
I have two main types of pipelines, both running on a self-hosted GitLab instance which runs on an 8th-gen i3 Intel NUC. No project is particularly massive.
1. PHP Projects. Run PHPStan + unit tests on each branch. Most projects take 1-5 mins. On master, run PHPStan + unit tests, build a Docker image, and use Helm to deploy to managed Kubernetes on DigitalOcean. This takes 5-10 mins.
2. React Projects, again not massively huge, but sizable. Biggest time is to run ESLint on every branch. About 5 mins (due to very poor caching which I keep meaning to fix). On master, run ESLint, create a Docker image, and deploy to managed Kubernetes. 5-10 mins.
There are opportunities to improve this by fixing/optimising caching. Overall I'm reasonably happy with the pipeline performance. I'm also sure that upgrading the hardware would make a big difference, probably more so than fixing the caching; an i3 isn't really ideal but this machine does well overall for my small team.
Computing is cheap these days. If somewhere has a multi-hour build, that is a noxious build-system smell. Run away, or make sure the role involves getting paid to optimize it. If a build takes 2 hours and you work 8-hour days, you get four tries in a day to get it right. Four!
That is no way to do modern computing. Things were worse back in the day (we compiled uphill, both ways), but we're not in those days any more. Buy bigger & faster servers with more RAM until the problem goes away. It won't necessarily be cheap. But if the company is too stingy, and would prefer to be pennywise and pound foolish (saving "pennies" on a server vs developer time to sit there waiting for "compiles"), you don't want to work there.
What drives me a bit nuts is when dynamically evaluated languages (which theoretically punt their compilation to lazy on-demand runtime compilation/evaluation) become monstrosities in the build cycle complete with painfully slow compilation and package resolution.
Automated tests that my team runs vary from ~seconds to multiple days, depending on what's being tested. Some of the tests involve compiling a multi-billion line repo using 30+ languages, and doing some analysis on the resulting code graph. So that takes a while.
30-45 minutes just for a simple test suite, even if it's PHP, Python, and Ruby - that sounds long. But without any details on exactly what's being tested, it's hard to say.
Yes, this is for the Kythe (née Grok) team at Google. We build a giant cross reference graph of the codebase. So - most teams don't have to build the whole codebase, but the Kythe team does.
Here is a somewhat outdated talk given by our tech lead: https://www.youtube.com/watch?v=VYI3ji8aSM0. That doesn't really get into any of the CI/CD stuff though, I don't know what if any of that stuff is publicly shareable.
This talk is FASCINATING!
Not every organization maintains gigantic polyglot codebases, and it's interesting to see what kinds of challenges arise when that's the operational reality. I would never have realized the need for something like Kythe, because I largely work on codebases that are written in one language.
I feel like a lot of what you all are working on might one day osmose nicely into something CI systems and cloud-based analysis tools use to decouple themselves from individual language semantics - kind of like how MapReduce, Bazel, Kubernetes, et al found brand new use cases outside of Google's organization years after they were invented and instituted.
It is indeed a very interesting and somewhat unique problem space. Our current "killer feature" is simply powering the cross references for internal google code search / IDE / etc. But we're always thinking about ways to expand.
We are sort of trying to stuff it into CI/cloud stuff, though there are some hard problems around dealing with the enormous variety of build systems out there. It works okay in bazel/blaze, it can work for gradle or maven with a lot of work, and a few other systems. But the amount of custom build tooling that projects use makes it difficult to wire up the entrypoint for Kythe in a generic way.
What matters is the development process - local build & test should be fast.
Otherwise, with CI/CD, it's a continually-moving release train where changes get pushed, built, tested, and deployed non-stop and automatically without human intervention. Once you remove humans from the process, and you have guard rails (quality) built into the process, it doesn't matter if your release process for a single change takes 1min, 1hour, or 1day.
Even if it takes 1 day to release commit A, that's OK b/c 10min later commit B has been released (because it was pushed 10min after commit A).
I've seen pipelines that take 2 weeks to complete because they are deploying to regions all over the world - the first region deploys within an hour, and the next 2 weeks are spent serially (and automatically) rolling out to the remaining regions at a measured pace.
If any deployment fails (either directly, or indirectly as measured by metrics) then it's rolled back and the pipeline is stopped until the issue is fixed.
Yes, even for fixing production issues. You should have a fast rollback process for fixing bad pushes and not rely on pushing new patches.
CI is fundamentally about feedback loops. The timing of the feedback is second only to the reliability of the feedback. Unfortunately a lot of people don’t achieve either. The worst use the consequences as a way to complain about CI.
Yes, if you don’t know what something is for, you’re not going to enjoy using it.
Far too many people don't get this. I've been pulled into projects that had a terrible pipeline and devs who were pushing commits and wasting so much time waiting to get feedback about very simple problems. This kills rapid development.
It took some redesign, but I was finally able to demonstrate how much could be done locally before pushes. Some just need their eyes opened up.
The system I described is actually a cloud system, and we had both stubs and mocks of all our dependencies (which is easy, because they were other cloud systems and we could easily stand up a fake service with the same API when doing integration tests, or switch to use local data when doing unit tests).
We also performed testing against live dependencies but with test accounts to ensure that our stubs/mocks were accurate and up-to-date, and captured realistic interactions (and failures).
I've done the same with hardware systems, again using stubs/mocks of HW dependencies for unit tests and then using actual HW for integration testing.
The time spent investing in stubs/mock quickly pays dividends in both increased development speed and test coverage, especially as you can inject faults and failures (bad data, timeouts, auth failures, corruption, etc).
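The fault-injection idea above can be sketched quickly. This is a minimal Python illustration; the billing client and its error type are hypothetical stand-ins, not anyone's real API:

```python
# Sketch of a fake dependency that supports fault injection.
# FakeBillingClient and UpstreamTimeout are made up for illustration.
import time


class UpstreamTimeout(Exception):
    """Simulated network timeout from the fake dependency."""


class FakeBillingClient:
    """Stands in for a real billing service in unit tests."""

    def __init__(self, fail_with=None, latency=0.0):
        self.fail_with = fail_with  # exception instance to raise, if any
        self.latency = latency      # simulated network delay in seconds

    def charge(self, account_id, cents):
        time.sleep(self.latency)
        if self.fail_with is not None:
            raise self.fail_with
        return {"account": account_id, "charged": cents, "ok": True}


# Happy path:
client = FakeBillingClient()
assert client.charge("acct-1", 500)["ok"]

# Injected failure, to exercise the caller's error-handling path:
flaky = FakeBillingClient(fail_with=UpstreamTimeout("upstream timed out"))
handled = False
try:
    flaky.charge("acct-1", 500)
except UpstreamTimeout:
    handled = True
assert handled
```

The same shape works for injecting bad data, auth failures, or corruption: make the failure a constructor argument so each test declares exactly which fault it is exercising.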
TypeScript takes less than a minute to build, and basic PR validations (the simple regression, conformance, and unit test suites) add around 10 minutes of test running to that (to be fair, I can run those locally in just under two minutes, we just use slow CI boxes, and local incremental build and test can bring that loop down even more). The extended test suites that run on a rolling basis on `master` and on-demand on PRs can be much longer and take up to two hours to run (the longest extra suite being the DefinitelyTyped suite, where the CI system runs all of DefinitelyTyped's tests on both nightly and your PR/master and reports any changes). Technically, there is also a github crawler running periodically that rebuilds anything public and open source it finds with the latest TS and reports new crashes, and that's _constantly_ running, so I can't really say that has a fixed run time, per se. Turns out the closer you get to building the world with your (build tool) project, the longer it takes, but the more realistic your coverage becomes.
A lot of the drag on CI for complex projects is tests, which are hard to argue against. Complexity : need for tests isn't linear -- once you hit some critical mass of complexity where one person can't know the whole application, the need for tests skyrockets.
I joined a company last year that's trying to solve this by tracing tests so it can skip any whose dependencies (functions, environment variables, etc.) haven't changed. It's amazing what "what if we don't run tests we know will pass?" can do to a CI pipeline.
A similar pipeline with comparable tools for Rails takes ~4-5 minutes and Phoenix takes ~4-5 minutes too. You can replace "flask" with "rails" and "phoenix" in the above URL to see those example apps too, complete with GH Action logs and CI scripts. These mainly take longer due to the build process for installing package dependencies, plus Phoenix has a compile phase too.
We have a CI pipeline for a cross-platform Rust library, and it currently takes an hour across C, Android, iOS, Java, WASM, etc. and different combinations of cryptographic libraries. This is probably something we’ll tune over this or next quarter, such as by throwing some beefy hardware at it and parallelizing. We also seem to be hitting some GitHub actions limits in terms of storage.
The only times when it was long enough that it was painful it was because there was a stage that couldn't be debugged without running the build. That's invariably what I actually preferred to fix, not the total lead time.
A 45 minute sanity check to verify nothing is fucked before releasing is fine. A 45 minute debugging feedback loop is a nightmare.
Faster CI builds are typically a nice-to-have rather than a critical improvement (& doing too many nice-to-haves has killed many a project).
We have a CI pipeline for a containerized python app and a react app. We have a monorepo and only trigger certain jobs depending on code changes. Our CI runs through Gitlab CI on a GKE cluster, which gives us a lot of control over the parallelism and the resources allocated.
Our pipeline typically takes 10-30 minutes, depending on what jobs run and where cache gets used.
The longest job, at a consistent 12 minutes, is our backend test job. There’s not a lot we can do to speed this up any further because a lot of the tests run against a test db, so we can’t easily run them in parallel. Perhaps if we wanted to be really clever we could use multiple test dbs.
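The "multiple test dbs" idea is a fairly common pattern: give each parallel worker its own database. A small Python sketch, assuming pytest-xdist (which exposes the worker id in the `PYTEST_XDIST_WORKER` environment variable); the database naming scheme here is made up:

```python
# Sketch: one test database per parallel worker, so DB-backed tests can
# run concurrently without stepping on each other's data.
import os


def db_name_for_worker(base="app_test"):
    """Derive a per-worker database name (app_test_gw0, app_test_gw1, ...).

    pytest-xdist sets PYTEST_XDIST_WORKER to "gw0", "gw1", etc. When tests
    run serially the variable is unset and we fall back to the plain name.
    """
    worker = os.environ.get("PYTEST_XDIST_WORKER")
    return f"{base}_{worker}" if worker else base
```

A session-scoped fixture would then create/migrate `db_name_for_worker()` once per worker and point the app's connection string at it.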
The build for our containers is usually very quick (a few minutes) unless we modify our package requirements.txt. That happens infrequently but it triggers an install step that will increase the overall time for the job to 10-12 minutes.
The deploy phase is very quick.
We spent a bit of time optimizing this and it came down mostly to:
1. Using cache where we can.
2. Ensuring we had enough resources allocated so that jobs were not waiting or getting slowed down by lack of available cpu.
3. Making sure that each command we run is executing optimally for performance. Some commands have flags that can speed things up, or there are alternate utilities that do the same thing faster. One example of the latter is that we were using pytype as our type checker, but it often took about 15 minutes to run. We swapped it out for pyright, which takes under 5.
Depends on the complexity of what you are doing. For web stuff with unit tests I’ve seen it run in a few minutes.
Our current CI takes an hour because it has to build quite a complex app on iOS and Android, this happens in parallel but the Azure build nodes we use are pretty slow. Ideally it would be faster but it’s not too huge an issue in practice, we have the lint/unit tests etc. run first so the build will fail early for any glaring errors.
In my experience, the biggest wins in CI speed improvements come from parallelization. You can parallelize by either running multiple processes/containers or by running tests in parallel on the same container (jest, parallel_tests, etc)
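As a toy illustration of the sharding half of this, here is a Python sketch; `run_shard` is a placeholder where a real setup would shell out to pytest, jest, parallel_tests, etc.:

```python
# Sketch: shard test files and run one shard per worker in parallel.
import os
from concurrent.futures import ThreadPoolExecutor


def shard(files, n):
    """Round-robin files into n roughly equal shards."""
    shards = [[] for _ in range(n)]
    for i, f in enumerate(sorted(files)):
        shards[i % n].append(f)
    return shards


def run_shard(files):
    # A real runner would shell out here, e.g.
    # subprocess.run(["pytest", *files], check=True).
    return files  # placeholder so the sketch stays self-contained


def run_all(files, workers=None):
    workers = workers or os.cpu_count() or 1
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_shard, shard(files, workers)))
```

In practice you shard by historical runtime rather than round-robin, so the slowest shard does not dominate the wall-clock time.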
About 5-10 minutes from push to deploy. Python/Django monorepo. 1-2 devs for several years kinda project size.
Build and test steps take about equal time. We build from a common docker image which has most of the time consuming work already done.
It can take longer if the Python deps have changed and therefore the ‘poetry install’ step cannot be pulled from the cache.
Also, we deploy multiple individual Django projects, rather than one huge monolithic project. That probably gives some speed up. It means that changes to common code can trigger 5-15 pipelines, but they all take a similar amount of time.
30-45 minutes seems like a really long time to me. Maybe you have a lot of slow tests, but I’d also look at the build process. If you’re doing docker images you may find you can extract a lot of the time-consuming work into a common base image. You can also get plugins that help docker pull already-built layers from a cache.
If it is the tests then you could always try running tests in parallel. One worker per CPU or some such.
FWIW - I find that these long feedback loops can really kill productivity and morale. 10 mins for a deploy is about my limit.
This is such an open question it is hard to answer. You have to know what runs in the CI as well as size of the project, languages, number of projects, quality steps executed in build, etc. Anyway, to give it a shot:
* Tests running in build: nunit, msvstestv2, jest, karma
* number of tests running in build > 5000
* package managers used: Nuget, npm
* number of packages (private and public) > 500
Still a lot I forgot now.
It all runs in approximately 45 mins for stage 1 builds; stages 2-4 run nightly and weekly and take much longer (>2 hours to >24 hours for the long-duration stage 4). Higher stages run longer test suites (up to approx 50k tests for stages 3 and 4), more quality checks, etc.
P.s. We spend countless hours reducing our build times. In addition we have setups to split build pipelines for those who do not need the entire archive build for their dev purposes etc. Yet the CI server always runs single-core, cold builds.
My current job is a Node+React app that takes 8 minutes to go through CI. It feels subjectively a LOT better than my last job, which was Go+Node+React and took about 15 minutes, BUT...
The slowest part of the previous CI process was our integration tests on Selenium. And the new stack doesn’t have any of those (it just does unit tests in Karma).
And frankly, I think I’d take the 15 minutes with the extra security of knowing the whole stack is functioning together, over the speedup to my dev cycle.
But I feel a bit crazy saying that. In the end, the site doesn’t seem to go down due to the lack of integration tests. Maybe because we complement with manual testing. I never deploy without opening up the site in a browser anymore.
CICD systems deliver value to audiences. CI is mostly for the developer team, so you can check your changes don't break other's work, or vice versa. Often there's a CD to an internal system, so QA can take a look to see the new feature works according to the business expectations, and the business can play with it.
None of the above really matters, the important bit is that USERS actually see the work! Everything else is necessary, of course, but doesn't create value in itself.
So, the question is, how does each system create VALUE for its audience, and what's the latency (LAG)? CI is often for 4-10 developers, and takes ~10-20 minutes for smallish web shops. The value the business gets is that devs can check they didn't forget to "git add" a file :)
Devs and the business always complain about the slowness of CICD, but rarely invest the modest effort to make it faster. Here are some ways to improve the development cycle:
Speed up databases. Move from "install database and sample data interactively every time" to having a pre-baked Docker image with the database and seed data. Much faster: you get lower LAG and the same VALUE for the team.
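With the official postgres image, for example, any SQL dropped into `/docker-entrypoint-initdb.d/` runs automatically the first time the data directory is initialized, which already beats loading sample data interactively; truly pre-baking the data directory into the image at build time takes more work but removes even that one-time cost. A minimal sketch (the file names are illustrative):

```dockerfile
# Illustrative only: schema.sql / seed_data.sql are placeholder file names.
FROM postgres:15
# Scripts in this directory run once, when the container's data
# directory is first created.
COPY schema.sql seed_data.sql /docker-entrypoint-initdb.d/
```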
Run fewer tests. Running tests creates business value -- confidence a deployment will give features to users -- but takes time (LAG). However, for 90% of the cases Devs get value by running a subset of the tests. Thus, much faster: less LAG, same VALUE. Run all the tests before a real deploy, or run the full suite nightly. Devs get the value of a full test without having to wait for it.
Simplify. CICD should just run things Devs can run locally. That is, Devs can run fast local test subsets to get rapid feedback (low LAG), and get focused VALUE. When CICD tests fail, it's very easy for Devs to figure out what went wrong, because CICD and local environments are nearly identical.
CICD creates a lot of value for several audiences. Plot out each one, and see what you, the business, want to improve upon!
It's going to depend on the size of your code base and tests but when I was at AWS by the time we built and ran our test suite it was close to that time as well (30-45 min).
It's really interesting how many companies these days have a primary pricing model of build minutes.
If you are looking for a DIY solution for your CI, check out https://tinystacks.com. We have the fastest way to launch and operate your Docker app on AWS. In one click, we setup infra and an automated pipeline on your AWS. Uses ECS with Fargate. All setup for you with a control center for logs, env vars and scaling. No config nightmare.
Email me safeer at tinystacks.com and I can get you onboarded.
It’s really down to goals... you should have something that triggers based on open pull requests and does a sanity check and ideally deployed to a test environment... that should be “quick” to give feedback in addition to reviews...
Then in the main build that hopefully is deploying to a qa environment that can do more testing, bundle artifacts for whatever dependencies need them, all that kind of stuff...
That’s how ours is set up... we use Jenkins with parallel parts where possible (like building the UI while tests that hit the db run). It’s a process that takes time to get right and time to optimize...
We’re at about 5 mins for the quick part and 8 or so for the slower part
Both of those will probably get worse as we are planning to include full ui testing on the deployed environment too
Several hours, which is far too long. It's a massive waste of developer time and money, but try explaining to diverse contributors ("features features features!") that some investment in the testing setup would save money in the long run...
Oh yeah; this was the best part of moving to Erlang/OTP... Test suites are absurdly fast (nearly) all the time. Most test suites I use take less than 10 seconds with anywhere from 100-1000 unit tests. The worst "monoliths" I've seen take at most ~90 seconds to run, and that is only ever the case if folks are creating insanely many objects or needlessly testing the internals of `gen_server`.
Large Ruby on Rails application. The entire test suite takes around 40 minutes to run, however we use CircleCI and parallelize the build, so wall-clock time is around 10 minutes.
What makes the build so slow is that the database is involved; if you want fast builds, decouple your unit tests from the database. With Rails, including database access in tests makes everything easier and gets you closer to real-life execution, but it's slow...
We have multiple PHP repositories but the longest one currently takes around 7 minutes wall time (that includes: tests, static analysis, code style and a few other misc smaller things). Scaling the phpunit tests is actually easy in terms of throwing money at it, as the suite of 15k+ tests can be diced and sliced to run segments in parallel (a bit of scripting + github action matrix). Billing time is around 40 minutes I'd say.
The frontend/TS stuff takes longer, usually 10-11 minutes, where it's "truly building", and we can hardly parallelize this one. Or we lack the expertise to fix it, probably.
At the moment though this is non-container environment; once we add building/deploying into the mix, I'd assume the time will go up a bit.
CI generally is a topic where there is usually lots of potential for optimization, but many things are not easily done with common tooling. Some large companies put serious R&D effort into improving CI with custom tooling.
What is applicable to the specific project depends, same as to what is worth which effort. To a degree, of course throwing more resources at the problem helps - faster build workers, parallelized tests, ... but isn't always easily implemented on a chosen platform and costs money of course.
In projects I worked on, it varied greatly. From just a few minutes to cases where the full process took 6 hours (which then was only done as a nightly job, and individual merge requests only ran a subset of steps). I really would want <15 mins as the normal case, but it's often difficult to get the ability to do so.
We had the same kind of problem, where we saw our tests and builds take 20-30 minutes.
We also noticed that our own machines could run the tests significantly faster, mainly because a desktop CPU can easily boost its clock speed for intense workloads. By comparison, most CIs use cloud VMs, which rarely go beyond 3 GHz. We found this quite strange.
After some talk we decided to build a CI service based on this premise, i.e. desktop CPUs outperform cloud CPUs for the CI use case. After some months we managed to create BuildJet.
I would say it at minimum cuts the build time in half, and the best part is that it plugs right into GitHub Actions; you just need to change one line in your GitHub Actions configuration.
30 minutes? I don't have exact numbers, but I know ours is under 10 minutes and IMO that is not optimized at all. Maybe 30 minutes is okay for a very large project, but for most applications that seems quite high to me.
Our pipeline runs in Jenkins and builds a docker image that runs composer installs, application copy, and that sort of thing. We also run phpunit, phpstan, phpmd, and phpcs in our pipeline. Finally the image gets pushed up to ECR.
I think that's all pretty standard stuff. TBH I'd like us to move to github actions and optimize for more staged builds in our docker images, but we have higher priorities at the moment.
The D compiler takes about 25 minutes, GCC + D frontend tests takes about an hour.
There is absolutely a huge amount of room for performance in areas like this. With Python especially it's very common to think "Ah yes, but numpy" when it comes to performance, and that is true in steady state where you are just number crunching, but there are mind-numbingly large amounts of performance left on the table vs. even a debug build with a compiler. Testing in particular is lots of new code running for a short amount of time, so it's slow when interpreted.
2 minutes on GitHub Actions from commit -> yarn install (90% of the time we download from cache) -> webpack build with esbuild-loader -> netlify draft deploy for an instant staging link -> smoke tests in parallel that hit the netlify URL with a real Chrome browser (we use Browserless for that). ESLint, prettier, unit tests and tsc typecheck run in parallel.
Basically cache + parallelize.
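The shape of that setup on GitHub Actions looks roughly like the fragment below; the job name and `yarn` script names are illustrative, but `actions/cache` keyed on the lockfile and a matrix of parallel checks are the two load-bearing pieces:

```yaml
# Illustrative GitHub Actions fragment: restore the yarn cache keyed on
# the lockfile, and fan independent checks out as parallel matrix jobs.
jobs:
  checks:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        task: [lint, typecheck, test]   # assumed package.json scripts
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4
        with:
          path: ~/.cache/yarn           # yarn v1's default cache dir on Linux
          key: yarn-${{ hashFiles('yarn.lock') }}
      - run: yarn install --frozen-lockfile
      - run: yarn ${{ matrix.task }}
```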
Once PR passes, merge to master deploys in a minute. If something is wrong we can revert within a minute.
It’s joyful to build things when your tools are fast and reliable.
Usually it depends on what you need to run. Running some tests in an interpreted language should be done in a few minutes. With a compiled language it takes longer, maybe half an hour. If you have a compiled language that's slow to compile and needs additional checks because the language is more footgun than anything (yes, C++), then you might want a standard build, builds with various sanitizers, and some static analysis, and you end up with hours spent on building and analysis.
It really depends on what you're trying to achieve and how big the project is.
Roughly 30-40 minutes before tests, and another 30 minutes of tests. I work in games, and compiling the game on one platform is ~5-10 minutes even for an incremental compile on "compute optimized" instances on Azure and AWS (compared to ~30s on my workstation). It takes 20 minutes (per platform) to generate the runtime texture/audio files, and ~10 minutes to upload them to a shared drive. We do 4 platforms right now; my last project was ~10x bigger and did 10 platforms to boot.
Similar scope to yours but Java and Gitlab CI. The CI to dev takes about 15 minutes or so. A shared lib is first built and tested, then several applications are built and tested in parallel. After everything is built, things are deployed serially. About 3 of those 15 minutes are startup times for runners (no clue what we use, but it's super slow), another 2-3 for deployment, about 1:30 for compiling and the rest is E2E tests. The whole thing takes about 4 minutes on a ~2015 era Mac.
Most of our builds (.net Core with a react front-ends) take around five minutes from push to having a release ready. Haven't really needed to optimise them at all. Roughly a minute for each of npm build, dotnet build, test and publish.
Deployment takes a little under a minute in total.
Worst one was probably a big Sharepoint application at one client's site. But that still only took about 12 minutes in total.
At my current job, the full build/test/release cycle is about 45 minutes. There is an effort to begin optimizing it but it is a high risk endeavor that only became worth it once the costs started growing faster than our team size and became unsustainable.
CI tech debt is very difficult to pay down, and imho not worth it unless the dollar costs are becoming excessive and you have a dedicated release or DevOps engineer who can own it as an internal product.
I've set up multiple CI systems and it really depends on what you need to test. A long time ago I built a CI for a team that ran end-to-end integration tests and collected code coverage from each running service. This took between 4 minutes and 10 minutes to run for our 3 to 6 services. At another job I set up a git repo + CI for a team of about 15ish people. In the beginning we had no CI, then I containerized everything and the CI took a long time (~20 minutes). Then, I switched to a build/test system that was more in tune with what we needed and I ultimately (through some hacks) got the entire CI time for ~20ish microservices down to <1 minute since I was caching everything that wasn't changed with Bazel. After that I added a stage that collected code coverage from all 20 of those services, which was much slower since Bazel had a hard time understanding how to cache that for some reason. This brought it back up to 4ish minutes.
The main blockers I've seen to CI performance is:
1. Caching: Most build systems are intended to run on a developer's laptop and do not cache things correctly. Because of this most CIs completely chuck all of your state out of the window. The only CI that I've found that lets you work around this is Gitlab CI (this is my secret for getting a <1 min build/test CI pipeline).
2. What you do in CI: If you want to run end-to-end integration tests, it's going to be slow. Any time you're accessing a disk, accessing the network, anything that doesn't touch memory, it's slow. Make sure your unit tests are written to use Mocks/Fakes/Stubs instead of real implementations of DBs like sqlite or postgres or something.
3. The usage pattern: If you don't have developers utilizing your CI machines 100% of the time you are "wasting" those resources. People will often say "let's autoscale these nodes" and, when you do, you'll notice they scale down to 1 node when everyone is asleep; then everyone starts work and pushes code, and the CI grinds to a halt. You can make a very efficient CI just by having the correct number of runners available at the correct time.
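The mocks/fakes point in item 2 is worth a concrete sketch. Here is a hypothetical Python example: `InMemoryUserStore` is a made-up fake standing in for a real DB-backed store, so the unit under test never touches disk or network:

```python
# Sketch: swap a real database store for an in-memory fake in unit tests.
# InMemoryUserStore is hypothetical; the real one would wrap a DB driver.
class InMemoryUserStore:
    """Dict-backed fake with the same interface as the real store."""

    def __init__(self):
        self._rows = {}

    def save(self, user_id, data):
        self._rows[user_id] = data

    def get(self, user_id):
        return self._rows.get(user_id)


def promote(store, user_id):
    """Example unit under test: pure logic against the store interface."""
    user = store.get(user_id) or {}
    user["role"] = "admin"
    store.save(user_id, user)
    return user


# A unit test against the fake runs in microseconds:
store = InMemoryUserStore()
store.save("u1", {"role": "user"})
assert promote(store, "u1")["role"] == "admin"
```

Because the fake and the real store share an interface, the same test logic can be re-run against the real database in a slower integration stage.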
Another thing to consider: anything you can make asynchronous doesn't need to be fast. If you setup a bot to automatically rebase and merge your code after code review then you don't really need to think about how fast the CI is.
About 10 minutes. 1-2 minutes to refresh a docker container, ~2 minutes to build a mostly C codebase in the docker container, ~5 minutes to build a bunch of python environments and run unit tests in them. 3 or so minutes due to bad design choices in Gitlab-CI. Project has around 100k loc.
I think we could get it down to 3 minutes or so if we changed some things, but 10 minutes vs 3 minutes doesn't really change the workflow for us.
Depends on what parts changed. 2 hours of tests for simple changes, 8 hours for the complex changed-everything stuff. We have broken up the system so the first is far more common.
Note that half of the tests on the fast build are regression tests that can't possibly fail based on my changes... we run them anyway because about once a month something has a completely unexpected interaction, and a test fails that the developer didn't think to run.
That seems reasonable to me if you require heavy integration test coverage. I work on several applications. The ones that are message driven and don't require a database have test suites that run in a couple minutes. The one that has a large database component takes about 30 minutes to run the tests. This is because we actually run the tests against a real database which requires migrations, data loading, etc.
From 2 minutes to 10 minutes. We have mostly Go microservices so building is fast.
The pipeline is: build, unit test, lint in parallel, then package and save the relevant artifacts, then build a Docker image, then run the integration tests, and finally deploy (staging, dev or prod depending on the branch).
We also have end to end tests that run periodically and are a bit longer, but they're not on the path to prod.
Depending on the repo, between 3-4 minutes and 3 hours. Fast is some quick checks that the repo builds; slow is an FPGA synthesis and place/route. None of them are particularly large in terms of LOC. Probably the slowest part of the process on the non-FPGA builds is installing python packages.
Think of build and test times as being determined by what people are willing to put up with. If people only start getting annoyed at the runtime once it's past 45 minutes, it'll probably take about 45 minutes. People will keep adding things that slow it down, such as new dependencies.
For the kinds of projects you mention (scripting language, small-medium sized) I aim for 1-2 minutes max, which is usually not a problem. This precludes running a lot of integration tests requiring expensive setup/teardown, though the need or value of those greatly depends on the project.
Does anyone here have any tips/tricks when it comes to iOS builds?
Currently experimenting with Travis-CI, but man it sure does take a while, roughly 45-60 minutes in my personal case. I have heard a dedicated Mac of some kind to leave at the office may help. Overall, I am all ears for any advice.
28-34 minutes. Massive highly-tested rails application, running on circleci. Only about 12 minutes of that is actually running tests, we parallelize them across roughly 60 containers, worked on by a team of ~100 engineers.
The truth is that most slow pipelines "could" be optimized to run wildly faster, but that it is costly to do so. You may be able to find low-hanging fruit that affect the build-time significantly, but most of the optimizations to be done are very large projects, like updating thousands of tests to be isolated from the database.
You get what you pay for? Cloud CI bottom-feeds on unused downmarket capacity like a magazine-stand calling card. And Cloud CI is still "Cloud", i.e., a mass-market, one-size-fits-all solution, like the Department of Motor Vehicles.