It's a pair of really simple Python scripts: cirunn.py for executing locally, and ciorder.py to execute stuff via ssh on another host (useful for projects that are too big for the laptop).
The main advantage is that podman does not need root access at all, making it both easier to use and more secure.
Also, no installation.
The difficulty I found here is deciding what the user actually means to run: is it more like `make test`, where the current state of the working directory is tested, or more like the CI pipeline, where only committed changes are tested?
For the sake of practicality, the local script tests uncommitted changes, and the remote one checks the last commit.
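Roughly this idea, as a hedged sketch rather than the actual scripts (the image, the host name, and `make test` are all placeholder examples):

```sh
# Local mode: test the working directory as-is, uncommitted changes included.
# (Image and command are placeholders, not what the real script runs.)
podman run --rm -v "$PWD":/src -w /src docker.io/library/python:3 make test

# Remote mode: ship only the last commit to another host and test it there.
git archive --format=tar HEAD | ssh buildhost '
  rm -rf ~/ci-build && mkdir -p ~/ci-build &&
  tar -xf - -C ~/ci-build && cd ~/ci-build && make test'
```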
IMO you can do this without extra tooling if you keep your CI scripts as shell scripts and your application runs in Docker.
If you keep your CI scripts as shell scripts and put them into a runnable file included in your project, you can run them locally and test the whole workflow on your machine. It's also handy when CI is down: you can still test and deploy your code. Lastly, it lets you easily jump between CI providers, since the majority of your logic lives in a generic shell script.
For example, in my CI-specific files I typically only call `./run ci:install-deps && ./run ci:test`. Then there's a tiny bit of boilerplate in each CI provider's yml file to handle things specific to that provider.
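A minimal sketch of what such a `run` file can look like (the `ci:*` names are from my calls above, but the task bodies here are just illustrative, not my real ones):

```sh
#!/usr/bin/env bash
# Minimal task runner: each task is a bash function, and the last line
# dispatches "./run ci:test" to the ci:test function.
set -euo pipefail

ci:install-deps() {
  # Only tools the pipeline itself needs; the app's deps live in Docker.
  sudo apt-get install -y shellcheck
}

ci:test() {
  shellcheck ./*.sh
  # "web" is an assumed example service name.
  docker compose run --rm web make test
}

"$@"
```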
If your app runs in Docker, most of the heavy-duty dependencies are contained there. The dependencies I install directly in CI end up being things like shellcheck and helper scripts for CI (like tiny scripts that wait until a process is ready). Having WSL 2 is nice here because these are very small tools that I want installed locally anyway, but even on macOS you could install them with brew instead.
It's worked in virtually every CI system, and locally. Bonus point: it really highlights the amount of ceremony some CI platforms require to map simple commands into their pipelines. Looking at you, CircleCI.
I found an approach like this especially nice on TeamCity, in conjunction with its template feature.
Individual apps were still able to have whatever specific things they needed, but you could globally add a step, e.g. a security scanning tool or a Slack notification, by modifying only the template rather than hundreds of build configuration files, as the GitHub/CircleCI approach seems to encourage.
Adding a new pipeline basically just meant naming it and selecting the VCS URL.
I also like this approach, although I am using Nix instead of Docker for a slightly more lightweight way of managing dependencies of different stages.
If you also reduce the assumptions made by your CI scripts, so that the scripts themselves authenticate, set up any needed connectivity and port forwarding, and fetch secrets instead of being handed them by the CI system, then you end up with some really nice portable scripts that can be run from anywhere.
The only things my CI scripts depend on are an authentication token for HashiCorp Vault, an internet connection, and Nix being installed on the machine.
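Concretely, the top of such a script can look something like this (a sketch; the Vault path, the tool list, and the script name are all made-up examples):

```sh
#!/usr/bin/env bash
set -euo pipefail

# Assumes only: VAULT_ADDR/VAULT_TOKEN in the environment, network
# access, and Nix. The path "secret/myapp/deploy" is a made-up example.
DEPLOY_KEY="$(vault kv get -field=key secret/myapp/deploy)"
export DEPLOY_KEY

# Everything else is pinned through Nix instead of preinstalled on the runner.
nix-shell -p shellcheck jq --run ./ci/test.sh
```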
Heh, yeah, besides being a different programming language, how is Node different from Java? :-D
I, too, would value "the one CI yaml to rule them all", but given how much faster GitLab is moving than GitHub, there's almost no prayer of them converging on some kind of common behavior.
I actually had the best experience with CircleCI's local runner (back when we were using it for CI) as far as "run binary, perform build" goes. I have yet to see gitlab-runner successfully execute a local build :-(
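For reference, these are roughly the invocations I mean (the job names are assumed examples from your own config):

```sh
circleci local execute --job build   # CircleCI's local runner
gitlab-runner exec docker test       # gitlab-runner's local execution mode
```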
Earthly lets you abstract anything you do inside a container into an Earthfile that runs locally on a dev's machine but also in any CI, making CI scripts more portable. However, testing other CI-specific things before committing, such as the GitLab rules dictating when jobs actually run, remains unsolved. But that remains unsolved with this glci solution too, right?
Nothing personal; maybe your unfamiliarity with Docker means you're not the target audience. I mean, getting people up to speed with Docker is a bit of an ocean to boil, and this project appears to have limited resources...
Personally, with GitHub Actions I find myself pushing two, three, or four times just to get the pipeline working, because I always mess something up: either I've botched a command or the environment isn't what I expected, etc. There's act for GitHub Actions, which I should use more often to solve this, but I usually forget.
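For anyone unfamiliar, act (https://github.com/nektos/act) runs the workflows from `.github/workflows` locally in Docker. Rough usage, with "test" as an assumed job name:

```sh
act push     # simulate a push event and run the triggered workflows
act -j test  # run only the job named "test"
```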
Don't you continuously need to push changes to `.gitlab-ci.yml` just to find out whether they work, e.g. a tweaked docker run command? I haven't tried the tool yet, but I hope it will save time and spare me from pushing hundreds of commits to sort out my pipeline changes.
From what I understand, you use it to test your pipeline scripts. With code, we run unit tests locally to make sure our commits are okay. When developing CI scripts, you often only find out that things don't work after you push them to GitLab, which leaves you with many try-and-fix commits in the history.