The entire ecosystem moving to GitHub Actions has been such a loss for productivity. I remember when CircleCI first launched: "Rebuild with SSH", which gave you a bash command to connect to the running instance whenever you wanted, was such a no-brainer, and I'm sure it's why many of us ended up using CircleCI for years. Eventually CircleCI became too expensive, but I still thought that if other services learnt anything from CircleCI, it would be this single feature, because of the number of hours it saved thousands of developers.
Lo and behold, when GitHub Actions first launched, that feature was nowhere to be seen, and I knew from that moment on that betting on GitHub Actions would be a mistake if they didn't launch with such a table-stakes feature. It seems Microsoft still hasn't got its thumb out, and is wasting countless developers' time with this. Sad state of affairs.
Thank you pbiggar for the time we got with CircleCI :) Here's hoping we'll have CircleCI.V2 appearing at some point in the future; I just know it involves DAGs and "Rebuild with SSH" somehow :)
I am surprised Docker didn't launch into the CI market. Running a container build as CI seems like it would be a boon both for simplifying CI caching and for debugging, since it's ~reproducible locally.
They _are_ in the CI market. Two of their products are the Docker Build Cloud and Testcontainers Cloud. IIRC Docker Hub also came with automated builds at some point (not sure if it still does).
I do get your sentiment though. For the position they are in, a CircleCI-like product would seem to be quite fitting.
Wow you're right they are. Yeah, they could really use some improvement there.
https://docs.docker.com/build-cloud/ci/
This could've been a "change runs-on to be this" like all the other faster-GHA-startup products, but instead, the way they've set it up, I would have to keep paying for GHA while also paying for their Build Cloud. No fun!
I've gotten used to this essential feature too via Semaphore CI, and I just can't stand not being able to SSH into a GitHub Action. Debugging is so slow.
I've seen people spend something like 2 hours fixing something that could be fixed in minutes with a normal feedback cycle, instead of the 5-minute "change > commit > push > wait > see results" feedback cycle GitHub Actions forces people into. It's baffling until you realize Microsoft charges per usage, so why fix it? I guess the baffling part is how developers put up with it anyway.
Does not sound like a GitHub failure; sounds like it is the company's failure. They haven't invested in the developer experience, and they have developers who cannot run stuff locally and have to push to CI in order to get feedback.
You can't run a GitHub CI pipeline locally (in general; there are some projects that try, like nektos/act, but they're limited). Even if you make as much of it runnable locally as possible (which you should), you're inevitably going to end up debugging some stuff by making commits and pushing them. Release automation. Test reporting. Artifact upload. Pipeline triggers. Permissions.
Count yourself lucky you've never had to deal with any of that!
Yes, there are a few things you can't do locally. But the vast majority of complaints I see (90%+) are for builds/tests etc. that should have the same local feedback loops. CI shouldn't be anything special; it should be "shell as a service" with some privileged credentials for pushing artefacts.
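For illustration, here's a sketch of that "shell as a service" shape, assuming a hypothetical `./ci/test.sh` that does all the real work (script name, workflow names, and the secret are made up for the example, not from the thread):

```yaml
# Hypothetical workflow: the YAML does nothing but check out the repo and
# run one script, so the exact same command works on a laptop.
name: CI
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run the whole pipeline via one entry point.
        run: ./ci/test.sh
        env:
          # The only CI-specific part: privileged credentials for pushing artefacts.
          ARTIFACT_TOKEN: ${{ secrets.ARTIFACT_TOKEN }}
```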
> Release automation. Test reporting. Artifact upload.
Those I can actually all do locally for my open source projects on GitHub, if I have the correct credentials in my env. It is all automated (which I developed/tested locally), but I can break glass if needed.
> Those I can actually all do locally for my open source projects on GitHub
Maybe I wasn't clear enough in my description, but you definitely can't locally do things like automatically creating a release in a Github workflow, sending test results as a comment to PRs automatically and uploading CI pipeline artifacts locally. Those all intrinsically require running in Github CI.
I agree there is stuff you can't test locally, but in my experience people most of the time are complaining about stuff they should have local feedback loops for such as compiling, testing, end to end testing etc.
You give some good examples, and I agree there is CI-specific stuff that can only really be tested on CI, but it's a subset of what I generally see people complaining about.
> can't locally do things like automatically creating a release in a Github workflow, sending test results as a comment to PRs automatically and uploading CI pipeline artifacts locally.
> uploading CI pipeline artifacts locally
I was actually testing this locally before opening up a pull request to add it. I just have my workflow call out to a make target, so I can do the same locally, using the same make target, if I have the right credentials.
E.g. this workflow triggers on a release.
```yaml
name: Continuous Delivery (CD)

on:
  release:
    types: [published]

# https://docs.github.com/en/actions/using-jobs/assigning-perm...
permissions:
  contents: write
  packages: write

jobs:
  publish-binary:
    name: Publish Binary
    runs-on: ${{ matrix.architecture }}
    strategy:
      matrix:
        architecture: [ubuntu-24.04, ubuntu-24.04-arm]
    steps:
      - name: Checkout code.
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
      - name: Setup Nix.
        uses: cachix/install-nix-action@4e002c8ec80594ecd40e759629461e26c8abed15 # v31.9.0
      - name: Publish binary.
        run: nix develop -c make publish-binary RELEASE="${GITHUB_REF_NAME}"
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} # This token is provided by GitHub Actions.
```
Which, after building the binary, calls this script:
```bash
#!/usr/bin/env sh

set -o errexit
set -o xtrace

if [ "$#" -ne 2 ]; then
  echo "Usage: $0 RELEASE_TAG TARGET"
  echo "$#"
  exit 1
fi

RELEASE="$1"
TARGET="$2"

tar -czvf "${TARGET}.tar.gz" -C "target/${TARGET}/release" "clean_git_history"
gh release upload "${RELEASE}" "${TARGET}.tar.gz"
rm "${TARGET}.tar.gz"
```
So I was able to test large parts of this locally first via `make publish-binary RELEASE="test-release"`.
Can't do much about that when what you're troubleshooting is the CI platform itself. Say you're troubleshooting why the deployment doesn't work; somehow you got an environment variable wrong for whatever reason. So you edit and add an "env | sort" before that, commit it, push it, and so on. With "Rebuild with SSH", you are literally inside the "job" as it runs.
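To make that loop concrete, the throwaway commit being described looks something like this (a hypothetical debug step, not taken from anyone's actual workflow):

```yaml
# Without SSH access, inspecting the job's environment means committing
# and pushing a step like this, waiting for the run, then deleting it again.
- name: Debug environment (temporary)
  run: env | sort
```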
Yes, you can't really debug CI-specific stuff locally, like if you're setting up build caching or something. But it seems like 90%+ of the time people are complaining about builds/tests that should have local feedback loops.
Yeah, fair point, I see that a lot in the wild too. I guess I kind of assumed we all here had internalized the practice of isolating everything into one command that runs remotely, like "make test" or whatever, rather than what some people do and put entire shellscripts-but-yaml in their pipeline configs.
Yeah, every time I see logic in YAML I cringe. Trying to get people at work to use a task runner, or even to call out to scripts, was a fight...
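A sketch of the contrast (step names, scripts, and the make target are all hypothetical):

```yaml
# Anti-pattern: branching logic embedded in YAML, which can only be
# exercised by pushing to CI and waiting.
- name: Build and maybe publish
  run: |
    if [ "${GITHUB_REF}" = "refs/heads/main" ]; then
      ./scripts/build.sh && ./scripts/publish.sh
    else
      ./scripts/build.sh
    fi

# Preferred: a single task-runner entry point that runs identically on a
# laptop, with the branch decision passed in as a parameter.
- name: Build
  run: make build PUBLISH="${{ github.ref == 'refs/heads/main' }}"
```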
Still using CircleCI. I do not love YAML at all; in fact I hate it, because it's basically a 1980s text preprocessor on steroids with dependency management bolted on. Too much logic applied to config that depends on implicit syntax and unintuitive significant whitespace.
I mean, I had an issue once where this broke the pipeline:

```yaml
key:
  - value 1
  - value 2
```

But this was fine:

```yaml
key:
- value 1
- value 2
```

Fuck that noise!

Otherwise it works just as good as it ever did, and I don't miss GitHub Actions, where every pipeline step is packaged into a dependency. I think GitHub has stagnated harder than CircleCI.
> I mean, I had an issue once where this broke the pipeline:
It seems fair to dislike YAML (I dislike it too), but I don't understand how this broke for you unless CircleCI (or whoever) isn't actually using a legal YAML parser.
(This works for any number of leading spaces, so long as the spacing is consistent.)

```ruby
irb(main):009:0> YAML.load <<EOD
irb(main):010:0" key:
irb(main):011:0"   - value 1
irb(main):012:0"   - value 2
irb(main):013:0" EOD
=> {"key"=>["value 1", "value 2"]}
irb(main):014:0> YAML.load <<EOD
irb(main):015:0" key:
irb(main):016:0" - value 1
irb(main):017:0" - value 2
irb(main):018:0" EOD
=> {"key"=>["value 1", "value 2"]}
```

There shouldn't be any difference between those two values. I'm not saying you are wrong and it didn't break, but it's definitely surprising that a parser would choke on that, vs. YAML itself being the problem.
Don't get me wrong, I can empathise with whitespace formatting being annoying, and having both forms be valid just adds confusion; it's just surprising to see this was the problem.