I've been messing around with GitLab as a self-hosted alternative for a few years. I do like it, but it is resource intensive!
For the past few days I've been playing with Forgejo (from the Codeberg people). It is fantastic.
The biggest difference is memory usage. GitLab is Ruby on Rails plus over a dozen services (GitLab itself, then nginx, PostgreSQL, Prometheus, etc.). Forgejo is written in Go and is a single binary.
I have been running GitLab for several years (for my own personal use only!) and it regularly creeps up to using the entirety of the RAM on a 16 GB VM. I have only been playing with Forgejo for a few days, but it is using only 300 MB of the 8 GB of RAM I allocated, and that machine is running both the server and a runner (it is idle, but still...).
I'm really excited about Forgejo and dumping GitLab. The biggest difference I can see is that Forgejo does not have GraphQL support, but the REST API seems, at first glance, to be fine.
EDIT: I don't really understand the difference between gitea and forgejo. Can anyone explain? I see lots of directories inside the forgejo volume when I run using podman that clearly indicate they are the same under the hood in many ways.
EDIT 2: Looks like Forgejo is a soft fork from 2022, when some weird things happened with the governance of the Gitea project: https://forgejo.org/compare-to-gitea/#why-was-forgejo-create...
> I'm really excited about Forgejo
Our product studio, currently around 50 users who need daily git access, moved to a self-hosted Forgejo nearly 2 years ago.
I really can't overstate the positive effects of this transition. Forgejo is a really straightforward Go service with a very manageable mental model for storage and config. It's been easy and cheap to host and maintain, our team has contributed multiple bugfixes and improvements, and we've built a lot of internal tooling around Forgejo which otherwise would've required a much more elaborate (and slow) integration with GitHub.
Our main instance is hosted on premise, so even in the extremely rare event of our internet connection going offline, our development and CI workflows remain unaffected (Forgejo is also a registry/store for most package managers so we also cache our dependencies and docker images).
Wait, forgejo offers a built-in container registry? How does that work? I don't see that in the admin section at all.
Container registry and a lot more, they call it Package registry in the docs https://forgejo.org/docs/latest/user/packages/
Is it not the same as in Gitea? https://docs.gitea.com/usage/packages
edit: Ok, this answers my question: https://forgejo.org/compare-to-gitea/#is-there-a-list-of-fea...
- [deleted]
Just run podman or docker login your.forgejo.instance.address, then push to it as normal. The target repo must already exist. You can check the images under Site Administration -> Packages.
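E.g. something like this (hostname, org, and image names are made up; the path follows the owner/image scheme Forgejo inherits from Gitea):

    podman login forgejo.example.com
    podman tag localhost/myapp:latest forgejo.example.com/myorg/myapp:latest
    podman push forgejo.example.com/myorg/myapp:latest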
Speaking of authentication it also works as an openid provider meaning you can authenticate every other web software that supports it to Forgejo... which in turn can look for users in other sources.
It also has wikis.
It's an underrated piece of software that uses a ridiculously small amount of computing resources.
That's so brilliant. Wow. I'm struggling to wrap my brain around how they not only support OCI (docker) but also APK (alpine) and APT (debian) packages. That's a very cool feature.
"Forgejo is also a registry/store for most package managers"
Do you know if it supports OpenWRT packages?
Since they support Alpine, and OpenWRT recently switched to the wonderful Alpine apk package manager, I guess it is supported.
Ease of maintenance is an even bigger difference. We've been using gitea for a bit over five years now, and gitlab for a few years before that, and gitea requires no maintenance in comparison. Upgrades come down to pulling the new version and restarting the daemon, and take just a few seconds. It's definitely the best solution for self-hosters who want to spend as little time as possible on their infrastructure.
Backups are handled by zfs snapshots (like every other server).
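(Dataset and host names are made up, but it's essentially:)

    snap="tank/gitea@$(date +%F)"
    zfs snapshot "$snap"                                     # instant, atomic snapshot of the data dataset
    zfs send "$snap" | ssh backuphost zfs recv backup/gitea  # optional off-site copy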
We've also had at least 10× lower downtime compared to github over the same period of time, and whatever downtime we had was planned and always in the middle of the night. Always funny reading claims here that github has much better uptime than anything self-hosted from people who don't know any better. I usually don't even bother responding anymore.
I guess I'll just chime in that while GitLab is a very heavy beast, I have self-hosted it for over a decade with little to no issues. It's pretty much as simple as adding their Omnibus package repository and doing apt install gitlab-ce.
When I self-hosted GitLab I never found the maintenance to be that bad: just change the version in compose.yml, sometimes having to jump between blessed intermediate versions if I'd missed a few releases back to back.
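Something along these lines (version tag, ports, and paths are just examples; the point is that an upgrade is bumping the image tag and re-upping):

    services:
      gitlab:
        image: gitlab/gitlab-ce:latest   # pin a real tag here; upgrading = bumping it
        restart: always
        ports:
          - "80:80"
          - "443:443"
          - "2222:22"   # keep host SSH free
        volumes:
          - ./config:/etc/gitlab
          - ./logs:/var/log/gitlab
          - ./data:/var/opt/gitlab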
Like others, I've switched to Gitea, but whenever I do visit GitLab I can't help but think the design / UX is so much nicer.
My usual impression of GitLab is that it has too many functions I don't ever use, so the things I actually do want (code, issues, PRs, user permissions) are needlessly hidden. What's your workflow that you find GitLab's UX to be nicer than Gitea's?
For instance, I just got tripped up trying to sign out of my Gitea instance, since the mobile design has two identical-looking avatar + username blocks on top of each other, one being the org switcher and the other being a menu (with no indicator) containing the sign-out button.
I went to a project page, and it auto focused the search input (???), causing a zoom in on mobile.
I just prefer the design / look and feel of GitLab more than Gitea/Forgejo. It's not really a hot take; GitLab has been around a lot longer and has much more support.
That was my take too. It is a big project with a lot of functionality. But, I never needed all of that functionality, so it just seemed bloated to me. I switched over to Gitea for self-hosted code repositories (non-public repos behind a firewall) a while back and haven't had any issues thus far.
You can pin those you want in the left menu
I found Gitea's interface to be so unusably bad that I switched to full-fat GitLab.
Gitea refused to do some perfectly sensible action; I think it had something to do with creating a fork of my own repo. Looking online, there's zero technical reason for this, and the explanation given was "this is how GitHub does things". Immediately uninstalled. I'm not here for this level of disrespect.
> I found Gitea's interface to be so unusably bad that I switched to full-fat GitLab.
Was this Gitea pre-UI redesign or after? 1.23 introduced some major UI overhauls, with additional changes in the following releases. Forgejo currently represents the Gitea 1.22 UI, reminiscent of earlier GitHub design.
I find Gerrit to also be very low maintenance.
If you have a high-availability or multi-site set-up you also don't need to take any downtime to upgrade.
https://forgejo.org/docs/latest/user/actions/basic-concepts/
It's a shame that GitHub won the CI race by sheer force of popularity and keeps propagating its questionable design decisions. I wish more VCS platforms would base their CI systems on GitLab's, which is much, much better than GitHub Actions.
The CI job definitions for sourcehut are a pleasure to use: https://man.sr.ht/builds.sr.ht/manifest.md
A really neat feature is that you can also trigger a job by just submitting a yaml file (with the web interface, the API or the cli) without needing to push a commit for each job. This is neat for infrequent tasks, or for testing CI manifests before committing them.
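A minimal manifest is something like this (repo URL and task contents are placeholders):

    image: alpine/latest
    packages:
      - go
    sources:
      - https://git.sr.ht/~someuser/someproject
    tasks:
      - build: |
          cd someproject
          go build ./...
      - test: |
          cd someproject
          go test ./...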
Both are yaml jungles, I hate them equally.
What exactly is the advantage of running something like GitLab vs what I do which is just a server with SSH and a file system? To create a new repo I do:
    ssh example.com 'mkdir repos/my-proj.git && cd repos/my-proj.git && git init --bare .'

Then I just set my remote origin URL to example.com:repos/my-proj.git. The filesystem on example.com is backed up daily. Since I do not need to send myself pull requests for personal projects and track my own TODOs and issues via TODO.md, what exactly am I missing? I have been using GitHub for open source projects and work for years, but for projects where I am the only author, why would I need a UI besides git and my code editor of choice?
CI runners would be the main advantage of GitLab over bare Git, I think. Also if you want to show other people your personal project at some point, it may be nice to be able to link to a diff or a historical version of a file that they can see in a browser. Or just a syntax-highlighted file or a rendered Markdown or Jupyter file. Also previous release tarballs.
What exactly is the advantage of running something like a restaurant vs what I do at home, which is just cooking it myself?
-> convenience, collaboration, mobility
Personal projects that you work on by yourself do not need collaboration. I feel like I pretty clearly implied that in my comment.
We at $DAYJOB had an internal git server that was literally what the parent of this comment mentioned (`git init --bare`). It became a little cumbersome, so when I stumbled across forgejo, I was happy to see that importing the existing git repos was a breeze, just had to point the config to look at the existing git storage root and assign groups and permissions via the GUI.
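It was basically just something like this in app.ini (path is made up), plus the adopt-unadopted-repositories bits in the admin UI for anything that didn't already match an owner:

    ; app.ini
    [repository]
    ROOT = /srv/git/repositories   ; existing bare repos need to sit at <ROOT>/<owner>/<repo>.git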
Collaboration and specifically collaboration with non git nerds. That's primarily what made GitHub win the VCS wars back in the day. The pull request model appealed to anyone who didn't want to learn crafting and emailing patches.
Yes, it's the PRs, and there is a misunderstanding I think because the OP and the GP's use-cases are quite different. Self-hosting your own repository on a remote server (and perhaps sharing it with 1 or 2 collaborators) is simple but quite different than running a public open source project that solicits contributions.
I specifically was talking about “personal projects” and excluded PRs for the reason that I would be the only contributor.
I'd argue that if you can't prepare a patch diff, then your abilities as a contributing developer should be thoroughly questioned.
Yes, the projects that are emailing patches around generally have a much higher bar than the ones that accept GitHub PRs, but whatever works for a given project, I guess.
You don't! Forges are for collaboration outside of the rhythm of git commits. Are you happy to make a new commit every time you have something to add to an issue? With X issues and Y comments an hour, polluting the git timeline with commentary is going to become unhelpful.
Some forges even include(d) instant messaging!
This is kind of like asking what the point of Dropbox is when we have rsync. Rsync is nice, but most people won't know how to use it.
Setting up a server with SSH and GitLab is more work than setting up a server with SSH. Dropbox is great and I use it but only because I can’t get the same functionality out of rsync without major additional orchestration. But if I am the only one working on my own project why would I need a second read-only UI for my own code?
If you're working alone you can also send raw IP packets down the wire by way of telegraph key if you'd like. What you do alone behind closed doors isn't really anyone's business and is up to you. For everyone else, the benefit of using Gitlab is that once it's set up, a wide range of users of varying skill levels and backgrounds can use it to collaborate.
> why would I need a UI besides git and my code editor of choice?
If you ever find yourself wishing for a web UI as well, there's cgit[1]. It's what kernel.org uses[2].
[1]: https://git.zx2c4.com/cgit/ [2]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
I wish I could search in Gitlab like I could in Jira's SQL-esque query syntax, but otherwise its interface is a step-up, if still pretty "busy" for my taste.
If you want even more minimal, Gerrit is structured as a Java app with no external dependencies like databases, and stores all its configuration and runtime information on the filesystem, mostly as data structures in the git repos.
A shared filesystem is all you need to scale/replicate it, and it also makes the backup process quite simple.
I might be one of the few that is intrigued by this being that it’s Java but this looks really neat. Does it do git repositories like gitea, GitHub, etc, or is it more of a project management site for the repositories? They describe it as “code review”, so I wasn’t sure.
I'm a little put off by the Google connection, but it seems like it could run rather independently.
It necessarily hosts a git server (using jgit), but the primary interface is as a code review tool.
Even browsing the git repos it hosts uses an embedded version of another tool (Gitiles).
https://gerrithub.io/ is a public instance
It's hyper-focused on code review and CI integration, which it does really well.
It's not focused on all the other stuff that people think of in code forges (hosting the README in a pretty way, arbitrary page hosting, wiki, bug tracking, etc.) but can be integrated with 3rd party implementations of those fairly trivially.
> I’m a little put off on the google connection but it seems like it could run rather independently.
Yeah, it's actually a really healthy open-source project. Google usually contributes around 40% of the code, but you have other companies like GerritForge (disclaimer: I work there), Nvidia, SAP, Qualcomm, and the Wikimedia Foundation all contributing heavily to it.
The deployment may be simple, but at the same time, the Gerrit code review workflow is terrible.
Coming from Github myself, I cannot imagine going back to it after using Gerrit for even just a few days.
The workflow in Gerrit really makes a lot of sense; unfortunately it's the workflow in GitHub that has screwed up everyone's idea of what code review should look like[1], even by one of GitHub's co-founders' own admission.
[1] https://medium.com/@danielesassoli/how-github-taught-the-wor...
I personally find the rebase- and stacked-commit-focused method of integration that Gerrit uses to be easier and cleaner than PRs in GitHub.
Having done CI integrations with both, Gerrit's APIs send pre- and post-merge events through the same channel, instead of needing multiple separate listeners like GitHub.
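E.g. a single SSH stream carries all of them (host and user are placeholders):

    # patchset-created, comment-added, change-merged, ... all arrive as JSON lines on one stream
    ssh -p 29418 ci-bot@gerrit.example.com gerrit stream-events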
We've been looking at Forgejo too. Do you have any experience with Forgejo Actions you can share? That is one thing we are looking at with a little trepidation.
I set up Actions yesterday. There are a few tiny rough edges, but it is definitely working for me. I'm using it to build my hugo blog, which "sprinkles" in a Svelte app, so it needs to have nodejs + hugo and a custom orchestrator written in Zig.
What I did:
* used a custom docker image on my own registry domain with hugo/nodejs and my custom zig app
  * no problems
* store artifacts
  * required using a different artifact "uses", v3 instead of v4 (uses: actions/upload-artifact@v3)
  * an example of how there are some subtle differences from GitHub Actions, but IMHO this is a step forward, because GitLab CI YAML is totally different
  * can't browse the artifacts like I can on gitlab, only allows download of the zip. Not a big deal, but nice to verify without littering my Downloads folder.
* Unable to use "forgejo-runner exec", which I use extensively to test whether a workflow is correct before pushing
  * Strange error: "Error: Open(/home/runner/.cache/actcache/bolt.db): timeout"
  * I think GitLab broke this feature recently as well!
* Getting the runner to work with podman and as a service was a little tricky (but now works)
  * Mostly because the docker socket is not created by default on podman
  * And the docker_host path is different inside the runner config file.
* There are two config files: one (JSON) is always stored in .runner and contains the auth information and IP, and the other is YAML, needs the -c switch to specify it, and has the config of the runner (docker options, etc). It's a bit strange there are two files IMHO.

> * Strange error: "Error: Open(/home/runner/.cache/actcache/bolt.db): timeout"
This will occur if you have a `forgejo-runner daemon` running while you try to use `exec` -- both are trying to open the cache database, and only the first to open it can operate. You could avoid this by changing the cache directory of the daemon by changing `cache.dir` in the config file, or run the two processes as different users.
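Something like this in the runner's YAML config (the one you pass with -c) should do it, assuming I'm remembering the key names right:

    cache:
      enabled: true
      dir: /var/lib/forgejo-runner/exec-cache   # any separate dir, so exec and the daemon stop fighting over bolt.db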
> It's a bit strange there are two files IMHO.
The `.runner` file isn't a config file, it's a state file -- not intended for user editing. But yes, it's a bit odd.
We use them in our shop. It's quite straightforward if you're already familiar with Github Actions. The Forgejo runner is tiny and you can build it even on unsupported platforms (https://code.forgejo.org/forgejo/runner) e.g. we've setup our CI to also run on Macs (by https://www.oakhost.net) for App Store related builds. It's really quite a joy :)
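For anyone who hasn't tried it: workflow files are basically GitHub Actions syntax living under .forgejo/workflows/. A trivial sketch (the runs-on label and the make target depend entirely on how your runner and repo are set up):

    # .forgejo/workflows/build.yml
    on: [push]
    jobs:
      build:
        runs-on: docker
        steps:
          - uses: actions/checkout@v4
          - run: make build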
Are you building MacOS apps? More specifically, are you doing code signing and notarization and stamping within CI? If so, is this written up somewhere? I really struggled with getting that working on GitLab. I did have it working, but was always searching for alternatives.
One concern the post brings up - single point of failure. Yes, in this case, blah blah big company microsoft blah blah (I don't disagree, but..). I'm more worried about places like Paypal/Google/etc banning than the beast from Redmond.
Self-hosting is still a single point of failure, and the article arguing for "mirroring", well... it allows redundancy for reads, but what about writes?
It's an interesting take on a purist problem.
Redundancy for read access to the source code is a concern for Dillo. Some years ago, the domain name registration lapsed, and was promptly bought by an impersonator, taking the official repository offline. If it hadn't been for people having clones of the repository, the source code and history would have been lost.
How do people find your online project and know it's you (instead of an impersonator) without relying on an authority, like GitHub accounts or domain names? It is a challenging problem with no good solution. At least now the project is alive again and more resilient than before.
I think it's a fair concern. E.g. Forgejo is a simple directory on disk, with an option to turn that into S3 storage. It really is a no-brainer to set that up for as much resilience as necessary, with various degrees of "advanced" depending on your threat model and experience. The lack of a FAANG/M in the equation makes it even more palatable.
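For reference, flipping that to S3-compatible storage is just a few lines of app.ini (endpoint, bucket, and credentials are placeholders):

    [storage]
    STORAGE_TYPE = minio
    MINIO_ENDPOINT = s3.example.com
    MINIO_BUCKET = forgejo-data
    MINIO_ACCESS_KEY_ID = <key id>
    MINIO_SECRET_ACCESS_KEY = <secret>
    MINIO_USE_SSL = true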
I found the banning comment to be odd. That said, all it really takes is a policy change (something that I see as far more likely in Microsoft's case) or simply a change in the underlying software (again, somewhat likely with Microsoft) for the platform to become unusable for them. Keep in mind that Dillo is a browser for those who can't or don't want to fit into the reality of the modern web.
- [deleted]
I used, administered, set up, and customized many on-prem GitLab instances for years. GitLab doesn't memory leak; you're making that up. It's exactly as resource intensive as the set of services you configure. Can't say the same for JIRA et al.
This comment makes me suspect this entire thread as some astroturfing for that other product.