Rust Build Systems

There was a recent post that was linked to in This Week in Rust that delves into some experiments with building Rust code via Meson rather than Cargo. Meson is, for those who don't know, the officially blessed build system for Gnome, written in Python and configured in its own Python-like DSL. I'm not going to express an opinion here on Meson itself other than to say it is most definitely an improvement over autotools, but I am going to express a strong opinion on replacing Cargo for building Rust projects.

Please don't. Just stop, right now.

On building C and C++

There have been, over the years, quite a few attempts at automating what would seem to be a not-that-difficult task. That's both a dramatic oversimplification and an understatement. The number of different build systems out there is crazy and confusing to say the least. It shows quite dramatically one of the worst outcomes of Open Source - NIH syndrome. That is to say, an awful lot of developers have decided over the years that nothing worked quite the way they wanted and so went through the process of creating their own solution, with the outcome that there was now yet another build system to learn which may or may not be actively maintained and which may or may not fit someone else's use case.

A brief listing

  • make
  • Plan 9 mk
  • cmake
  • smake
  • imake
  • qmake
  • redo
  • Scons
  • Waf
  • Ninja
  • Meson
  • Apache Ant
  • Kbuild
  • Autotools with Make
  • BitBake
  • Maven
  • MSBuild
  • Bazel
  • gyp
  • jam
  • mach
  • Random POSIX-ish shell scripts

That's actually a rather incomplete listing. There are plenty more, and if you package software for a Linux or other Unix-like system you will encounter plenty of one-offs and variants. To make matters worse, there are major differences between GNU make and BSD make, with only a small and badly defined subset of features that could be considered 'POSIX compatible', and when you get into Autotools you often have to deal with packages using truly ancient versions of the various scripts which have not been updated in years. I've seen this firsthand, with symptoms like a compiler's target triple not being recognized by Autotools as a valid target.

One thing that a lot of these build tools have in common is that, in an effort to make things simpler, they quite often break things such as cross compilation entirely. This is a huge pet peeve of mine. All it takes is one careless insertion of a hard-coded path and all of a sudden the compiler is trying to link against the libraries on the host system, which is a completely different architecture than the target. It's a simple mistake. It's a rather stupid mistake. It's also a ridiculously common mistake.

A case for plain Makefiles

I really want anyone who has only developed using Linux to take a look at the various BSD source trees. Take a nice long, hard look. Especially take a look at NetBSD. It builds on almost anything with a vaguely POSIX-compatible /bin/sh. Now, NetBSD wraps its Makefile-based build system in a script, but that script is only there as a nice user interface so that you can have things like command line flags to tell the Makefiles which of the many targets you want to build for. FreeBSD and OpenBSD build everything using plain Makefiles. It works. Every. Time.

Modern compilers have gotten pretty damn good. I can bootstrap the GNU toolchain as a cross compiler in about a half hour, and that toolchain knows where its sysroot is located. It already knows where to find the system headers and library files it's supposed to build against. I only run into issues with those particular concepts when someone's build system breaks things.

The majority of the problems that Autotools was designed to fix, and to a much larger extent Libtool, just don't exist anymore. Some of them quite frankly never did. Autotools was designed to address portability and compiling for a range of different systems with different interfaces and capabilities. Fair enough. But for a very large subset of those different interfaces and capabilities there exist alternate interfaces that could have been used which would make the code more portable without having to perform checks at compile time.

Now for something that is definitely going to be a controversial statement. Checking to see if libraries are installed on the host in a separate configuration step is a complete waste of time and processor cycles. As I said earlier, compilers have gotten pretty good. Another of the ways that they've gotten better is by providing good error messages. If a library doesn't exist you'll know when the linker fails. If the error messages aren't clear enough for you to figure out what the problem was, that's a bug. File the freaking bug report. Maybe even send a patch. Everyone's life will be improved.

Some people might want to make use of features in Glibc, if we're compiling for Linux, that don't exist on BSD. Well, just don't. It can no longer be assumed that a Linux system is even based on Glibc, as we have a number of distros built on Musl. You might take the attitude that it should be up to those guys to fix it since they're using something non-standard. Think again, please. Linux is entering the mobile arena bit by bit, and we want to make that happen and encourage it. One of the most successful projects in this space is postmarketOS, which is based on Alpine Linux, which is (drumroll) built against Musl. In other words, stick to POSIX. Keep it portable without ifdefs wherever possible.

Additionally, Alpine is one of the most commonly used distros for container images. Again, stick to POSIX.

Anyway, portability can quite often be achieved without a rat's nest of ifdefs, even when a project is written in C or C++. Trust the compiler to tell you what the problem is when you hit a snag. You might be surprised at how good the results can be.

Back to Rust

Probably the most compelling reason to use Rust is not actually memory safety. That statement is debatable, but I truly believe it myself. No, the most compelling reason to use Rust is the amazing tooling. Everything from the brilliant error messages to the included formatter to the linter to the integrated testing is awesome and included out of the box. It's also integrated into a nice comprehensive package using, you guessed it, Cargo. The fact that there is one, and only one, blessed way to compile Rust code is a feature. It's a great feature if you ask me, and one that I'd appreciate people not messing with. If a project is written in Rust, I know that I can pull the source and compile it with Cargo. There are very few exceptions. It would be better if there were zero exceptions.

I would submit that instead of trying to compile Rust code using Meson, your time would be better spent either integrating Cargo into Meson or adding whatever features you think that Cargo is missing. Cargo has a great plugin system. You don't even have to push your fixes upstream if that's too much hassle, just let your plugin live as a separate binary crate.
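For the unfamiliar, a Cargo plugin is nothing more than a binary named cargo-<something> somewhere on your PATH; running `cargo something` forwards to it. A minimal sketch of such a plugin (the name cargo-assets and its behavior are made up for illustration):

```rust
// main.rs of a hypothetical binary crate named `cargo-assets`.
// When invoked as `cargo assets ...`, argv is ["cargo-assets", "assets", ...],
// so we skip the first two entries to get the user's arguments.
use std::env;

// Build the status message from the forwarded args; split out as a
// function so the logic is easy to test.
fn describe(args: &[String]) -> String {
    format!("installing assets with args: {:?}", args)
}

fn main() {
    let args: Vec<String> = env::args().skip(2).collect();
    println!("{}", describe(&args));
}
```

Because the plugin is just an ordinary binary crate, it can live outside the main project entirely, which is why pushing fixes upstream is optional.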

Addressing some things Cargo lacks

One thing that the linked post above mentions as a weakness of Cargo is using more than one language in a project. Interestingly, he also gives the solution. Compile the Rust code as a static library and link it to the code written in other languages. I don't see a problem.
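That approach is simple to sketch. With crate-type = ["staticlib"] in Cargo.toml, Cargo emits a .a archive that any C toolchain can link; the only requirement on the Rust side is a C-compatible symbol. The function name mylib_add here is made up for illustration:

```rust
// lib.rs of a Rust crate built with crate-type = ["staticlib"].
// Cargo produces target/release/libmylib.a, which can be linked from
// C, C++, or anything else that speaks the C ABI.
#[no_mangle]
pub extern "C" fn mylib_add(a: i32, b: i32) -> i32 {
    a + b
}
```

On the C side this is just an extern declaration and a normal link step against the generated archive.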

How about installing assets? Well, first off those should be kept to a minimum in most cases anyway, but obviously not everyone feels that way. I've been using a pattern called xtask for a while to do this on a number of projects. Some of the cool things that I've gotten Cargo to do this way include:

  • Generating shell completions
  • Generating Unix man pages
  • Generating PNG icons from an SVG
  • Installing all of the above plus documentation into a DESTDIR for packaging
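For anyone unfamiliar with the pattern: xtask is just an ordinary binary crate in your workspace, wired up via a Cargo alias in .cargo/config.toml so that `cargo xtask <task>` runs it. A minimal sketch of the dispatcher (the task names here are hypothetical):

```rust
// xtask/src/main.rs: reached via an alias in .cargo/config.toml:
//   [alias]
//   xtask = "run --package xtask --"
use std::{env, process::ExitCode};

// Dispatch on the task name; returning a message rather than printing
// directly keeps the logic testable.
fn run(task: Option<&str>) -> Result<&'static str, String> {
    match task {
        Some("dist") => Ok("generating man pages, completions, and icons"),
        Some(other) => Err(format!("unknown task: {other}")),
        None => Err("usage: cargo xtask <task>".into()),
    }
}

fn main() -> ExitCode {
    match run(env::args().nth(1).as_deref()) {
        Ok(msg) => {
            println!("{msg}");
            ExitCode::SUCCESS
        }
        Err(e) => {
            eprintln!("{e}");
            ExitCode::FAILURE
        }
    }
}
```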

I've been massaging that process a bit lately and am now using my package-bootstrap crate going forward. This crate allows you to generate an additional bootstrap binary at compile time which takes care of the above tasks and can install everything either into your actual filesystem or into a DESTDIR for packaging. It does increase compilation time somewhat since you're building a second binary, but this is somewhat mitigated by the fact that the bootstrap binary and your actual binary can share code, unlike the xtask pattern. The crate also has feature flags to turn various bits of functionality on and off, so it doesn't have to compile the bits that aren't going to be used.


mod cli;

use {
    clap::{Arg, Command},
    package_bootstrap::Bootstrap,
    std::{error::Error, path::PathBuf},
};

fn main() -> Result<(), Box<dyn Error>> {
    // Parse the arguments to the bootstrap binary itself
    let matches = Command::new("bootstrap")
        .about("install the software")
        .author("Nathan Fisher")
        .arg(
            Arg::new("arch")
                .help("the architecture of the binary to be installed"),
        )
        .arg(
            Arg::new("output")
                .required(true)
                .help("the output directory for the installation"),
        )
        .get_matches();
    let outdir = matches.get_one::<String>("output").unwrap().to_string();
    let outdir = PathBuf::from(&outdir);
    let arch = matches.get_one::<String>("arch").map(|x| x.to_string());
    // Install the main binary plus its assets, then the man pages for
    // each subcommand
    Bootstrap::new("hpk", cli::cli(), &outdir).install(arch, 8)?;
    Bootstrap::new("hpk-init", cli::init(), &outdir).manpage(8)?;
    Bootstrap::new("hpk-create", cli::create(), &outdir).manpage(8)?;
    Bootstrap::new("hpk-install", cli::install(), &outdir).manpage(8)?;
    Bootstrap::new("hpk-info", cli::info(), &outdir).manpage(8)?;
    Bootstrap::new("hpk-search", cli::search(), &outdir).manpage(8)?;
    Bootstrap::new("hpk-remove", cli::remove(), &outdir).manpage(8)?;
    Bootstrap::new("hpk-upgrade", cli::upgrade(), &outdir).manpage(8)?;
    Ok(())
}

This is an actual example from one of my current projects. The cli module contains the definitions for the clap::Command values that the actual binary uses, and that module is shared between the two binaries. Clap is already being compiled for the project binary, and that compiled code gets linked into the bootstrap binary without having to be compiled again. The system is aware of cross compilation and looks in the appropriate directory under target to find the binary to be installed.

One of the really awesome side effects of installing all of your assets via a custom compiled binary is that it leverages all of the cross platform abstractions that are already provided by Rust, rather than relying on what tools may or may not exist on the system. You don't need to check what flags install accepts because you're copying the files using Rust code. You're not shelling out to an external tool to convert an SVG to a PNG because the bootstrap tool itself can read SVG and write PNG. In other words, your build is not going to fail in unexpected ways if you didn't account for some difference in the available tooling for every supported system. This comes at the cost of having to write a little more Rust code (which you obviously know how to do if you're writing a project in Rust) and slightly longer compile times. I think the benefits dramatically outweigh the drawbacks.

Interestingly, this is the approach taken and promoted by the Zig language, which includes a full module in its std library dedicated to building Zig and C code. Every Zig project has a build.zig file which is compiled first, then run as the build runner for that program. Zig takes the stance that the build system should be written in the same language as the program which is being built. I think the idea has a lot of merit.

I'll admit that what I've described could also fall under the NIH moniker. I'm sure that there are other possible solutions and I welcome feedback and especially collaboration. One of the more compelling solutions is to just use Cargo for what it was intended, that is to say compiling Rust code, and then wrapping that in your preferred build/install runner to actually install the binary along with any other artifacts. Such a system is not, at least in my opinion and experience, all that difficult to understand or maintain. This is directly in opposition to what Slomo has to say on the matter in the post I linked above, and I think it's just safe to say we disagree. He's obviously comfortable with Meson, and that's fine.

On Gnome specifically

I have some experience with targeting Gnome as a platform using Rust. One example is Gfret, the small application that I wrote as a way to learn Rust; I'm just about to release the results of porting it from a generic Gtk4 program to Gtk4 + LibAdwaita with the goal of integrating into Gnome. Another example would be my Gemini browser, Eva. I've been rethinking both of these programs over the past few months after having a great experience with my little text editor in Vala.

Something that is provided by the Gnome stack that Cargo might not handle out of the box is GResources. For those who don't know, I'm referring to a virtual filesystem that gets compiled into a Gnome or Gtk+ based application and contains resource files such as UI definitions in XML format, icons, and splash screen images. The gio package contains a program called glib-compile-resources which will compile a list of files into a binary object for you, which then gets linked into the application. I don't personally have a need for such data, as I just include my UI definitions as strings using Rust's include_str!() macro, only use commonly available system icons, and am not producing applications which might benefit from a splash screen due to taking a long time to load. That said, there is literally nothing stopping you from shelling out to glib-compile-resources from a build.rs and linking the result into your code.
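Such a build.rs shell-out is only a few lines. This is a sketch, not a drop-in implementation: the resource paths are made up, and it assumes glib-compile-resources is available on the PATH:

```rust
// build.rs (sketch): compile a GResource bundle into OUT_DIR at build
// time. The manifest path resources/resources.xml is illustrative.
use std::{env, process::Command};

// Assemble the argument list separately so it can be inspected/tested.
fn gresource_args(out_dir: &str) -> Vec<String> {
    vec![
        "--target".into(),
        format!("{out_dir}/app.gresource"),
        "resources/resources.xml".into(),
    ]
}

fn main() {
    let out_dir = env::var("OUT_DIR").unwrap();
    let status = Command::new("glib-compile-resources")
        .args(gresource_args(&out_dir))
        .status()
        .expect("failed to run glib-compile-resources");
    assert!(status.success(), "glib-compile-resources failed");
    // Rebuild the bundle whenever the manifest changes.
    println!("cargo:rerun-if-changed=resources/resources.xml");
}
```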

Similarly, if your application uses GSettings then you could do something akin to what I described above with my package-bootstrap crate and shell out to glib-compile-schemas as a later step in your installation.

Some specifics relating to Slomo's article

Slomo mentioned some specific deficiencies in Cargo and the fact that they do in fact cause issues with build.rs scripts in numerous crates.

  • Checking which version of the Rust compiler is used and based on that enabling/disabling various features

I'm not going to deny this is a problem, but crate authors are making it into more of a problem than it should be. Like Cargo itself, there is an official, blessed way of getting and installing a Rust toolchain - Rustup. Use it. There's literally no good reason to use an out of date toolchain. It's bad practice. I would submit that it follows that it is bad practice to support building crates with severely out of date toolchains, particularly if it requires a lot of feature checks at compile time to enable or disable features. What are we, savages? We may as well be abusing the C preprocessor. I'm sorry, but this one is an easy fix if the wider Rust community will collectively stop trying to appease people who aren't following best practices as defined by the Rust project itself.

I know a lot of people don't want to use Rustup. Don't cater to those people. Everyone will benefit. If your objection to Rustup is along the lines of not wanting to pipe curl into a shell, then you can A) download the installer, read it, and decide whether or not you want to install it, or B) download the Rustup source and compile it yourself. I'm not dismissing those types of concerns; I'm simply stating that people are using them as an excuse to criticize the Rust project on the whole rather than taking the totally obvious steps that I've described in A and B.

  • Checking features or versions of the underlying platform or operating system

I already addressed this above, so I won't rehash much. Use the abstractions that Rust provides, find other ways to make your code portable, and trust your compiler to give great error messages. There is good portable code and bad portable code. We've all seen the bad.

  • Checking for existence of native, external libraries or even building them

If the person compiling your crate can't be bothered to read a README or INSTALL file and install dependencies then that is, frankly, their own fault. If you want your crate to support compiling a third party library, by all means do that, but don't check for it during compilation. Rather, stick it behind a feature flag and document it using the fantastic Markdown documentation support built into Rust.
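As a sketch of what that looks like in practice (the crate and feature names below are made up), the optional third-party build lives behind a feature in Cargo.toml and is off by default:

```toml
[features]
# Build and statically link a bundled copy of the C library instead of
# expecting it to be installed on the system. Opt-in, documented, and
# off by default.
vendored = ["dep:some-sys-crate"]

[dependencies]
some-sys-crate = { version = "1", optional = true }
```

Anyone who wants the bundled build passes --features vendored; everyone else reads the README and installs the dependency.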

At any rate, while it is valid to criticize the overuse of build.rs files, it's basically symptomatic of the same disease among developers which led to rampant preprocessor flagellation back in the heyday of C and C++. We like being clever, when we should instead shoot for robust and simple. This is not a problem that exists because Cargo lets us use a build.rs file, but rather a problem that exists because people see such a feature and think, "my God, imagine what I can do with that!" Don't let people's abuse of a feature lead to the mistake of declaring that feature to be bad, particularly if there are valid uses for that feature.

Together with all this, some thought will also have to be put into how to locate such Rust dependencies, similar to how pkg-config allows to locate shared libraries. For example, Linux distributions will want to package such dependencies and make sure that a project built for such a distribution is making use of the packaged dependencies instead of using any other version, or worse downloading some version from the Internet at build time.

This is once again the sort of thinking that is clearly held over from the era of C and C++. In particular the idea of managing dependencies similarly to how pkg-config works. Rust builds using Cargo are generally reproducible so long as Cargo.lock is checked into version control. The majority of Rust developers likely would not support Linux distributions attempting to make their code work with arbitrary versions of their dependent crates, as this would quickly turn into a nightmare situation. In particular, a distribution which is focused on stability (I'm looking at you, Debian) would likely package versions of crates which are ancient in development terms, leading to developers getting spurious bug reports for bugs which they would never have encountered or be able to reproduce in their normal workflow because their code was never intended to compile against those library versions. This is exactly one of the major issues that something such as Cargo is designed to eliminate.

Even in the C world, it is exceedingly common to vendor and pin dependencies in larger projects. Take a look at the Chromium or Firefox source trees and you'll see dozens of vendored third party libraries in-tree. That's a fact of life, unfortunately. That said, in the C world those deps are generally managed in an ad-hoc manner, whenever someone actually gets around to attempting to update, or maybe never. Cargo at least provides some tooling around this process, and responsible crate authors will generally keep their dependencies relatively up to date.

At any rate, until the ecosystem has matured even more, I seriously doubt that you're going to be seeing a libdiesel.so or libdiesel.dll anywhere. Linux packagers should get on board or just skip packaging Rust projects.

Summing up

Cargo is the canonical and best way to manage and build Rust code. Attempts to introduce another build system are quite likely to fall flat with the majority of Rust enthusiasts, and at worst will likely be met with outright hostility. There are valid reasons for this. The alternative to having every Rust project build its Rust code using Cargo is to invite the sort of incomprehensible, impossible-to-keep-up-with mess that one sees in other ecosystems. Please, just don't.