
Git Commit Signing

I recently stumbled upon an article about git commit signing that I take issue with. The article discusses commit signing's security properties from the perspective of the developer, but there's another perspective that must be considered: the user's. The author's primary mistake is assuming commit signing has anything to do with protecting the developer of a project, which leads the article to approach security from the developer's threat model. That framing ignores the fact that a project's users are the people who are fucked if the developer fails to secure a project. The developer's threat model is irrelevant - if a user wants to protect themselves against compromised infrastructure, the developer must accommodate them. By assuming their own threat model, a developer undermines the security of any user who doesn't share it, even on trivial or non-security-critical codebases.

Let's say a developer releases some simple project. It only handles trusted inputs, doesn't expose any public-facing service, doesn't create significant attack surface, and isn't security-critical itself. There's no way it could possibly be exploited, right? Now suppose the project is used by someone who needs to protect themselves against infrastructure attacks. What could go wrong?

Well, if infrastructure attacks are in the user's threat model, failing to provide some sort of cryptographic integrity proof allows an attacker to modify data sent to the user, effectively turning the codebase into malware. It doesn't matter how secure the project's source is, because the delivery mechanism is compromised.
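
For anyone wondering what a "cryptographic integrity proof" looks like in practice, the lowest-effort option is a signed release tag. Here's a rough sketch of the developer side, assuming GPG is already set up (the tag name and version are made up for illustration):

    # Create a GPG-signed tag for the release (tag name is an example)
    git tag -s v1.0.0 -m "Release v1.0.0"

    # Sanity-check the signature locally before publishing
    git tag -v v1.0.0

    # Push the signed tag so users can verify it
    git push origin v1.0.0

Even if an attacker controls the hosting infrastructure, they can't forge that signature without the developer's private key.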

This all comes down to one basic principle: the developer can't assume to know their users' threat models. What ends up happening is that users have to either avoid software that fails to provide sufficient cryptographic proof of integrity, or manually verify integrity through some other means. It's a massive pain in the ass to make sure your system is only running legitimate code when infrastructure compromises are within the threat model, and nobody realistically has time to read the source of everything they install - especially considering how easy backdoors are to hide. If you don't believe me, just see underhanded-c.org (warning: that site doesn't support HTTPS). By failing to provide cryptographic integrity proofs, developers force users to waste a ton of time, when all the developer needs to do to solve the issue on their end is enable a single setting and publish their public key. I've avoided using (and recommending) projects in the past because of this issue.
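
For reference, that "single setting" amounts to a couple of one-time git config lines. A minimal sketch, assuming a GPG key already exists - the key ID below is a placeholder:

    # Tell git which key to sign with (key ID is a placeholder)
    git config --global user.signingkey DEADBEEFDEADBEEF

    # Sign every commit automatically from now on
    git config --global commit.gpgsign true

    # Export the public key so users can verify it (publish this somewhere users trust)
    gpg --armor --export DEADBEEFDEADBEEF > pubkey.asc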

If a developer fails to sign their source, there's no excuse that will make me take them seriously. I don't give a rat's ass what a project's developer thinks is secure enough. The developer isn't the person who gets malware if they fail to meet security standards. If you're not allowing users to verify the integrity of your source in some way (not just precompiled builds, since many users will want to self-compile with additional mitigations for added security), a single infrastructure compromise will turn your amazingly well-designed open-source project into a self-compiled trojan.
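
The user's side of that verification is just as cheap once the developer's public key is published. A sketch, reusing the placeholder key file and tag name from the earlier examples:

    # Import the developer's published public key (filename is an example)
    gpg --import pubkey.asc

    # Verify a signed release tag, or an individual commit
    git tag -v v1.0.0
    git verify-commit HEAD

    # Or inspect signatures across the whole history
    git log --show-signature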

For most of this article I've been assuming compromised infrastructure is a risk reserved for high threat models, but that's not the case either. We've known about the risk of infrastructure compromises for decades - there's a reason Linux package managers verify package signatures before installation. Why would developers not do the same for their source releases?
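
The same idea extends to source tarballs: a detached signature published alongside the archive is the mechanism most package managers build on. A sketch, with made-up filenames:

    # Developer: produce an ASCII-armored detached signature (filenames are examples)
    gpg --detach-sign --armor project-1.0.tar.gz      # writes project-1.0.tar.gz.asc

    # User: verify the archive against the signature and the developer's public key
    gpg --verify project-1.0.tar.gz.asc project-1.0.tar.gz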

TL;DR stop undermining my security, you lazy fucks.