Web-based cryptography is always snake oil
Nowadays, there is an epidemic of web applications purporting to offer “end-to-end” encryption. Examples include a file upload service which lets you upload and share files of arbitrary size and promises “end-to-end encryption”; a web-based password safe service which claims it can't see your passwords because they're encrypted; or a web-based cryptocurrency wallet.
The cryptographic claims made by these services are invariably nonsense. Indeed, they must necessarily be, because the web as a platform lacks the functionality which would allow otherwise.
Fundamentally, all web-based cryptosystems are incoherent because they suffer from an incoherent threat model.
Let me start by coining a law, which is both obvious and yet, to my knowledge, novel and overdue:
- A cryptosystem is incoherent if its implementation is distributed by the same entity which it purports to secure against.
It is inherent to the model of the web platform that the code implementing a client-side web application is distributed by the website itself; in other words, the client-side code is always distributed by the operator of the web server.
In other words, web-based “E2E” applications claim to secure against malice on the part of the server operator using encryption implemented in client-side JavaScript. This claim is obviously false: if the server operator were malicious, they could simply push different client-side JavaScript. (Conversely, entities other than the server operator are already secured against via TLS, so there is no additional benefit to “E2E” if you trust the server operator.)
The web platform does not contain any functionality which could be used to sever this relationship (e.g., to distrust the server operator for the purposes of deciding what client-side code can execute for an origin), so this problem is intrinsic to any attempt to implement “E2E” encryption in a web application. There are no exceptions.
It is worth noting that this law also applies to non-web applications where the service provider supposedly being secured against is also the distributor of the client software; thus, the “end-to-end encryption” offered by WhatsApp and Signal, amongst other proprietary services, is equally bogus. (Both WhatsApp and Signal ban the use of third-party clients, and enforce this policy.)
By any normal definition of the term, a cryptosystem is backdoored if its vendor retains the ability to bypass it after its deployment. In this regard, all web-based “E2E” cryptography can be regarded as backdoored, as can WhatsApp, Signal, etc.
A cryptosystem is incoherent if its implementation is distributed by the same entity which it purports to secure against.
Cryptography theatre: Snake oil cryptography as a legal technology
This, of course, raises the question: why is this snake oil crypto so popular? Why have companies like Meta spent substantial amounts of money equipping messenger systems like Whatsapp with snake oil crypto? The actions of these companies can't be understood from the perspective of offering actual security, because their actions aren't congruent with this objective and their threat model is incoherent.
The real motive for the popularity of “E2E” amongst tech companies turns out to be fairly obvious: the purpose of adopting snake oil “E2E” is not to deliver actual security, but to act as a cunning legal manoeuvre to exempt themselves from the usual legal obligation to honour warrants, subpoenas and other court orders. By nominally adopting “end-to-end encryption”, these companies can theatrically throw up their hands when the government comes to them with a warrant: “Well, we'd love to help you, but there's nothing we can do!”
After all, warrant processing is simply an annoying cost centre and liability for these companies. By simply casting the magic spell of snake-oil cryptography over its product, a company can exempt itself from this entire category of legal obligation, reducing both its costs and its potential legal liabilities.
The problem is, of course, that “there's nothing we can do” isn't true. The service provider could develop and ship a backdoored version of the client software. The actual gambit the service provider is counting on here is a particular legal theory: “we could change the software so as to be able to process this warrant, but you can't make us do so”.
You could perhaps call this cryptography theatre. The purpose of cryptography theatre is not to deliver actual security from a cryptographic perspective, but to act as a kind of magic spell which (the user believes) gives them a magic opt-out from the obligations conferred by warrants and subpoenas. Thus, cryptography theatre must fundamentally be understood not as a cryptographic technology but as a legal one.
There are several problems with this legal “magic spell”. In particular, this legal theory (“you can't make us compromise our system”) seems to be relatively specific to US case law and the typical company offering snake-oil cryptography is based in the US. There are some points of constitutional law which seem to act in favour of this argument, like the fact that the First Amendment is supposed to preclude compelled speech just as much as the prohibition of speech, and there is case law that code is a form of speech. These sorts of protections are unlikely to be effective in other countries.
Moreover, the US government clearly does not agree with this interpretation, as it has repeatedly argued that it can in fact force an entity to compromise a cryptosystem it distributes. There are at least two such cases:
The Lavabit incident: When Edward Snowden leaked the Snowden documents, he used a Lavabit email address. Lavabit was an email service boasting web-based (and therefore snake-oil) encryption functionality. The FBI attempted to coerce the owner of Lavabit to compromise the client-side code to allow decryption of Edward Snowden's messages. The owner, rather honourably, chose to shut down the entire service rather than comply (which is to say that they had the ability to comply).
I don't know what legal options the owner of Lavabit explored. It is possible they took legal advice and concluded they had an obligation to comply, but also possible they simply decided they didn't have the budget or appetite for complex and extended litigation against the US government. I suspect the latter.
The FBI—Apple case: In which the FBI tried to use US law to coerce Apple to make a version of the iOS firmware which was compromised so as to enable them to decrypt a suspect's iPhone. The FBI eventually withdrew before a decision was made, after gaining access to the iPhone in question another way. It is fairly clear they don't intend to let this matter settle, however.
(It is worth noting that Apple's defence was fundamentally an argument of “we don't want to and you can't make us”, not that they lacked the ability to comply. Since the security controls the FBI was complaining about in this case are ultimately enforced by Apple-signed proprietary firmware, Apple could create firmware defeating these controls if it wanted to, which is what the FBI was seeking to force it to do. In this regard, iOS can also be regarded as backdoored, as the vendor retains the ability to compromise the system.)
In short, the problem with relying on this sort of thing is that it can be unmade by a new legal ruling at any time, which is a much weaker standard than we are generally seeking to obtain when adopting cryptography. Again, cryptography theatre must fundamentally be understood as a legal technology, not a cryptographic one.
The final reason why relying on this legal “magic spell” is not safe is that the assumption that governments are meaningfully bound by law is demonstrably false. Governments routinely coerce companies into helping with their surveillance initiatives in ways that subsequently turn out to be illegal. This is not even speculative in light of past programs like PRISM, which targeted companies such as Facebook (now Meta); thus it's basically a given that these companies remain compromised and retain a working relationship with governmental intelligence services.
Reiterating the above, as a dialogue
Alice: I'd sure like to talk to Bob sometime. If only there were some kind of communications system that allowed me to do that with him on the other side of the world...
Eve: Hey there!
Alice: Oh, hello.
Eve: Our state of the art EveMessenger instant messaging system will let you talk to Bob in no time.
Alice: Oh, wonderful! Let me just send him a message...
Eve: Whoa, wait!
Alice: What?
Eve: Aren't you worried I might, you know, eavesdrop on your message to Bob?
Alice: Why, would you? Can't I trust you?
Eve: Oh, but of course you can. In fact, we're incredibly trustworthy. You won't believe how trustworthy we are.
Eve: But, just to be sure, take this software. It'll encrypt your communications with Bob so that even we can't see them.
Alice: ...Oh, neat. Thanks!
Alice: ...But hold on... you supplied this software.
Eve: Of course.
Alice: So how does it prevent you from seeing my communications?
Eve: It encrypts everything you send before it reaches us. We can't see a thing!
Alice: But this software auto-updates, right?
Eve: Right.
Alice: So you could update it at any time.
Eve: Right.
Alice: So if you ever wanted to spy on my conversation, what's to stop you from just pushing an update to undermine the encryption?
Eve: Ah... well... you know, that's just paranoid. Why would we ever do that?
Alice: In other words, it can't secure my communications against you in case you turn out to be untrustworthy.
Eve: Well... yes...
Alice: So what exactly is it supposed to be securing against?
Eve: OK, you have to trust us, but what about other people? There's all sorts of people trying to eavesdrop on things. So you have to trust us, good old Eve, but nobody else, at least!
Alice: Hold on a minute.
Alice: Even without this special software, the channel between my computer and your messaging system is securely encrypted, right?
Eve: Yes, that's true...
Alice: And the same is true of Bob's channel between his computer and your messaging system, right?
Eve: Yes...
Alice: So even without this special software, nobody else would be able to eavesdrop on my messages. So if this special software doesn't prevent any third party from eavesdropping on my messages who wasn't already prevented from doing so, and it doesn't prevent you, the service provider, from eavesdropping on my messages, who does it prevent from eavesdropping on my messages?
Eve: Uh...
Eve: ...
Eve: ...
Eve: ...
Eve: OK, look. You got us. Our “end-to-end” encrypted messaging system makes no sense from a technical perspective.
Alice: I'm amazed you're admitting this, but points for honesty.
Eve: We didn't actually adopt this thing to secure things from a technical perspective. Like you say, it makes no sense. Or a cryptographer would say, the threat model is incoherent. It makes no sense for a cryptosystem to be designed to secure against the same entity who provides the software implementing it. Actually, we had a different motivation for implementing it like this...
Alice: What motivation was that?
Eve: We don't actually want to eavesdrop on you — just assume for the time being that that's true. But there are other entities which might — governments, for example, who come to our door with a warrant.
Eve: Actually, to tell you the truth, it's not even really about not wanting to eavesdrop on you. We don't actually care too much about you — who are you anyway? — but frankly, processing all those warrants was turning into a major cost centre. We came to the conclusion that being able to eavesdrop on our users was more of a liability than an asset.
Eve: But with this bizarro encryption system, it's great. When they come knocking with a warrant, we just say “we can't, it's encrypted” and they go away!
Alice: In other words, you made this “end-to-end encrypted” system not to secure against the possibility of you, the service provider, being malicious, but as a sort of legal loophole to exempt yourself from dealing with warrants.
Eve: Pretty much.
Alice: Somehow I find your blatant opportunism almost charming...
Alice: But what happens if the government just demands you ship a version of the software that compromises this encryption scheme? And puts a gag order on you so you can't tell anyone?
Eve: Don't worry, they can't do that.
Alice: Can't they? Why not?
Eve: Well, we think there's enough technicalities in the law that they'd have great difficulty doing that. Probably.
Alice: Didn't the FBI recently bring a case against Apple arguing that Apple should be forced to make a special firmware update that undermines the security of their own phones so that the FBI could break into one of them?
Eve: Yes, they did...
Alice: The FBI withdrew that case, but they seemed quite sincere in their litigation. All it would take is one piece of case law and the government could come demanding you do the same thing.
Eve: Well... maybe...
Alice: Nor is this the first time the government has rejected this premise. Lavabit was a web-based email service which used the exact same model as you. Emails were encrypted using JavaScript in the web browser, so that Lavabit supposedly couldn't access them. But of course, that JavaScript was served from Lavabit's own website.
Alice: One famous user of Lavabit was Edward Snowden. After he fled the US, the FBI went after Lavabit. After the owner of Lavabit explained how everything was encrypted, the FBI started to demand he modify his website to compromise the encryption. In other words, the US government was totally happy to demand Lavabit ship compromised encryption software. Eventually, the owner of Lavabit chose to shut down the entire service rather than compromise it, which is to his credit, but this leaves aside the fact that he could have complied, and that the US government has repeatedly tried to force people to do this kind of thing, in both the Apple and Lavabit cases.
Eve: But we're a US company, you know! And there's case law that code is speech, and that compelled speech is protected against by the First Amendment just as much as restrictions of speech are. Forcing us to ship compromised software so as to implement a government warrant would violate our free speech rights.
Alice: That may well be true, and I wish you luck with your legal arguments. But we're now arguing something completely different, aren't we? We started out discussing a cryptosystem — a technical measure — and now we're talking about law. In other words, you're no longer making any cryptographic claims of security at all.
Alice: It seems to me that what's really going on here is that you're using legal tricks to exempt yourself from processing warrants, based on an extremely finely calibrated understanding of US law and ways around it. The involvement of cryptography is just a necessary component to that end. But that understanding of US law is subject to change, and could be blown open by a single ruling. If that happened, there would be nothing technical preventing you from eavesdropping on everybody, because we already agreed your system provides no coherent model of cryptographic security. The cryptography in this picture is just a necessary technicality in your legal and public relations battle with the US government over surveillance.
Eve: ...That is a pretty accurate summary of what we really get out of this encryption system, yes.
Alice: End-to-end encrypted, you said?
Could web-based cryptography be fixed?
This is an interesting question. One idea I have sometimes considered is whether subresource integrity (SRI) hashes could be applied to web service workers:
- Service workers are a feature of the web platform which allows a piece of persistent client-side JavaScript to potentially intercept and handle all requests to an origin.
- Subresource integrity (SRI) allows a resource incorporated in a web page to have a cryptographic hash specified for it, such that it will only load if that hash matches.
Since service workers are persistent, the idea is that if you can pin a service worker script file as having a certain hash, it essentially becomes a root of trust for an origin. Unfortunately, service workers currently don't support SRI; I've proposed this feature here.
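To make the idea concrete, a registration under this scheme might look something like the following. Note that this is purely hypothetical: the `integrity` option shown here does not exist in any browser today; it is precisely the feature being proposed.

```js
// HYPOTHETICAL SKETCH: the `integrity` option for service worker
// registration does not currently exist in any browser; it illustrates
// the proposed feature. Under this scheme, the browser would refuse to
// install (or later silently update) the service worker unless its
// script matched the given hash, making the hash an effective root of
// trust for the origin.
navigator.serviceWorker.register('/sw.js', {
  scope: '/',
  integrity: 'sha384-<base64-encoded SHA-384 hash of sw.js>'
});
```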
What this gains you, in theory, is the ability to permanently bind your website to a root of trust which could be entirely separate from the web server operator (and, for example, in a different jurisdiction).
However, there are still a lot of issues with this approach:
- This is a TOFU (trust on first use) security model, since it only takes effect once a user has visited a website for the first time. Thus the model can still be compromised if the website is already compromised at the user's first visit — or if they clear their cache, etc.
- It is not obvious to a user how to verify the cryptographic root of trust. They would have to obtain the SRI hash of the registered service worker and compare it to some known good value, which is rather awkward.
- It relies on the assumption that all requests to an origin pass through the service worker without exception. This seems doubtful, though I haven't looked into it in detail.
- In order to work properly, it necessarily also gives the website operator the power to accidentally brick their website in perpetuity. It thus suffers from the same issues as HTTP Public Key Pinning (HPKP), which was supported by browsers for a time but eventually removed after it was found to be impractical.
The core of the issue here is that the web is fundamentally designed around the premise that client-side code is distributed by the communications service it relates to; the two are coupled together. This model is extremely powerful in terms of ease of deployment and portability, and has enabled incredible things; yet at the same time, it makes the implementation of meaningful client-side cryptosystems on the web platform impossible. Because this model is so deeply ingrained in the nature of the web, it seems like something that can't easily be fixed. It would require a substantial paradigm shift in the web's idea of coupling these two things, because a secure cryptosystem inherently requires that they not be coupled.
A cryptosystem is incoherent if its implementation is distributed by the same entity which it purports to secure against.