Require [SecureContext] for using WebGPU. #1363
Conversation
I didn't put it everywhere, just enough to prevent using it outside a SecureContext (~https).
Thanks! @annevk See Jeff's comment above. Should this go on all the interfaces?
Yeah, it should go on all interfaces. Otherwise the interfaces will be exposed without a secure context (e.g.,
Can you describe the rationale for exposing WebGPU only in secure contexts? It doesn't seem obvious why we need this.
We generally don't expose new web APIs outside of SecureContext anymore. My understanding is that this prevents usage over http (requiring https), which both guards against man-in-the-middle abuse and is one of the levers to force people towards https (though the latter is pretty well along). There are a bunch of explainer diagrams here:
I can put it on all interfaces, but the child interfaces are not useful without access to the parts I've put behind SecureContext already. Consider this a proof-of-concept.
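For illustration (not part of this PR), here is a minimal sketch of what the gating looks like from page script, under the assumption that `[SecureContext]` ends up on all of the interfaces: the entry point and the child interfaces are simply absent on insecure pages, so detection reduces to checking for the entry point.

```js
// Minimal feature-detection sketch (illustrative, not from the PR).
// Assumes a plain script running in a browser window.
if (window.isSecureContext && 'gpu' in navigator) {
  navigator.gpu.requestAdapter().then((adapter) => {
    console.log('WebGPU adapter:', adapter ? 'available' : 'none');
  });
} else {
  // On an http:// page (other than localhost), navigator.gpu is undefined
  // and typeof GPUAdapter === 'undefined', because the interfaces are not exposed.
  console.log('WebGPU is not exposed in this context.');
}
```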
We discussed this internally and agree that we should gate WebGPU behind
Quoting from @othermaciej:
So I think it has to be shown that WebGPU introduces significant new security or privacy risks, beyond high precision timers and fingerprinting surface.
Reducing fingerprinting surface to known actors seems like an improvement to me. (And Tor is a browser that might want to prompt for functionality like this, FWIW.)
WebGPU may also gain functionality that could result in prompting the user, for example to gain access to more detailed information about the hardware (to better take advantage of all its capabilities).
As a security reviewer for this API on the Chrome side of things, I also think that this should be restricted to secure contexts barring a solid argument for why it needs to be exposed to non-secure contexts. In particular:
Combined, for compatibility and security, it seems much simpler to err on the side of security and restrict this to secure contexts only. We (Chrome Security) are discussing updating our written policy to align more closely with Mozilla's. In particular, I think given the current state of the web it would be better to default to restricting to secure contexts and require arguments for why features should be exposed to non-secure contexts, rather than the other way around.
Thank you Chris! This is a really useful and clearly-written rationale.
@litherum so this is still a requirement for us. Do you feel strongly about not having it?
Let me consult with my team.
tl;dr: We are mildly against this proposal, but we won't block it.
Responses
This is the opposite of our philosophy. By default, features should be delivered to all of our users, on all webpages, unless there is a reason they can't or shouldn't be. The burden of proof is to show that a proposal should be
The word "risk" here is certainly interesting. Does WebGPU put the user at risk of having their computer explode? Almost certainly not. Are they at risk of their kernel panicking? If we've done our jobs right, no. Are they at risk of having their personal data being deleted, or losing integrity? Again, no. Are they at risk of fingerprinting, or losing confidentiality? Mildly. (I'm also on record for pursuing removing the device name, which I still haven't opened an issue for.)
They are at risk of running computation they didn't expect to run - computation which can't directly access any part of the page, or even any other web API. But any non-SecureContext page is already at risk of running computation the user didn't expect it to run. There is no computation you can describe in WebGPU that you can't also describe in Javascript. Similarly, WebGL doesn't require SecureContext either.
You're right - increased processing power does increase the incentives to use this processing power. On the other hand, imagine we come up with an optimization in our Javascript engine tomorrow which makes it an order of magnitude faster to run Javascript. Would we only enable such an optimization on SecureContext pages? Of course not. What if we wait many years for CPUs to become as fast as GPUs on data-parallel algorithms - would we forbid users on those machines from visiting any non-SecureContext pages? Of course not. We're much more interested in more sophisticated defenses that try to identify objectionable computation.
We can always revisit this if/when such functionality is added. If we do decide to go this route, any non-SecureContext page would just behave as if the prompt was rejected.
Conclusion
So, we find ourselves at a classic cost/benefit tradeoff. The cost of not requiring SecureContext is a somewhat increased chance of users running unexpected computation, coupled with a somewhat marginal increase in potential fingerprintability. The benefit of not requiring SecureContext is a somewhat likely increase in adoption and use, because it works in more places, coupled with the benefit of not causing headaches for a bunch of Javascript authors who would have to reconfigure some server they may not even have access to in order to use their new favorite feature. This is a case where we believe the benefits of not requiring SecureContext outweigh the costs. However, we do understand that neither side of this breakdown is super duper compelling - the arguments on both sides are fairly weak. So, our opposition is mild, and we won't block this proposal.
Thank you for writing this detailed analysis, Myles!
From this point of view, all of the APIs are secure and should not need the secure context, if I understand correctly, because everything is secure if we've done our jobs right - just like all the software in the world would work if it were written properly. But we should consider the risks of some bits here and there not being perfect. In this sense, WebGPU certainly poses more risk of all of the following: computer crashes, kernel panics, integrity loss, etc. We are dealing with sensitive parts of operating systems, drivers, and hardware here.
Fingerprinting based on real device characteristics, like execution scheduling and timing, still carries a lot of risk. Some of the risks are known; some are yet to be discovered.
Thank you for your willingness to compromise! Mozilla still requires it for new APIs, and Google agrees with this approach. Microsoft also expressed agreement in the WebCodecs discussion on a similar matter - w3c/webcodecs#350. Also, we can continue discussing this and relax the requirement after the MVP.
I updated the changes and enabled
I only just saw this, and I'm a bit surprised, as it creates a major web compatibility hurdle: if you write something like a 3D graphics library and need it to run everywhere (both secure and insecure contexts) - and library/middleware authors don't control where or how the end developer publishes - then you have to implement both WebGPU and WebGL for compatibility with insecure contexts, which is a lot more work. In fact this may backfire: to avoid the extra work, developers may simply support only WebGL, since it works everywhere, and hence adoption of WebGPU is slowed down. It will probably also significantly extend the lifetime of WebGL, since it will stick around as a compatibility option for insecure contexts, limiting the ability of WebGPU to completely replace WebGL. I think a better alternative is to support WebGPU on insecure contexts but remove or limit potentially sensitive features and information, such as hardware details. Has this been considered?
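For concreteness, a hedged sketch of the dual-path initialization such a library would need; the `initRenderer` name and return shape are illustrative assumptions, not something from this thread:

```js
// Prefer WebGPU where it is exposed; fall back to WebGL elsewhere,
// including on insecure contexts where navigator.gpu does not exist.
async function initRenderer(canvas) {
  if (navigator.gpu) {
    const adapter = await navigator.gpu.requestAdapter();
    if (adapter) {
      const device = await adapter.requestDevice();
      const context = canvas.getContext('webgpu');
      return { backend: 'webgpu', device, context };
    }
  }
  // WebGL works in both secure and insecure contexts.
  const gl = canvas.getContext('webgl2') || canvas.getContext('webgl');
  if (gl) {
    return { backend: 'webgl', gl };
  }
  throw new Error('Neither WebGPU nor WebGL is available.');
}
```

Maintaining both paths is exactly the extra cost described above: every feature has to be expressed twice (WGSL and GLSL, two resource models) until the WebGL path can be dropped.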
@AshleyScirra from Mozilla's point of view, the Web is moving forward with HTTPS - https://blog.mozilla.org/security/2015/04/30/deprecating-non-secure-http/
Which platforms aren't going to support WebGPU? I thought it was designed to cover Windows, macOS, Linux, iOS and Android, which is all the major platforms. I was hoping we could ultimately one day drop WebGL in favour of WebGPU. Otherwise we'll have to do things like maintain our shader library in both WGSL and GLSL forms (which also applies to third-party developers writing their own shaders for our engine), fix bugs specific to WebGL, and so on, indefinitely. For example, we originally supported both canvas2d and WebGL, which was painful; after several years we were able to drop canvas2d and rely solely on WebGL, which made development much easier and made new features more viable. It will be a shame if there's no prospect of doing that with WebGPU even years down the line, either due to missing platform support or decisions like not supporting insecure contexts, which as middleware developers we don't have control over and will end up having to support anyway. (I do agree that HTTPS is great and the right direction for the web; I'm just being pragmatic here.)
WebGPU exposes a richer API for talking to GPUs than WebGL. It requires compute shaders, for example, which OpenGL ES 3.0 devices do not support. On Android and Linux, we require Vulkan support, which cuts away half of the devices. On Windows, we require either Vulkan or DX12, which cuts away older devices that run WebGL perfectly fine. On macOS, we require something like 10.12. See #1069 for more info. Of course I'd also love to see WebGPU totally replace WebGL. But if I had to choose between a severely restricted WebGPU that replaces WebGL and a modern WebGPU that doesn't, I'd pick the latter. After all, we already have WebGL and it isn't going anywhere yet.
It doesn't look like the alternative of limiting WebGPU on insecure contexts has really been considered in much detail. Perhaps it would be straightforward to do in a way which satisfies the privacy/security concerns. I'm not sure it's fair to jump to the conclusion that it would be a severe restriction. It's us web developers who bear the brunt of API compatibility complications, not browser developers or spec authors. All I'm asking is: please think it through as an alternative. If you analyse the details and conclude it's inappropriate, I'd at least feel a bit better about dealing with the ensuing years of development headaches. What worries me, though, is seeing spec authors appear to ignore web compatibility as a concern, or just assume it will work out fine when it isn't clear that's the case. In particular I'm trying to avoid a world where, several years from now, WebGPU support is ubiquitous, but we're forced to support WebGL solely for compatibility with insecure contexts that many of our customers, quite possibly unwisely, still rely on. And then we may have a much harder time adding exciting new WebGPU-based features, as we still have to think about what happens in WebGL. That represents a drag on web development, and makes it harder to innovate, solely because of a decision here about requiring a secure context.
Right now we want to finalize the API and ship it. It's a priority for the next year.
Here is what developers need to change:
Seems not as hard as the WebGL->WebGPU migration, and
localhost is considered a secure context, so you don't need https for local development (unless accessing over a local network rather than using port forwarding for a mobile device).
I'm aware of all of this; it ought to be easy, and everyone should do it, etc. But when you reach Internet scale you end up with entire markets of people like hobbyists just learning the ropes, organisations that are too bureaucratic/clueless to move to HTTPS, and so on. If you work in an area where literally 100% of your customers are on HTTPS, fine. If you work in an area where 95% of customers are on HTTPS and 5% aren't, and that's still a significant absolute number or includes important customers, then you still have to think about it. I hope I'm proved wrong and we get to exactly 100% adoption of HTTPS for everyone on earth, even tinkerers. But unfortunately I'm sceptical.
WPT uses the filename `*.https.html` to determine whether to run the test over an HTTPS connection. This is now required because WebGPU requires [SecureContext]: gpuweb/gpuweb#1363
* Use .https.html files. WPT uses the filename `*.https.html` to determine whether to run the test over an HTTPS connection. This is now required because WebGPU requires [SecureContext]: gpuweb/gpuweb#1363
* Add tests for [SecureContext]
For documentation (I figure people might end up here if they run into this restriction): in Chromium you can bypass the SecureContext restriction for specific origins using chrome://flags/#unsafely-treat-insecure-origin-as-secure (or the --unsafely-treat-insecure-origin-as-secure command-line switch).
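For example, a small development-time helper (hypothetical, assembled only from the flags and origins mentioned in this thread) could explain why WebGPU appears to be missing when a page is served from a LAN address:

```js
// Hypothetical helper: explain why navigator.gpu is absent during development.
function explainMissingWebGPU() {
  if ('gpu' in navigator) return; // WebGPU is exposed; nothing to explain.
  if (!window.isSecureContext) {
    console.warn(
      'WebGPU requires a secure context. Serve over https:// or use ' +
      'http://localhost; in Chromium you can also add this origin to ' +
      'chrome://flags/#unsafely-treat-insecure-origin-as-secure.');
  } else {
    console.warn('This browser does not expose WebGPU.');
  }
}
```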
This also makes a few changes to match the upstream IDL more closely:
- Change [Exposed=Worker] to [Exposed=DedicatedWorker]. It probably didn't work in non-dedicated workers anyway, but no one should be relying on it because that's out-of-spec.
- Removed [RuntimeEnabled] from `interface mixin`s where they have no effect, as no non-RuntimeEnabled interfaces include them; see docs: https://chromium.googlesource.com/chromium/src/+/HEAD/third_party/blink/renderer/bindings/IDLExtendedAttributes.md#interface-mixins

Adding [SecureContext] is a non-breaking change (with no deprecation period) because WebGPU is currently only generally available behind an Origin Trial that is only available on secure contexts anyway. However, this will still result in "breakage" of non-HTTPS sites that currently require users to specify --enable-unsafe-webgpu, as well as development workflows that use a local URL other than `localhost`, e.g. on a LAN, as with all other APIs that require [SecureContext]. Developers can pass --unsafely-treat-insecure-origin-as-secure= or use chrome://flags/#unsafely-treat-insecure-origin-as-secure to bypass this.

Spec change: gpuweb/gpuweb#1363
Fixed: 1243994
Change-Id: I5e1d22dc8cb57ec0076654738e7307ca54784488
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/3247888
Auto-Submit: Kai Ninomiya <[email protected]>
Reviewed-by: Brandon Jones <[email protected]>
Commit-Queue: Kai Ninomiya <[email protected]>
Cr-Commit-Position: refs/heads/main@{#961264}