WASM multi-threading ability #476

@kvark

This issue is closely related to #354 but approached from the WASM angle.
Edit: this is an investigation by someone who doesn't have a lot of JS/WASM experience; please take it with a grain of salt and provide your corrections.

Problem statement

In JS, in order to pass a serializable object from one worker to another, the sender calls postMessage() and the receiver handles the resulting message event (e.g. via an onmessage handler). The message is added to the receiving worker's queue and processed only after that worker finishes its current task as well as all other queued messages.
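As a point of reference, here is a minimal sketch of that JS model (the file names and message contents are illustrative):

// main.js -- the sending side
const worker = new Worker("receiver.js");
worker.postMessage({ kind: "resourceReady", size: 1024 }); // queued on the receiving worker

// receiver.js -- the receiving side
onmessage = (event) => {
  // Runs only after the receiver finishes its current task and earlier queued messages.
  console.log("received", event.data.kind, event.data.size);
};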

This model may be sufficient for (and is natural to) JS applications. In programs compiled to WASM for the Web, however, threads share linear memory: once a value exists on one thread, any other thread can access it. This implicit sharing is the model supported by Vulkan, D3D12, and Metal, and is generally what our future users coming from native development would expect.

The problem is that there is no place or hook to insert the message-passing JS glue in this case.

Use cases

One of the use cases would be a "streaming" thread that loads level resources and creates WebGPU objects from them (buffers, textures, individual mipmap levels, etc.), which are then used by the rendering thread as soon as they become available.

Another, more general example is having multiple threads process some sort of render graph and build different display lists: one for shadow rendering, one for the main screen, and so on, with room to construct render bundles for any of them.

Solution proposals

Asynchronous API

One option is to force the users to be aware of the JS worker event loops and do all the asynchronous message passing the same way it is done in JS. This is the least convenient option and may require an architectural redesign of the client software.

Synchronous receive

If there were a way to receive a message synchronously (without waiting for the end of the current stack frame), we could have some sort of synchronous native API that handles the transition, e.g.:

// on the producer thread:
auto buffer = device.createBuffer(...);
auto sharedBuffer = wgpuShare(buffer, SOME_THREAD_ID); // the JS glue would `postMessage` here
// on another thread identified by SOME_THREAD_ID:
auto buffer = wgpuAccess(sharedBuffer); // the JS glue would use some way of synchronous message receiving

Shared identifier tables

The idea is to essentially represent WebGPU objects as "IDs" that are just numbers and therefore can be copied around and/or used on different threads. In order to actually access an object, the glue code would then have to access some sort of a shared (between threads/workers) table, using that index.

One approach to implement this table would be using SharedArrayBuffer, since this object is already sharable in JS. The glue code would then:

  • on creation, put the object into the shared table, return the index
  • on every access, resolve the object using the shared table

From the user's perspective, all the objects become instantly available on other threads. This comes at the cost of resolving the index on each and every object access from WASM.
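A rough sketch of the glue-side ID indirection follows; the helper names (registerObject, resolveObject) are hypothetical, and it only shows the lookup itself, not how the table would actually be shared between workers (e.g. via SharedArrayBuffer), which is the open part:

// Hypothetical glue-side table; names are illustrative, not an actual API.
const objectTable = new Map(); // numeric ID -> WebGPU object
let nextId = 1;

// Called by the glue when WASM creates an object (e.g. a GPUBuffer):
// store it in the table and hand a plain numeric ID back to WASM.
function registerObject(obj) {
  const id = nextId++;
  objectTable.set(id, obj);
  return id; // a plain number, freely copyable between threads on the WASM side
}

// Called by the glue on every WASM-side access: resolve the ID back to the object.
function resolveObject(id) {
  const obj = objectTable.get(id);
  if (obj === undefined) throw new Error("unknown WebGPU object id " + id);
  return obj;
}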

WASM Tables

There is an emerging WASM construct, Table, that could make the shared-table approach more efficient: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WebAssembly/Table

It's currently limited to references only, but the WASM WG would be open to expanding it to generic objects. If we decide to go down this path, I'm told the WASM WG can prioritize the development of Table to suit our needs better.

The benefit of using Table over SharedArrayBuffer is fewer round-trips to JS land, since Table is going to be natively supported by WASM.
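For reference, the JS API already exposes Table; a minimal sketch of storing an opaque object in one, assuming reference-typed (externref) tables are available in the engine (they were not universally shipped when this was written):

// A table of opaque host references (externref), sized for 16 entries.
const table = new WebAssembly.Table({ element: "externref", initial: 16 });
const obj = { note: "stands in for a GPUBuffer or other WebGPU object" };
table.set(0, obj);         // store the reference at index 0
const same = table.get(0); // resolves back to the same reference
// WASM code could index such a table directly, avoiding a JS round-trip per access.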
