
Agenda for the June meeting of WebAssembly's Community Group

  • Host: Igalia, A Coruña, Spain
  • Dates: Wednesday-Thursday, June 12-13, 2019
  • Times:
    • Wednesday - 9:30am - 5:00pm
    • Thursday - 9:30am - 5:00pm
  • Video Meeting:
    • Using zoom.us, the same link as the regular CG meeting
    • contact [email protected] if you need a link
  • Location:
    • Bugallal Marchesi, 22, 1º
    • 15008 A Coruña
    • Galicia (Spain)
    • Tel.: +34 981 913991
  • Wifi: TBD
  • Dinner on Wednesday:
    • 6:00 PM - 8:00 PM: Pre-dinner bus tour of A Coruña, pick-up from the Igalia office (optional!)
    • 8:30 PM: Dinner sponsored by Igalia at Artabria
      • C/ Fernando Macías, 28 bajo

        15003 A Coruña

        (Google Maps)

  • Contact:
  • Code of conduct:

Registration

Registration form

Logistics

  • A Coruña airport (LCG) is connected to Madrid, Barcelona, Lisbon, and London Heathrow.

    • Taxi costs ~20€ to the center.
    • The airport-city center bus (Line 4051, Centro da cidade - Aeroporto) has its last stop at the bus station (in Spanish: Estación de autobuses).
  • Other airports that could be used to get to Coruña:

    • Santiago de Compostela (SCQ) is about 50km from A Coruña, still a good option if you have a direct flight from Amsterdam, Brussels, Dublin, Zurich, Frankfurt, Geneva, Milan, Paris or Rome. Trains from the Santiago train station to A Coruña take ~30 minutes.
    • Porto Airport (OPO) is well connected with more international airports, but it is 3h by car from A Coruña, or a longer bus/train trip. It could be a potential option if you have a direct flight from North America (NY, Toronto, Montreal) or Brazil (Rio / São Paulo).

Hotels

These are a few suggestions for accommodation including the distances to Igalia HQ and their average daily prices for a double room on the dates of the event:

  • Hotel Riazor (First class. 10 minutes by car, on the city beach, ~90€) - Web site
  • Hotel Avenida (Budget. 5-minute walk to Igalia, ~50€) - Web site
  • Hotel TRYP Coruña (Budget. 5 minutes by car, ~60€) - Web site
  • Hotel Meliá María Pita (First class. 10 minutes by car, on the city beach, ~100€) - Web site

Agenda items

Schedule constraints

None

Dates and locations of future meetings

Dates | Location | Host
------|----------|-----
TBD   | TBD      | TBD

Meeting notes

Opening, welcome and roll call

  • Adam Foltzer, Fastly
  • Adam Klein, Google
  • Alex Beregszaszi, Ewasm/Ethereum Foundation
  • Andreas Rossberg, Dfinity
  • Andy Wingo, Igalia
  • Arun Purushan, Intel
  • Ben Smith, Google
  • Ben Titzer, Google
  • Bill Budge, Google [remote]
  • Conrad Watt, Cambridge University
  • Dan Gohman, Mozilla
  • Daniel Ehrenberg, Igalia
  • Deepti Gandluri, Google
  • Derek Schuff, Google
  • Francis McCabe, Google [remote]
  • Heejin Ahn, Google
  • Istvan Szmozsanszky (Flaki), Mozilla [remote]
  • Ivan Enderlin, Wasmer [remote]
  • Jacob Gravelle, Google
  • Jake Lang, Ewasm/Ethereum Foundation
  • Johann Schleier-Smith, UC Berkeley
  • Johnnie L Birch Jr, Intel
  • Josh Triplett, Intel, Rust
  • Keith Miller, Apple
  • Kevin Cheung, Autodesk
  • Lars T Hansen, Mozilla
  • Lilit Darbinyan, Bloomberg
  • Lin Clark, Mozilla
  • Luke Imhoff, DockYard, Erlang VM
  • Luke Wagner, Mozilla
  • Mark McCaskey, Wasmer
  • Michael Holman, Microsoft [remote]
  • Michael Starzinger, Google
  • Miguel Camba, Dockyard
  • Mingqiu Sun, Intel
  • Ms2ger, Igalia
  • Nick Fitzgerald, Mozilla
  • Pat Hickey, Fastly
  • Paul Dworzanski, Ewasm/Ethereum Foundation
  • Ryan Levick, Microsoft
  • Sam Clegg, Google
  • Sergey Rubanov, fintech/blockchain
  • Stefan Junker, Red Hat
  • Sven Sauleau, Cloudflare
  • Syrus Akbary, Wasmer
  • Thomas Lively, Google
  • Till Schneidereit, Mozilla
  • Tyler McMullen, Fastly
  • Wouter van Oortmerssen, Google
  • Yuri Iozzelli, Leaning Technologies

Opening of the meeting

Introduction of attendees

Host facilities, local logistics, code of conduct

Find volunteers for note taking

Andy Wingo helping out taking notes

Adoption of the agenda

https://github.com/WebAssembly/meetings/blob/master/2019/CG-06.md

Proposals and discussions

SIMD (½-1 hr)

10:00

Presenter: Deepti Gandluri, Arun Purushan

Slides

DG: stagnated a little bit, let’s give status since last CG meeting. Voted on integer operations.

DG: We included integer operations. A goal is that the performance gains should be consistent across architectures.

DG: Talking about what we’ve done since the last CG meeting. A lot of prototypes have been started. Benchmarks from the V8 side, and various products. Arun will be sharing some. Pushing for stage 2.

DG: Updates: we have full toolchain support (LLVM support behind a flag). New instructions: FMA, load sign extension, static dynamic shuffles. Intrinsic header file (in progress). Implementations in V8, chakra, wavm.
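As a rough illustration of the kind of operations under discussion, a minimal text-format sketch (instruction names taken from the SIMD proposal as it stood at the time; exact encodings and the fate of FMA were still in flux, and this function is hypothetical):

```wat
(module
  ;; Lane-wise a * b + c over four f32 lanes, expressed with the
  ;; proposal's f32x4.mul / f32x4.add instructions.
  (func $madd (param $a v128) (param $b v128) (param $c v128) (result v128)
    (f32x4.add
      (f32x4.mul (local.get $a) (local.get $b))
      (local.get $c))))
```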

DG: Benchmarking details: 3 different ARM setups (2 ARMv8 and one ARMv7). We want to make sure there aren’t issues w/ various ARM implementations. (Example: “shuffles are slow on ARM”.) Pixel 2 (Snapdragon 835), ARM Chromebook, Jetson TX1. Benchmarks don’t have custom code -- just using LLVM auto-vectorization. Room for improvement. Tested on the V8 prototype implementation; some operations are scalar, and we don’t have an implementation of v128.const yet.

DG: Skia library. Skcms project, used by Chrome, Android, Google3 etc. Translates colors between color spaces. Lots of floating point arithmetic. Some V128 logical ops, bitselect ops, and a sequence of i16x8 ops. Speedups on Intel x64, Pixel 2, Chromebook, and Jetson TX1 respectively: 2.6, 3.2, 3.7, and 3.x.

DG: Halide. Programming language, makes it easier to write portable SIMD/threading for image and array processing. Many targets, currently using custom version. Open source checked in to Halide tree. Speedups on Intel x64, Pixel 2, Jetson TX1: 1.38, 1.11, 1.22.

DG: Some slowdown on Jetson TX1 for camera_pipe benchmark (0.82x). A lot of constants, and shuffles, we know ARM is slow for shuffles. I think there’s room for improvement here.

Johnnie: Did you look at native w/ and w/o SIMD?

DG: Haven’t looked at that. Don’t have native comparison, I’ll see if I can test that. Current proposal maps most directly to Intel SSE 4.1, but it’s not a common configuration.

DG: Speedups on nl_means and local_laplacian, around 2 for Intel and 1.2-1.4 on ARM targets.

DG: stencil_chain benchmark, mostly integer and arithmetic. Real-world benchmarks. 1.96, 1.37, 1.29. We didn’t put in effort to look at custom SIMD backends, it would give us a more realistic view, what performance gains we’d expect.

DG: Another internal use case, benchmarking neural net models, showing consistent speedups relative to baseline (1.2 to 2.5, mean around 1.9 or so).

Slides

AP: Performance numbers based on the Chakra implementation, on a Microsoft Surface device. Current ChakraCore implementation is compliant with the spec, modulo recent changes regarding shuffle. Generates SSE2 code, not fully optimized. Workloads: synthetic statistics kernel, glMatrix computation. Halide benchmarks in progress.

AP: Simple statistics workload: relative to scalar WebAssembly implementation, speedups 2x to 3.3x.

AP: glMatrix 4x4 FP32 matrix tiles: speedups more scattered: 3.4x, 0.8x, 2.32x, 1.1x. The slowdown seems to be because the implementation is suboptimal (many shuffles). Matrix inversion did not speed up as much because scalar code does a good job already.

AP: Summary, floating point SIMD FP32 is used widely in graphics, ML. Chakra looking good, room for improvement.

AP: Next steps: update implementation to match spec, more benchmarks, Halide.

DG: Real-world use cases: Google Earth, Adobe Photoshop, Tensorflow.js, Zoom, OpenCV.js have all shown interest. Some are prototyping. Monthly SIMD meeting for interested parties; new parties welcome to join.

KM: Are the benchmarks for integer or floating-point?

DG: The benchmark suite varies; the Skia tests were mostly floating point, many others were too; some benchmarks were mostly integer.

KM: Are these benchmarks for sub-components of e.g. Halide, or the whole project?

DG: For Halide, it was running everything. That’s why we ran into the overhead of Halide itself -- two layers of codegen, not sure which part is introducing the slowdown. For Skia it was a smoke test, not sure if it is using everything.

Dan Gohman: Were these tests using the new FMA instruction?

DG: Some of the neural network benchmarks wanted to use FMA. Not able to share results yet though, will share in future.

KM: Do you know whether these benchmarks used all instructions, or some didn't come up?

DG: I think the floating-point, most of them were being used -- large percentage. Arithmetic instructions are used almost everywhere, conversions, truncations, saturations, for halide benchmarks. Integer benchmarks were mostly used in WebP benchmark, in combination it covers most. 64x2 instructions are not covered much -- we don’t have those implemented in v8 yet, we want to test those soon. Propose to remove those for now.

POLLS:

Remove I64x2, F64x2 types and operations from the current SIMD proposal, until there is performance data to justify their inclusion.

SF F N A SA
3 10 19 0 0

Decision, in favor of removal.

DGo: Tests are in v8, or in spec test format?

DG: In v8 for now, but spec tests are not required for this phase.

Phase 2

SF F N A SA
21 8 5 0 0

AR: proposal document requires full English spec text for entering phase 2. Do we have that?

DG: Yes.

AR: According to process doc, text should be not just an explainer but spec prose.

BS: In the past we have moved to phase 2 with enough information for implementation, and the design doc is sufficient for experimentation if not actual implementation.

AR: What have we done this with?

BS: Threads, bulk memory, sign extensions

DG: This is the next thing we'll work on, the formal spec text.

AR: Not how I interpret process doc. If we want that we should change doc.

JG: If our practice isn’t reflected in the phases, should we loosen the text requirement for phase 2?

DE: Making this a phase 3 requirement rather than phase 2 sounds fine. Otherwise we can record that we agree to phase 2 pending this text.

AK: What is the concern, given that the phase 2 doc explicitly excludes updates to the formal notation?

AR: The important bit is the text, which has a more precise style than a design doc or an explainer. OK with changing phase requirements.

DG: Can we relax phase 2 to be “we have a design, and are working on the spec text”?

JG: Motion to defer to the future.

BS: Perhaps there is a misunderstanding that “English spec” might be interpreted to mean a design doc.

AR: The important difference is between spec text and formal notation.

KM: Comment: concerned about halide regression.

DG: Yes I agree. Test is a bit flaky also. Will follow up.

KM: Appreciate DG bringing up the issue.

DE: Are we at phase 2 or do we need to revisit?

DG: Cheekily proposes phase 2 :)

BS: Assume phase 2 given past practice, but we can revisit later.

TS: Seems a bit unfair to be strict now without a prior discussion.

AR: Can address process in the future.

DG: Note that we made an effort to address other proposal aspects (IEEE floats, NaNs, subnormals, JS interaction); defined a lot that was pending to be specified. We definitely tried to do due diligence.

Conclusion

  • Move SIMD to phase 2
  • Rediscuss phase transitions in a future meeting

10 minute break

Reference Types (½-1 hr)

10:53

Andreas Rossberg presenting.

Slides

AR: Update on what is new.

AR: Recap: we add a new type, an opaque reference. A new form of value type, which can be used anywhere you would use a value. Locals, globals, parameters, results, can be put into tables, can be passed back and forth between wasm and the embedder. Interestingly, these values cannot be constructed by Wasm itself; for now it’s only for “foreign” values from the host.

AR: Actually it’s two new types: anyref, for values, and function references. To store these in tables, this means that a Wasm instance needs multiple tables. Always envisioned in the design.

AR: The recent changes in the proposal: incorporated some instructions from the bulk data proposal: table.fill, table.grow, table.size. Incorporated in spec, reference interpreter, and tests.

AR: Also new: added type-annotated select instruction and bottom type. One other thing this adds to WebAssembly is subtyping, as there’s a hierarchy of reference types, so this component addresses some problems we ran into (more later).

AR: Also new: finalised opcode assignments, and made decision to require forward declaration for uses of ref.func, for the purposes of streaming compilation (see issue #31).

AR: observation is that we should be clear up-front which functions are used in a first-class manner, for streaming compilation. Haven’t added that yet, because it depends on something in bulk-proposal.

AR: Big picture: value types in Wasm are the number types, and now the reference types. Reference types are opaque, storable in tables (as opposed to memories). Tables are the memories for reference types. Currently references are all typed as anyref, with one subtype: funcref.

AR: There are new instructions to support reference types: a constructor (ref.null), a predicate (ref.is_null), ref.func to make a funcref, and table.get / table.set for table access.
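A hedged sketch of how these instructions combine in the text format (syntax as in the proposal’s explainer; the table and function names here are hypothetical):

```wat
(module
  ;; Tables are the "memories" for reference types.
  (table $refs 10 anyref)

  ;; Store a host reference received as a parameter.
  (func $put (param $i i32) (param $r anyref)
    (table.set $refs (local.get $i) (local.get $r)))

  ;; Check whether a slot is still empty (anyref slots default to null).
  (func $is_empty (param $i i32) (result i32)
    (ref.is_null (table.get $refs (local.get $i)))))
```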

AR: Other instructions adopted from bulk-data proposal. Table.fill, table.size, table.grow. These moved here because table.fill needs a reference value. Table.grow needs an initialization value. We may have a reference type that is not nullable, so you have to provide a default value. Table.size is probably there just to correspond to table.grow.

AR: Also there are some pre-existing instructions that have been modified. Table.init and table.copy are from bulk data, which now take an immediate indicating the table index. Also call_indirect takes a table immediate; this was already envisioned in the 1.0 design.

AR: A more ugly change is the new variant of the select instruction; it is type-annotated. When you have data flow where you use the same value several times, or where one value comes from different sources (joins + splits), if you don’t have enough type annotations you may have to compute a principal type: least upper bound or greatest lower bound.

AR: The result type of select is the least-upper-bound (“lub”) of the operand types. Also br_table the operand type is the greatest lower bound (“glb”) of the label types. These are the only instructions that need to be adapted. Glad that we introduced block types; otherwise we would have more problems here.

AR: For select, (example in slides).

AR: The problem with computing the lubs and glbs is that it can become arbitrarily complicated. How to simplify?

AR: Avoiding lubs, we make the previous example invalid. We require type annotations. Select doesn’t have type-annotation, so we add an optional type annotation (select <valtype>). Works for all value types, and could be generalized to multi-value.

AR: The existing select continues as it is. Implication is that it only allows numeric types.

CW: Does this have interaction w/ typing of dead code?

AR: For select, no, but for br_table, the problem is related to unreachable code.

CW: If you have a select after unconditional br, after the select, you have to assume…

AR: Going to get to that. Another change pending to introduce a bottom type. For backwards compat we have to allow select on numeric types. For more general select, you need a type annotation.

AR:

select <t> : [ <t> <t> i32 ] -> [ <t> ]
select : [ <t> <t> i32 ] -> [ <t> ]  iff  <t> <: <numtype>
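Concretely, a sketch of the annotated form (text syntax approximated from the slides; the final surface syntax was still being settled, and the function is hypothetical):

```wat
;; Without the annotation, the validator would have to compute the lub
;; of the two operand types. The annotation makes the result type explicit.
(func $pick (param $a funcref) (param $b anyref) (param $c i32) (result anyref)
  (select (result anyref)   ;; annotated select; written "select <t>" on the slides
    (local.get $a)          ;; funcref <: anyref
    (local.get $b)          ;; anyref
    (local.get $c)))        ;; i32 condition
```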

AR: For br_table, we have two disjoint supertypes of one given subtype. (See example from slides). In this example, it type-checks if we assume the type to be $B. To validate that, we have to check that $B exists -- $B is the glb of $A1 and $A2. Checking existence means we have to compute it, which may be expensive.

AR: current example where we reject w/ f32 and f64, if we don’t change semantics we’ll still have to reject.

AR: To avoid computing glbs, we introduce a bottom type. In fact this already exists in implementations: it’s used for stack slots in unreachable code. But right now it’s just a device of the algorithm, not a proper type. Now we just make it a proper type. Doing this avoids needing a change in anything but br_table validation. Just a change in the algorithm for typing br_table. This change makes it possible to allow the counter-example.

AR: To complete the picture, value types are number types or reference types, but all types are supertypes of the bottom type.

AR: Doesn’t change much in implementations, just changes br_table typing slightly. Have been implemented in prose, formal, interpreter, tests, JS-API.

AR: Proposal already at stage 3, no stage change proposed. Needed feature of declaring ref.func up-front, blocked on resolving segment format, but should be easy.

AR: Implementation status: V8 done, SpiderMonkey in progress.

KM: WebKit progress as well. We may only being missing funcref in tables.

AR: Note, this also affects the JS API, as we can export functions that take an anyref. Note, funcref is nullable, so you can pass null. It’s a problem to pass JS functions as funcref because they have no type, e.g. for Table#set; this can be fixed by the proposed JS reflection API.

AR: Currently can provide type for function by passing it through import parameter to module instantiation, already do that with wasm-c-api.

AR: Table#grow takes optional init value param, also.

JG: Does “any JS value” for anyref include numbers?

AR: Yes

JG: You can still have a conversion from js number to i32?

BT: How does the optional parameter work?

AR: Defaults to null, which makes it backwards-compatible.

BT: In bytecode and encoding?

AR: Only optional in API.

WVO: First of all, why the two different select instructions instead of upcast or something.

AR: We have the existing select, but it doesn’t work with subtyping.

WVO: You could keep select as not accepting...

AR: That’s what you do in languages with broken type systems :) Not stable under certain program transformations. E.g. when one branch is a local.get, you do constant propagation and replace with value, it might become ill-typed. Not ideal.

WVO: fair enough. Are we going to introduce non-nullable reference types later?

AR: Anyref will always be nullable. Next presentation elaborates on this issue. Funcref always nullable, because it already is -- but we’ll also add non-nullable types later.

HA: What would be the values for locals for the nullable types on the stack.

AR: No default value. You can’t use locals with non-nullable references types in the same way.

HA: I remember you mention -- you are planning to remove nullref type at some point…?

AR: That is actually an internal type, you can’t write it down. If you meant eqref, that is no longer part of the proposal as it is not particularly useful in the context of this proposal, though it will come back in a future proposal.

HA: Mostly wondering about -- exception handling is dependent on reference types. We rely on the ability to init as nullref in locals, we have to have locals with that type. Is that going away?

AR: In the followup with non-nullable reference types, we will have a new way to make locals that doesn’t require default values. E.g. somehow defining a tighter scope, where you declare the binding when you have the value.

HA: Does that apply to any reference type?

AR: You can do that with any type, but it’s particularly necessary for locals with non-nullable reference types.

DS: For exceptions specifically, we can define that exceptions have a null reference type.

AR: For exceptions, sure. I see that it could go either way. Generally preferable to make something non-nullable.

HA: I was not planning to talk about exceptions -- just wondering if restriction applies to all possible subtypes of anyref, or we can make exceptions for that.

AR: There’s anyref, and there’s subtypes that include null, and some that do not.

JT: With typed imports, if that’s built on top of this, is the expectation that import types can be non-nullable?

AR: For typed imports, they are bounded quantified types, with subtyping bounds, so that things that you pass must be subtypes of that type -- if you have a bound that’s non-nullable you can only instantiate with a non-null value.

JT: If you were to import a type as non-nullable, can you rely on the validator to ensure that it will not be null?

AR: You can rely on it. It’s a sound type system.

JT: So if you have a funcref where one of the param types is a non-nullable reference, you can safely omit the null check within that function?

AR: That’s the motivation, yes.

AR: Given that both JSC and SM are far along, I assume that at some future meeting we should be able to push to stage 4.

11:30

Andreas Rossberg presenting.

Slides

AR: As background, we split reference types out from the GC proposal. The same is true for this proposal: we split it out from the GC proposal because it’s useful independently. Refining reference types to something typed.

AR: Basic idea presented at an online meeting but this is the first F2F presentation. Motivation is that currently, indirect function calls go through a table. Null check and type check. Need more efficient indirect calls without runtime checks. Want way to represent function pointers / references directly without tables.

AR: Need efficient interop with host when passing functions back and forth with host. Another part of the proposal which could be split out if controversial is opaque closures.

AR: The summary of the proposal is that it’s based on reference types. Refines funcref type to fully typed notion of reference types. Introduce new subtyping hierarchy below funcref. Distinction between nullable and non-nullable.

AR: The second proposal component is to define func.ref operator to no longer return funcref but a more specific subtype. Compatible change.

AR: And then the more interesting part, we add call_ref which takes a reference and performs a call on it without a runtime-check. If it’s nullable then it only requires a null check, no type check.
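A sketch contrasting this with today’s table-based indirect calls (type and function names hypothetical; call_ref’s operand order follows the explainer, and whether it carries a type immediate was still an open detail):

```wat
(type $i2i (func (param i32) (result i32)))

;; Today: indirect call through a table, with a null check and a
;; runtime signature check at the call site.
(func $apply_indirect (param $idx i32) (param $x i32) (result i32)
  (call_indirect (type $i2i) (local.get $x) (local.get $idx)))

;; With typed function references: the callee's type is statically
;; known, so no runtime type check is needed.
(func $apply_ref (param $f (ref $i2i)) (param $x i32) (result i32)
  (call_ref (local.get $x) (local.get $f)))
```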

AR: For closures, this proposal adds a func.bind instruction for partial application of arguments.

AR: Recent changes: summarized details of extension in design doc. Incorporated nullable optref type. Started work on reference interpreter. (Aside: this triggered the lub/glb problem referenced in the previous presentation.)

AR: (Some examples in the code)

BT: I assume you’d also have tail call ref?

AR: Yes: the new instructions are ref.func, call_ref, return_call_ref (like call_ref but for tail calls). See types in slides.

AR: Also introduces optional references. The previous types were not nullable; the compiler can assume they are not null. Sometimes you want to include null, without going to funcref which is far less precise. We add a new type optref, which is inhabited by null.

AR: Some instructions for checking and converting between ref and optref types: ref.is_null, ref.as_non_null (downcast from optional reference to “proper” reference, trapping on null), br_on_null (for handing null).
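A sketch of the difference (names hypothetical; br_on_null semantics as described in the design doc, i.e. it branches when the operand is null and otherwise leaves a non-null reference on the stack):

```wat
(type $i2i (func (param i32) (result i32)))

;; ref.as_non_null: downcast from optref to ref, trapping on null.
(func $must_call (param $f (optref $i2i)) (param $x i32) (result i32)
  (call_ref (local.get $x) (ref.as_non_null (local.get $f))))

;; br_on_null: branch out on null instead of trapping.
(func $call_or_minus1 (param $f (optref $i2i)) (param $x i32) (result i32)
  (block $null
    (return
      (call_ref (local.get $x) (br_on_null $null (local.get $f)))))
  ;; reached only when $f was null
  (i32.const -1))
```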

KM: Seems like you don’t want br_on_null, you want the other one. You don’t need all of them.

JT: The other difference is that you can generate better code here.

KM: Most places that have non-null are probably going to check for null again...

AR: Point is that you get the ref, you don’t need to check in the future.

KM: It might be different basic blocks that need to do that check...

AR: The producer would have to hoist it properly so that it doesn't repeat it.

KM: You would do is_null to check

AR: You have to rely on the engine to not repeat the check. I think it makes sense to have both.

JT: Do you expect as_non_null to be used often enough that it makes sense to have a dedicated encoding, rather than expecting people to use br_on_null? I can see use cases, but I don't know whether it's needed separately.

AR: Producer may have to produce optref somewhere, but knows it can’t be a null, so it can create a downcast that it knows will never fail. More compact.

BT: For br_on_null, did you consider a version of if with two branches?

AR: Block instructions tend to be 10x more complex, so it’s not more desirable than this.

BT: You can accomplish the same thing, you just need an if.

AR: In the exception proposal, we also have br_on_exn. You could turn these into block instructions, but then you'd get a proliferation of block instructions, and I think it'd be good to avoid that; it'd add complexity to the language.

BT: We already have the machinery for if. It could be more useful for producers. We did a study of this earlier, and it saved a couple percent for code space.

KM: Most people want to br_on_null [rather than as_non_null]; the branches are early exits.

AR: And they will all branch to the same label...

BT: I see that but perhaps we could study the patterns and see with data.

KM, AR: sure.

AR: Final subtyping hierarchy. At the bottom, the whole family of non-null ref types: (ref $t). That’s a subtype of (optref $t), which is a subtype of funcref, which is a subtype of anyref.

AR: Now that we have optional references, you could require everyone to do null checks, but we can make something simpler. We can generalize call_ref to take optref.

LH: Optref is a kind of funcref?

AR: Yes, at least for this proposal.

LH: Curious about w/ GC proposal.

AR: No longer true for GC proposal, there will be other types on the side. Not a rule that is directly built in, falls out from the rules.

AR: For call_ref, if the compiler knows that the callee is actually a ref, then it can avoid the null check at the call site. Similarly for the tail call version.

AR: Other part of proposal, could be split off. Closures. Motivation here; we can express closures in wasm already, by putting it in memory. But not interoperable. This makes closure types something different than other types, can’t pass as reference for example.

AR: More seriously, it’s also not opaque. You expose the closure data to the callee and do not have any information hiding. In fact it may require exposing the entire memory. Rolling your own closure works but only within a hermetic unit.

AR: Wasm functions are in fact closures already. They close over the instance in which they live. From an implementation perspective we could just extend internal implementation details to the language.

AR: We would add one instruction, func.bind, partial application. It takes a function reference, yields a fresh one with some arguments bound.

AR: Note, you cannot construct cyclic references between closures because there is no mutability. Reference counting sufficient -- this doesn’t lead to full GC.
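A minimal sketch of partial application (func.bind syntax assumed from the design doc, including a type immediate for the resulting function type; names hypothetical):

```wat
(type $add2 (func (param i32 i32) (result i32)))
(type $add1 (func (param i32) (result i32)))

(func $add (type $add2)
  (i32.add (local.get 0) (local.get 1)))

;; Bind the first argument of $add, yielding a fresh closure
;; of the narrower type (ref $add1).
(func $make_adder (param $n i32) (result (ref $add1))
  (func.bind (type $add1) (local.get $n) (ref.func $add)))
```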

AR: (example in slides)

BT: Do you have to bind arguments left-to-right? E.g Scala allows for arbitrary partial application.

AR: I’m not sure that’s worth it… we could discuss it, it makes a more complex surface. My experience is that you usually have this case, you order the arguments so this works out. If it doesn’t, you write a wrapper function yourself.

BT: OK.

??: Implicitly closed over linear memory at the declaration point; is there a way to change that?

AR: No, why should there be a way to do that? You can't swap out the instance under the feet of the function.

LW: A module could have multiple imports, you could use someone else’s memory, so you may see that in the future.

AR: If you want to be able to access other memories, there are other mechanisms. The code of the function hard-codes many assumptions about its instance; in general that’s not a meaningful or safe transformation.

AR: So the instruction now, this is repeating call_ref instruction. And now we show the func.bind (description in slides). Didn’t have time to make a slide for binding non-nullable things.

AR: There’s one other instruction in the proposal that didn’t make the slides. It’s like block, but which allows you to bind locals. The initialization values for the locals are taken from the operand stack; like a “let” expression. The locals there are only scoped within the block. Slide missing though.

AR: What do people think about including closures in wasm?

BT: I’m for, but can we take a poll.

LI: I’m writing Rust, how would tools interpret to use these closures...

AR: That’s an excellent question; no idea.

DE: I’m surprised at how opinionated these closures are. Seems like you can build a lot with them; I like the general direction.

DG: Not sure how to use this in LLVM right now.

DS: Not only that, we don’t know how to implement anyref / reference types.

KM: Any other languages that may want this other than LLVM?

AR: Any higher-level language that doesn’t necessarily compile through LLVM would want this.

JG: I think… not sure how we’d use this in LLVM, but perhaps in binaryen can fix it up. Not sure how we’d do that with closures. If you have a multi-tool toolchain, maybe you could incorporate it.

AR: In summary, we do not know about LLVM.

BS: Current phase?

AR: Stage 0. Perhaps this is a good time to propose advancing to phase 1.

AK: Any reason that closures are in this proposal? seem separable.

AR: Right, they are separable. They seemed to fit in this proposal. If there’s enough controversy they can be split out.

JT: I wouldn’t say controversy, but it seems that there’s consensus on funcref, closures need independent validation with other languages to see if it is the right pattern for other languages. Needs different evaluation, so maybe be separate.

JG: Absent a really strongly motivating use case, perhaps let’s avoid the issue -- we can always add it later.

JT: I would hate to see this held up.

AR: I think various compilers would benefit from this

JG: Are those compilers using wasm today?

AR: we have chicken and egg problem here.

BT: Andreas made the point, if you want to export a function, this gives you the ability to export a function [by closing over some arguments]. It’s not just convenience.

AR: Right, it’s important for APIs as well. If you have an API that takes a callback, then without this proposal you can’t pass a closure.

JT: That raises questions: what is the implementation going to look like, how much will it allocate?

AR: Yes, but those are things we have to find out.

BT: I can see a plausible story -- one allocation, won’t be more inefficient than that.

JG: I think the encapsulation consideration is an interesting one. Currently you have to expose all of your memory, so yes this is interesting.

JT: I would happily vote yea on both proposals, but rather see them separate.

WVO: This would expose your memory, that’s what’s said. If you use i32 as index, you could make it opaque so you wouldn’t have to expose that.

AR: Right, you don’t have to expose your memory in those cases, that’s true.

JG: Yes that’s ABI-dependent. You could pass your closure as a host-created reference type.

AR: If you … the problem is that it’s totally unsafe. You don’t know if the host gave you back a number pointing to a correct closure environment.

WVO: Yes but if you have some kind of callback mechanism, there’s some level of trust there.

BT: I would argue, giving languages the ability to implement the ABI that they can is valuable. If we refrain then those decisions may be baked in.

DGo: Question related to long-term APIs -- how does this compare to what you can do in a fully GC world? If you can make your own closure there, how does that relate to this proposal?

AR: Yeah, but if you do GC anyway, there’s no reason not to have this…

LW: even if you give me a struct type that has enough information, it doesn’t have enough information to be callable. It’s a struct instead...

DE: This makes sense -- considering we don’t want to bake things in, we don’t necessarily have to do this now, if we have GC then we don’t have to do this.

AR: I would actually argue that this is independent from the GC proposal. I don’t see any good reason once you have GC to pass things as GC structures. Even in that world, you want to pass closures.

DE: I don’t think this is subsumed by GC, but I don’t think we can say that we need to do it sooner. We’re going to be adding other features.

BT: Comment was that since we’re adding typed function references, closures go hand-in-hand with it. Not having closures and having typed function reference would tempt people to misuse API.

JG: What are the other moving parts in this space, like WebIDL bindings? If you can define an API without baking in an ABI, maybe WASI will do something similar. This may be the wrong level of abstraction to solve this. Consider: if Andreas had said this was not separable, would we have been OK with it?

JT: At this point we’d be saying why isn’t it separable. But is this the right way to do closures?

BS: Perhaps because this is phase 0, we can defer the discussion to a later phase.

AR: Splitting proposals is conceptually easy but is overhead. Perhaps consider splitting it later if it turns out to be a problem.

AW: I think this proposal seems sufficient from the perspective of a Scheme implementer. It handles many cases already (see the “Optimizing Closures in O(0) Time” paper).

AK: I raised the idea of separating them, and I voted "SF" because I agree with BS that the decision to separate could come later.

POLL Move to Phase 1, with the closures included

SF F N A SA
16 12 8 0 0

Decision: move forward to phase 1.

JS/web adjacent topics (1 hr)

Presenter: Daniel Ehrenberg

Slides

12h15.

DE: Interaction between wasm and JS. Layering is great; wasm is not just web and JS. We need ways to interact.

DE: Many proposal relate to JS and the web. Reference types, function references, exception handing, etc.

DE: I wanted to talk about some smaller proposals: ES module integration, top-level await, get-originals, JavaScript built-in modules, import maps, WeakRefs, Atomics.waitAsync, BigInt/i64 integration.

ES module integration

DE: ES modules! Problem: how can wasm import JS modules or vice versa? There’s the JS API but it requires constructing these specific objects and doesn’t integrate with the module graph.
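
For reference, the "specific objects" of today's JS API look like this: a minimal sketch that hand-encodes a tiny module exporting `add` and instantiates it imperatively, outside the module graph (byte layout per the wasm binary format):

```javascript
// (module (func (export "add") (param i32 i32) (result i32)
//   local.get 0 local.get 1 i32.add))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

// Today: construct Module/Instance objects by hand.
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module, {});
console.log(instance.exports.add(2, 3)); // 5
```

With ESM integration, this would instead be a plain `import` of the `.wasm` file.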

DE: The semantics are pretty simple. When wasm modules are used in this way (ESM integration), wasm module imports are identified as ES module imports, and the module specifiers (where you’re importing from) are identified as JS module specifiers (treated as URLs on the web).

DE: At run-time, the module is compiled during the ES parse phase, and then instantiated during the execution phase. Contrast to earlier idea about instantiating the module during the link phase, but this doesn’t allow importing and exporting tables and memory. That’s needed at instantiation-time. Relatedly, there were concerns about cycles; this proposal doesn’t address that.

DE: Also, in some implementations today, it’s possible for wasm loading to yield so that work can happen in parallel during module import.

DE: Written up with some formal spec text, and an explainer. One issue we’ve been discussing -- what about the start function? What if it has to access memory? If we want to import something that reads exports, and that wants to access memory?

DE: The way the current text is written, exports are initialized after the start function runs. Proposed solution: add a special kind of start function (naming convention? custom section?) that will run after exports are initialized.

DE: There are also some ideas about special semantics when we want to instantiate multiple times… needs more time to determine whether that’s necessary.

DE: Status is that we’re at phase 2, the spec and explainer are there, and we’re trying to align ecosystem around single standard. JS bundlers excited about wasm, but need common implementation / behavior. Lots of interest in aligning around standards (node.js, webpack, rollup). We intend to write web platform tests but are missing the next piece...

AK: You said something about cycles…?

DE: So, there are well-defined semantics for cycles in ES modules, but it’s defined in terms of “temporal dead zones” on the ES side -- accessing the value of a binding before it’s initialized will throw.

DE: The way imports are handled -- logically you have import declarations in wasm. Each is read as variable, one of those won’t be set up yet if there’s a cycle.

DE: Cycles are supported between wasm and JS because in particular JS might not read wasm bindings during execution phase. Allowed.

AK: They are allowed then, got it.

DE: Yes, for some cases, not for others though. There was confusion on the issue tracker but I want to clarify that.

Top-level await

DE: Part of the infrastructure necessary: we may want other work to happen. There may be other modules instantiating, that may take time. We may need to yield to the “main thread” (only has meaning on web, but it’s important to handle this case).

DE: Answer: “top-level await” (TLA). In-progress proposal at TC39 (JS standards group). Allows users to import modules asynchronously.

DE: When you import a module that may do work that’s asynchronous, you can resume later. It’s useful even from JS. We reached stage 3 in TC39 last week. If a module needs to do work, it can do that and then yield.

DE: Status: stage 3 in TC39.

DE: With the promise-based API, we’d always queue a task to finish the instantiation -- this has been prototyped by Luke in SM, found to be web-compatible, as far as we can tell. So I’d like to move forward with merging that. We’ve discussed this earlier.

get-originals

DE: get-originals. We’re going to talk about webidl bindings, if you pass in an import object that has these things, it will help you perform conversions. But get-originals will allow you to avoid other glue code.

DE: Early proposal, just a README. Code examples on slides. Defines a set of built-in modules with “original” definitions of JS language facilities.

DE: The nice thing is that with these import statements, you can do the same thing from wasm as well, if using ESM integration. You may want to reliably access the original versions of these functions. Even calling a function with arguments can be imported this way; together with the anyref proposal, it all fits together.

DE: This hooks into WebIDL bindings as well. With get-originals + WebIDL bindings + ES module support, then we can completely avoid JS glue code to access the DOM.

DE: get-originals status: it’s still early, been in discussion for a year. No spec or tests or implementation. Does this make sense for wasm?

JT: yes, please :)

JS built-in modules

DE: How can new APIs be exposed via modules? Problem is that currently new functionality is exposed as properties on globals. What if we instead made new functionality be available via built-in modules?

DE: ms2ger did a lot of work here, using kv-storage. Discussion of this in TC39, led by Apple. Under discussion currently.

Import maps

DE: Here we have a proposal that helps resolve module specifiers.

DE: When you import something, you import from a URL. The fetch parameters give it. If you want to import moment or lodash… this gives us a level of indirection which will be in place for ESM integration as well. You can have fallbacks, if a module is not present then fall back to another one.
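
A sketch of an import map as the proposal described it at the time (URLs are made up for illustration; the array form expresses the fallback DE mentions, from a built-in module to a polyfill URL):

```json
{
  "imports": {
    "moment": "/node_modules/moment/src/moment.js",
    "lodash": "/node_modules/lodash-es/lodash.js",
    "std:kv-storage": [
      "std:kv-storage",
      "/polyfills/kv-storage.mjs"
    ]
  }
}
```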

DE: Origin trial in progress in Chrome. Seems to have broad interest from other browsers.

AW: what does this do for wasm?

DE: Like the get-originals spec, but extends functionality to allow web sites to fall back to polyfills downloaded from the web.

JT: I did a quick skim of the spec -- this shows a URL given as a fallback? Does this have other information like a hash or similar to determine whether this is what you expect?

DE: good Q. It doesn’t include this by design. The idea is that this is provided by a different manifest. I’d recommend following up with the issue tracker there.

KM: Yes, ideally you have this as part of your CSP policy or something.

JT: my concern would be -- I don’t want this to be harder to apply integrity protection to than JS.

AK: Currently there’s no way to do integrity on import declarations; we already have this problem.

JT: If you go back to the archaic way, you can. (AK: yes)

DE: Yes, this discussion has been ongoing for some time; let’s continue it. We’ll pick it up again at TPAC in Japan.

TS: One interesting aspect of import maps is that it also works the other way around: you can fall back from a server URL to a built-in module, so new browsers use the built-in while old browsers use the polyfill.

Everybody is good and hungry and we break for lunch.

12:36: lunch

13h46: Afternoon session opens. Daniel Ehrenberg continues the Wasm/JS point.

DE: Returning to the ES module integration point, DE prefers a well-known start function name (“start”) rather than a custom section. We should arrive at a consensus before toolchains standardize on something in practice.

BT: You mention a name, there’s a function with “name” in the name section?

AR: No, not that. In the export section.

DE: It layers on top of the existing semantics. It’s not importable as a JS function (it is, but not easily).

DG: Start is already a name in wasm, there will be ambiguity when you call both of these functions start.

LW: While it’s not a valid identifier in wasm, “start” is valid from the JS API.

AK: It would be fine to use… the mike.

DE: I want to propose that we’re OK with some identifier to use here (and it needs to be visible from ESM integration)

DG: For wasi we’re working on a convention..

DE: Great, thanks for raising the points, we will talk offline.

Weak References

DE: How can we refer to linear memory without having to call free()?

DE: You might want to expose an object from wasm to JS and have it just work without the need to destroy the object. Answer: WeakRef and FinalizationGroup (TC39 proposal).

DE: WeakRef proposal handles this, you can hold it without keeping it alive, finalization groups give you a callback which will be called when it is collected.

DE: Finalizers are postmortem -- you don’t have a reference to the original object from the finalizer. Avoids problems from Java which receives the object itself (resurrection, fragile API design). Instead you get a separate object (“holder object”)

DE: On the web it ties into an event loop, after all promise actions run, then collection happens when JS starts up again.

DE: In order to accommodate wasm use cases, you have to collect between turns.

LH: Motivation is shared memory, and it’s true for JS as well. JS in a worker may not return to the event loop. It’s a bug in JS :)

DE: Current status is that weakrefs are at stage 3 in TC39 (since a week ago). Complete spec, in-progress tests, in-progress implementations. GC proposal still needed for cycles between wasm and JS, as is currently the case for the DOM and JS, but this gets us somewhere.
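
A sketch of the pattern DE describes, using the names as the proposal eventually shipped (`FinalizationRegistry` rather than `FinalizationGroup`, which was the name at the time of this meeting); `freeHandle` stands in for a hypothetical wasm export:

```javascript
// Hypothetical: a wasm export that frees a linear-memory allocation,
// e.g. instance.exports.free(ptr).
const freeHandle = (ptr) => { /* release the allocation at ptr */ };

// Post-mortem finalization: the callback receives the held value
// (here, the linear-memory pointer), never the collected object itself.
const registry = new FinalizationRegistry((ptr) => freeHandle(ptr));

let obj = { ptr: 1234 };
registry.register(obj, obj.ptr);

// A WeakRef points at obj without keeping it alive.
const ref = new WeakRef(obj);
console.log(ref.deref() === obj); // true while obj is still reachable
```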

Atomics.waitAsync

DE: Wasm has a wait instruction, i32.atomic.wait, and it’s not allowed to be used on the main thread, because blocking the main thread is not allowed.

DE: Idea to fix is atomics.waitAsync, returns a promise. This is now at stage 2 in TC39. Curious if this is useful in wasm. You may be able to use it with import maps, get-originals, etc. Not sure.

LI: Could be useful for erlang implementation -- having this for webworker threads would be great.

DE: That’s interesting -- you have many of them on the same thread, and you wouldn’t want all of them to block. This is great; it would work for that.
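
Atomics.waitAsync was stage 2 at the time of this meeting; the shape below is how it later stabilized in engines that ship it. A sketch of both the synchronous and asynchronous result paths:

```javascript
const sab = new SharedArrayBuffer(4);
const ia = new Int32Array(sab);

// Synchronous path: the value at index 0 is 0 but we expected 1,
// so the call returns immediately with "not-equal" -- no blocking.
const res = Atomics.waitAsync(ia, 0, 1);
console.log(res.async, res.value); // false "not-equal"

// Asynchronous path: the value matches, so value is a promise that
// resolves with "ok" once another agent calls Atomics.notify.
const pending = Atomics.waitAsync(ia, 0, 0);
if (pending.async) {
  pending.value.then((v) => console.log(v)); // "ok" after notify
  Atomics.notify(ia, 0);
}
```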

LH: Doesn’t wasm need a story for promises? There’s no support for promises in wasm currently.

DE: Yes, I wonder if this could be part of WebIDL bindings.

TS: We have Web IDL bindings, and we have queue microtask..

DE: QueueMicrotask?

TS: Well, you could implement the rest in user-land in a straightforward way. But maybe having bindings to actual promises would be a sufficient answer

DE: Once we have typed function references - by passing in two function references, you can pass in the two handlers and WebIDL bindings would return one of them

BigInt / i64 integration

DE: So, how can we use i64 from JS? JS has no i64! We have been talking about this for a long time, but now in JS there is a standard way to have arbitrarily-large integers: bigint. The idea for wasm is to integrate it in a straightforward way as a mapping for the i64 type.

DE: We have a full specification and tests, and partial implementations thanks to Sven. I hope it can come fast enough that people don’t feel the need to change APIs.

LI: What happens when it’s bigger than i64?

DE: It would wrap around

LI: That wouldn’t work -- we tried it in Erlang, and integers there didn’t wrap; wrapping was non-intuitive. Rust also has functions to check for overflow.

JT: checked_add, checked_mul, that kind of thing?

DE: Can you do this in the JS glue code you generate?

DE: Motivation for wrapping is to match TypedArray semantics. When we added BigInt64Array we decided that modular arithmetic is the right thing, matching Int32Array.

LI: I thought you said the bigint has arbitrary size…?

DE: They are arbitrary size, in this context you have a big int and what you need is an i64, so we followed typed array conventions.
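
The wrapping convention DE describes is observable today via `BigInt.asIntN`, and a checked conversion of the kind LI asks for could be done in glue code. A sketch (`toI64Checked` is hypothetical, not a proposed API):

```javascript
// Modular wrap to 64 bits, matching the BigInt64Array convention:
// 2^63 wraps around to the minimum signed 64-bit value.
console.log(BigInt.asIntN(64, 2n ** 63n)); // -9223372036854775808n

// A hypothetical checked conversion for glue code, throwing on
// overflow instead of wrapping (name is illustrative):
function toI64Checked(v) {
  if (v > 2n ** 63n - 1n || v < -(2n ** 63n)) {
    throw new RangeError(`${v} does not fit in i64`);
  }
  return v;
}

console.log(toI64Checked(42n)); // 42n
```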

JG: Presumably if you have a bigint, there’s no way to pass that to wasm today without just passing the bytes to memory.

DE: Right, you need the reftypes proposal to be able to pass the bigint in by value.

JT: It seems like it would be useful to have a variant that does trap on overflow, maybe not by default.

LI: As an example we’re using checkedAdd, checkedMul to promote small integer type to bigint. We’d need something similar for incoming arguments to a wasm-compiled erlang API.

DE: I can see how it would be useful, but we have to decide on one default system for consistency. These are already shipping in two browsers. I’ve gotten complaints in the other direction -- i.e., make it not throw -- so I would like this to be consistent.

JG: Also my understanding is that it’s at the import boundary of a wasm module. There’s a small set of things you can do with a bigint at that point and still make sense.

DE: To clarify another case: when you export a function from wasm to JS, and that function takes a parameter that’s i64.

JG: What else would you do here, is what I mean. You can’t have a configurable semantic here.

DE: A place we could configure these further is when we get into webidl bindings, because those have configurable semantics. I’m hoping to defer more to webidl bindings -- you may have a function trampoline when cycles are present. We could perhaps do something like that with bigint i64 values. Does that seem like a good path?

LI: That’s fine

DE: Great. In summary, let’s work together between JS and wasm even though we have our own layerings.

(polite applause)

DE: Anything else?

LW: ESM integration?

DE: Let me speak to that. There are two emerging ideas w.r.t esm integration. 1. Secondary start function, people are generally positive with some minor issues about naming.

DE: The second one was about if you want to run a module multiple times -- we don’t have this notion in ES modules. I would like to follow up with Luke and others outside of this meeting. It can be a post-MVP thing; we have to figure out start now, but we can figure out multiple instantiations later. OK for you Luke?

LW: Sure.

LW: Let’s say you have a C program with a main function, and you want to export that to JS and call it. You expect that every time you call it, it will take an executable, run it, and when the function is done, the instance goes away.

LW: So what happens if I use the existing esm integration today? It’ll live until the end of the page/tab - it’s not the same as what happens on the commandline?

LW: I would expect the module to be created once, and reused. But the instance would be created from that.

LW: We can do that by using custom sections - the custom section has a name, at load time hold on to the module, and...

LW: And if I import that from JS it will just do what I expect. If I import from wasm it does what I expect.

LW: Imagine I have two wasm modules, they don’t share anything (something like a pipeline). Even then we want two instances. Every time you call it you make a new instance and memory.

LW: When the pipeline calls grep two times..

LW: So that’s the basic idea, not entirely fleshed out.

PH: What happens when the instance stores the list of strings in memory?

LW: So the question is when it returns the value, what’s the return value. WebIDL bindings provides a solution to that, right? But there are other possibilities, provide a view into memory. Maybe it’s call a callback that I passed in. But I think WebIDL bindings is the most elegant solution.

DGo: Grep would have to serialize itself (its output somewhere (?))

TL: This is also useful for modules that don’t explicitly have a main. Other modules that export functions may need to call constructors at startup as well.

LW: There are these two different kinds of modules -- the concept of a service; in ES modules it’s the concept of..

LW: Then there’s just a completely different category of module that just has one function, that we should destroy everything when the function is done.

DE: I think this is very different from the semantics of JS modules in general. If we find a solution for this, it won’t necessarily change the semantics of how it will work when linked into a JS module graph. If you don’t think this is the case, we can hold off on phase 3. I don’t think it will be different, can you clarify?

LW: Sure. It has to be a thing that the module declares: “I am a command module. To import me is to instantiate me every time you call me.” We can make this change compatibly (opt-in).

DE: So for example, if you have a module that has metadata that includes it for a different mode, it won’t have any arguments…?

LW: Because you’re exporting it, it can take explicit arguments. How does grep get its imports… Rather call it a function so you can pass explicit arguments rather than importing.

DE: So the core capability that this gives is that it lets you instantiate a new wasm module each time.

LW: Default is compatibility.

DE: This, unlike the start function seems like it should be post-MVP - is this an even split?

LW: I could see either way. It won’t break anything retroactively but it will be useful as soon as it’s available. Addresses some of Guy Bedford’s concerns regarding WASI/node integration -- the command module concept maps to an existing use case, better to support it directly.

JG: In theory we could do this with a JS module, this gets you out of the business of integrating a wasm module as an ES module.

DE: Ok, I just want to get a solid recommendation for my next steps -- should I try to work offline to see whether this needs to be in the first version?

LW: I don’t think people have enough time to process, but it’s definitely worth looking into… I don’t think there’s a backwards compatibility concern holding people back from shipping. We almost want to change what toolchains emit.

JG: Which is a way of saying, this is probably a post-MVP feature.

LW: Also valuable right now so I wouldn’t wait either.

DE: I guess we’ll keep discussing this.

AK: Is there a write-up somewhere?

LW: Far too early.

DE: I really think we should conclude this should be a post-MVP feature.

LW: The order of things getting standardized and the order of things getting implemented differ. Should we write up an explainer, have toolchains emit this, polyfills, etc.? That stuff doesn’t block what’s in the document. Whether this goes into a spec we standardize doesn’t really matter, as long as we are still discussing it.

AW: Which MVP precisely are we talking about?

DE: Poor choice of words for me. The current phase 2 ESM module proposal. I want to separate this out, because it still needs time to develop. We want to align current bundlers. If we add more stuff, then we’ll delay integration.

LW: I see your motivation, that makes sense.

DE: I don’t want to further delay work on it, I want to continue to work on it. Just want to decouple this component from ES modules / wasm integration proposal.

WebIDL bindings (1-2 hrs)

14:25

Luke Wagner presents.

Slides

LW: This is a WebIDL bindings update, folks working together from Google and Mozilla, joint presentation. Short summary, I will present prototype work and use cases. Based on this is being used, will propose a spec refactor.

LW: Since last time, at TPAC, recall that there was a Host Bindings proposal -- for some generic host. Since then we refocused on web APIs, defined in WebIDL, to make things more concrete. Result was a rewritten explainer, rename to WebIDL bindings, and new design.

LW: We proceeded to iterate on the design, and on prototypes. Quick summary: in the wasm MVP you have some wasm module whose export is the import of some other module; in a spec sense there is nothing in between those two modules -- the generated code can call_indirect between the two modules, and it is efficient.

LW: Same thing when your exports are called by another wasm module. But what about JS function?

LW: First thing is you have to pass JSValues. Those JS values are like raw numbers, which pass through some JS glue code.

LW: Then you pass those values to the spec function, those will be converted via the functions defined in the JS spec. There are trampolines that happen between JS values, to WebIDL values, then the actual WebAPI function.

LW: Lots of hops that add up performance wise

LW: Similarly when you have wasm functions that are called by Web APIs (e.g. promise callbacks).

LW: The proposal is to wrap wasm functions in a wrapper defined by the WebIDL bindings, defined in a custom section attached to the wasm module.

LW: And also, by construction it’s 100% polyfillable by pure JS. When you implement it natively you don’t have to go through JS.

LW: when you call this function import, you go direct to WebIDL values. And similarly if a webAPI calls you back. They go direct without hopping through JS.

LW: The way that this happens is that there’s not one fixed conversion -- there’s not exactly one way.

LW: It’s configurable by bindings expressions that give you a fixed set of options, and the bindings expressions can load a fixed set of i32 indices

LW: And as a diagram, if you have wasm args. And you call encode_into function, takes a string, writes into a view, and returns a dictionary.

LW: Well, we have binding expressions that mean you pass the anyref arguments through as anyref values. To pass the string, you say where the bytes of the string you want are. And for the view, I can pass it as a Uint8Array.

LW: When you receive the dictionary -- despite its name it’s more like a struct in the C++ sense -- while semantically it’s returning a dictionary, wasm will get two i64s. Then we can compile it to trampoline JS code. (All written up in the explainer with more details.)

LW: while semantically it’s like a dictionary, in wasm it’s just two i64s, which makes it simpler.
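
The Web API in this example is `TextEncoder.encodeInto`, whose result dictionary has two integer members; from JS it looks like this (`buf` stands in for a view of linear memory):

```javascript
const encoder = new TextEncoder();
const buf = new Uint8Array(8); // stand-in for a view of linear memory

// Returns a TextEncoderEncodeIntoResult dictionary: { read, written }.
// With WebIDL bindings, wasm would receive those two members as two
// scalar results rather than as a JS object.
const { read, written } = encoder.encodeInto("hi", buf);
console.log(read, written); // 2 2
```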

LW: All of this is in the explainer in more detail. So, quick notes on values and copying. When I say value it’s the immutable concept of a value.

LW: WebIDL values are immutable. Binding expressions construct the values from linear memory. You’re doing an eager copy, because linear memory can change.

LW: However the browser doesn’t have to copy memory if the value is unshared and the copy isn’t used after the function call, or if it’s shared and you only read the memory once (racy copy).

LW: In these two cases (which is most cases), you don’t have to make copies when calling a webAPI.

LW: That’s for values; interfaces and promises are not values -- a pointer to something that can change, like a DOM object, is a mutable thing. The value that we’re passing is a reference -- it’s the address of the object, not the object itself.

LW: In contrast, dictionaries sequences and unions, while in JS are objects, in wasm they are just compound values, which can be passed by their contents. No address / reference.

LW: Any questions about that?

(none)

LW: So, as far as prototypes go, let’s talk about what we’ve done.

JG: Making a JS polyfill to do runtime fixups -- a GL demo that renders two squares. The main goal of working on the polyfill is to prototype the design. Having a polyfill is useful because the tools can start shipping the bytes we actually want to use.

JG: Polyfill is nice -- tools can start shipping the actual bytes, they can run those bytes in the browser today, and in the future it can be quietly optimized. And that’s good to have.

JG: In theory having a standalone polyfill as, say, a node module makes it easy for arbitrary toolchains to just say oh, I can use that.

LW: Next, update on the Rust work.

NF: We have a tool called wasm-bindgen, it is designed to be future-compatible with webidl bindings, but written before we had it.

NF: What we’re doing now is trying to put the two together and make them the same thing again. A little library for manipulating the webidl bindings custom section and ongoing work to integrate into wasm-bindgen itself.

NF: Right now, no anyref, no webidl bindings. Non-standard bindings section.

NF: And then we run a post-processing step on that, and we can turn it into anyref, or we have a polyfill. And then we have some glue that will hook up the polyfill.

NF: Long term, we’d like to not have any post-processing. Once you run rustc, it’ll spit out the wasm directly..

NF: In the long term, webidl-bindgen will take input that has wasm with anyref and webidl bindings, and then create output that will strip out the webidl bindings and generate a static js file.

NF: But that’s going to be a while before that happens, so we want to have in-between step -- we take the first rustc and llvm, no bindings. We can convert it into the binary with webidl-bindings section, then we can have the optional polyfill for browsers that don’t have that.

LW: Lastly, and on the call, is Bill.

BB: I did an experiment in Chrome and V8 to see the potential perf gain of WebIDL bindings and compiled trampolines. Ken Russell made a benchmark to show the cost of bindings -- an opengl app called web anemometer. 20k triangles drawn via chatty calls to opengl.

BB: It was a good benchmark because it only made 20 different OpenGL calls, so it was easier to hack those 20 calls in Chrome. Also modified import wrappers in V8; that was all doable. The graphic compares import wrappers: the native host binding on one side, and the JS import wrapper -- about 5 times the code -- on the other; it does a few conversions..

BB: That’s not all of the binding code, it’s just the conversion code, JS has to do more than that. It’s worse than it looks there.

BB: So I was able to isolate the tight loop, about 20k calls. Hacked Chrome to return instantly when the C++ code in the browser that does the opengl was hit, avoiding rendering. Graph shows that webidl bindings can speed up this inner-loop microbenchmark by about 5x.

BB: It’s interesting to note that the benchmark itself didn’t improve by nearly as much; there’s some work in the browser that we still have to track down, to figure out why the full benchmark doesn’t speed up by more than 5%.

BB: Note, GL is a weird case because the JS glue code because the JS has to adapt the opengl calls to webgl so it does some work there.

LW: Additional uses identified: after figuring out this design with enough specificity to prototype, we found other features that needed it (WASI).

LW: One is answering some WASI questions -- let’s say WASI wanted to define the puts() syscall.

LW: The C signature of puts is that it takes a string, coming from linear memory, and returns nil. What should the WASI interface be? Obviously it just takes two i32’s (one param, one result), right?

LW: So many problems, if I implement this in the polyfill - how do we find the callee’s memory?

LW: What if it isn’t in linear memory, but instead comes from wasm-GC? Do I have to pass it with an array? We have a goal that GC shouldn’t be required for WASI, so that’s not a great solution.

LW: Which encoding should I use for these bytes? Mostly we just use UTF-8 but is that possible to require? Seems risky.

LW: What if my string is opaque, but compatible? I got it from the host, and now I want to hand it back.

LW: What if I want to run… mentioned this already. No great solution, until webidl bindings!
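
Without bindings, a host polyfill for `puts` has to reach into the caller's exported memory and decode the NUL-terminated bytes itself -- which is exactly the "how do we find the callee's memory" problem above. A sketch (`instanceMemory` stands in for a hypothetical handle to the caller's exported memory):

```javascript
// Hypothetical: the caller's exported linear memory.
const instanceMemory = new WebAssembly.Memory({ initial: 1 });
const heap = new Uint8Array(instanceMemory.buffer);

// Plant a NUL-terminated C string at offset 16 for the demo.
heap.set(new TextEncoder().encode("hello\0"), 16);

// A puts() polyfill: scan for the terminator, then decode the bytes.
function puts(ptr) {
  let end = ptr;
  while (heap[end] !== 0) end++;
  console.log(new TextDecoder().decode(heap.subarray(ptr, end)));
  return 0;
}

puts(16); // hello
```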

LW: WebIDL bindings -- should I just take a DOM string??? 🤯

SA: We are experimenting with something similar. DOMStrings don’t exist outside of the browser environment -- how do we make that work?

LW: Have a slide for you with spec refactoring

LI: Similar to writev, in Erlang we split these things up in the same way for writing quickly to sockets. For DOMStrings we would want a way to do this without having to combine it later on the other side. (LI, please check this)

LW: That’s a good point. Also I have more to say on that in a minute.

LW: So that’s one use case. WASI would want WebIDL bindings just for this simple use case.

LW: What are the different types of linking? Traditional static linking: compile them to .a’s, link them to get one .wasm, and then one instance with a memory and a table.

LW: In dynamic linking -- not fully defined, but say we have it -- from the same code base I get two wasms. At load time we link them together; each shares a memory and table, but there are two instances.

LW: So the last one, the term I want to introduce is “shared-nothing linking” -- I compile two .wasm files that define their own memory and table, separately compiled.

LW: With ESM integration, we are starting to load these things; people use packages. What should happen then? What kind of linkage should our toolchains produce?

LW: One option is that each package would contain the archive (.a file). But that would go poorly because the archive format is not stable -- it’s specific to a toolchain.

LW: I could also do dynamic linking, not a great option. A possible option, but not default. They still need to integrate tightly, to not clobber each other. In classic unix style you want many small processes, it’s nice to know that we don’t have issues where they clobber each other.

AR: Or malicious behavior in one of them.

LW: Right, with dynamic linking you have no isolation between modules.

LW: So I think we want shared-nothing as the default. Dynamic linking is an advanced use case, not the default.

LW: But that does raise the question of how to make inter-module calls. For scalar parameters, we have a good answer.

LW: How do non-scalars work? How do I pass anything more compound, like a string? One idea: standardize a slice reference into linear memory -- dynamic slice operands for load/store.

LW: One is that we have a reference to a slice, kind of like a typed array but pure wasm; this lets you load and store from somebody else’s memory.

LW: The pros of that approach would be that if everyone is really careful, you could have zero copies.

LW: The cons are that it is hard to do this in C++ -- each load and store from that view can’t go through a normal pointer. Hard to do in practice. If a pointer escapes into the wild, it needs to be a normal pointer; that’s what C expects.

LW: There’s also a lifetime issue. If I hand over a slice, and the callee holds on to it, does it have that right?

LW: This is a classic use-after-free bug scenario. We can solve it by making the slice revocable -- otherwise we have encapsulation issues.

LW: And that’s only for passing a subset of linear memory, but if we want to pass references we need some sort of table slice.

LW: Also, practically although we have both of these things, when you say “I’m going to take this as linear memory” or “I’m going to take this as a reference”, that biases you towards GC or not GC.

LW: Another one is to have a separate scratch space as a separate memory/table - pros you can have zero copy, cons you have to be careful on both the receiver and the sender side.

LW: The pros would be, if everyone is very careful, you can have zero copies. The cons are that this is even harder: you have to be very careful not to write over the shared memory, and you may end up with two copies.

LW: It requires very careful coordination where everyone has to agree on how to use the scratch memory.

LW: Memory that you put in the scratch memory, how long does it stay alive? Is it shared, does it have threads? So lots of lifetime issues.

LW: And you still have some of those lifetime issues. Interfaces need to be biased about linear memory vs. gc memory.

LW: I could instead go for first-class GC references. If everyone’s using GC, that’s zero-copy, and if GC is immutable I get shared values.

LW: Downside is that if you’re not a gc language, you need to allocate a gc array, and pass it through.

LW: Very much biased to GC languages and can’t work in languages that don’t have a GC.

LW: Lastly, JS glue-based calling conventions -- it’s MVP-compatible; we could probably make it work today. Cons are that we have to call through the JS glue, and it’s hard to establish a firm ABI so that the same module can work in different environments.

LW: But the big one is that we want this to work in node.js environments but also outside -- we don’t always have JS.

LW: Solution: WebIDL bindings. If you have two modules, one of them will export a function using bindings, and the other one will import it using bindings. The binding layer can simply take one linear memory and splat it into another without any trampoline code.

LW: How to handle allocations is complicated but possible. But this wasm->wasm behavior just falls out of the proposal -- we have to handle it anyway.

LW: It’s polyfillable…

LW: Also this approach allows you to fully encapsulate your memory and table -- you never have to export them as only the host sees them.

LW: It’s compatible in so far as it’s already compatible with linear memory and the current GC proposal

LW: As shown earlier, it’s the simplest ABI -- you just use these high-level types in your signature.
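As a rough illustration of the "simplest ABI" point, an import could be declared directly against high-level binding types rather than raw pointer/length pairs. This is hypothetical syntax only -- the proposal's types and text format were still in flux at this meeting, and `string` is a binding-layer type, not a core wasm type:

```wat
;; Hypothetical sketch only -- not standardized syntax.
;; The function is imported against high-level binding types instead of
;; (i32 ptr, i32 len) pairs; the binding layer maps these to and from
;; each side's linear memory.
(module
  (func $greet (import "api" "greet")
    (param string)      ;; bound via e.g. utf-8 bytes + allocator, per the slides
    (result string)))
```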

LW: There may be one copy when working between two linear memory instances.

LW: So this seems like a reasonable default. People who want dynamic linking between packages can still do that more complex linking, but as a default this seems like a reasonable option.

LW: The other solutions you can think of as maybe defaulting to pipes, but when you have shared memory, you would like to use that.

TL: For the full encapsulation of memory and table, that’s only possible if you’re not using the polyfill right?

LW: Good point. Even if you have native support, the wasm module has to export its memory to the binding layer.

JG: Either the host can strip that out, or the polyfill can … it can own its memory.

KM: One possible downside for having multiple memories for one dll, or whatever it is linking, you have to rely on virtual memory to actually fault, and that may not always work. Some platforms limit the amount of big slabs of memory that you can mmap (e.g. 64 GB on iOS).

LW: I think it’s sort of a fundamental tradeoff between robustness and isolation on one side, and memory usage on the other. It’s nice to have everything share when possible; in practice, you’ll run out of the fast memories and there will be a performance hit. But at ecosystem scale, it’s a good default.

JG: You can also use the presence of the custom section to say you don’t want to use the fast memory, to say that you don’t want to use this on iOS...

DS: I can also imagine, an offline linking where you can take … two modules that are bound with webidl bindings and put them in the same module space.

JG: WebPack would do this, essentially. In that sense WebPack becomes the “host” which provides the sandbox.

KM: If that’s happening rarely enough, then enough people would not see this and would have bad performance on iPhones.

LW: It’s not just iPhones; lots of OSes have memory limitations, and there are also 32-bit platforms.

KM: 32-bit just has that fundamental problem, there’s no way around it.

LW: In the right future, we may get some benefits from the chips, where we can get bounds checking.

??: Isn’t WebIDL not the right name any more?

LW: That’s coming very soon...

DE: A response to part of this -- DOMString, that’s very web-specific, right? But no, it’s just a bad name for a general facility.

DE: If this is the blocking factor, we can make changes to the syntax, to accommodate more use cases

AR: I would say there are plenty of other reasons not to use WebIDL, but we can talk later.

DE: Interested to hear them

JG: We have a non-web use case for the IDL, but we don’t want to use WebIDL.

[Star topology slide]

LW: So if we take all these use cases together, we end up with a star topology with webidl values in the center, with spokes that consume and produce these values. For Web APIs and WASI, the “spokes” of the diagram are the identity function.

LW: These things are programmable arrows with binding expressions.

LW: So when you want a wasm calling webapi, then you route through web IDL values.

LW: Similarly if we’re going to and from JS, we just traverse the graph to compose the transformations “to WebIDL value” and “from WebIDL value”.
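A toy model of the star composition LW describes: each "spoke" defines conversions to and from the central (WebIDL) value, and any pair of spokes composes through the center. This is purely illustrative -- the names and the string-only conversions are assumptions, not part of the proposal:

```python
# Toy model of the star topology: each "spoke" converts to and from the
# central value; a cross-spoke call composes "to center" with "from center".
# All names here are illustrative, not from the proposal.

spokes = {
    # JS strings pass through as-is.
    "js":   {"to_center": lambda v: v, "from_center": lambda v: v},
    # A wasm module's strings live in linear memory as utf-8 bytes.
    "wasm": {"to_center": lambda v: v.decode("utf-8"),
             "from_center": lambda v: v.encode("utf-8")},
}

def convert(value, src, dst):
    center = spokes[src]["to_center"](value)   # spoke -> center
    return spokes[dst]["from_center"](center)  # center -> spoke

print(convert(b"hi", "wasm", "js"))  # prints "hi"
```

The point of the shape is that adding a new spoke costs two conversions, not one per existing spoke.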

LW: Note: we’re not trying to solve the interlanguage interop problem, this is not fundamentally different than unix pipes.

LW: Even in unix pipes, you can send values, or references which are file descriptors.

LW: Note: nowhere are we specifying a binary serialization format. You could add another spoke, but we’re not saying that here. There are just abstract values. If you were adding another spoke, you’d just have to define how it maps to webIDL.

LW: Question that comes up, is the center node really WebIDL?

LW: We started with WebIDL and then removed a few things. We want to express things in terms of type and function imports.

LW: There are certain operations that take a type import as a procedure … where taking these core wasm types and propagating them through, so type info can be passed through too.

LW: We’re also, for our uses in wasm, we’re eliminating the “baggage” of the ES bindings for WebIDL that are only needed when going to/from JS.

LW: A lot of the types we want to use require dynamic type checks, like range checks, and checks that they don’t contain unpleasant NaN values.

LW: That might not be what we put in the center, maybe just when we call a web API.

LW: Similarly, we then added some stuff. Core wasm reference types. We were specifying the binary encoding of this into a custom section. It’s a future prediction of where the proposal can go.

LW: If they’re not familiar with the web and how the web is kind of mutable, then it may be kind of confusing.

LW: So, a proposed spec refactoring to make this clear. Core wasm spec, wrapped by wasm binding spec. The minute-taker would like to note that the new name for this proposal is the snowperson emoji.

LW: We start with a core wasm spec that’s unchanged, and then we have a wasm bindings spec that defines binding values, symmetric to the current WebIDL bindings explainer.

LW: The webidl spec could add a wasm bindings section that is symmetric to the es bindings section, so there’s a trivial no-op conversion between the different parts. There just needs to be extra checks.

LW: So, it would not be a no-op section, it would say stuff that’s not in the wasm snowman bindings spec.

LW: In the JS API spec -- from the web perspective this is what ties it together. It reads the imports, instantiates a module, and asks: is it a WebIDL binding?

LW: Is it a webidl function, if so then take that and produce something lower down at the wasm level.

LW: Then what’s nice is that WASI could use the wasm snowman bindings spec directly.

CA: Name it host binding?

LW: The other side might not be the host, it might be another wasm module.

DE: The WebIDL spec defines the core data types -- e.g. a dictionary type, independent of the corresponding ES bindings spec. Are you proposing to move this to the wasm spec?

LW: I think it would make sense to start with the latter and move to the former.

DE: Hesitant about having multiple specifications that implicitly specify each other. We should figure out how to give this a single definition, not multiple definitions that reference each other.

LW: It’s not just… Even after a maximal WebIDL refactor, we can avoid… [furious gesturing towards the slides].

DE: factoring that out seems … having multiple documents for the same thing where they don’t reference each other concerns me, because they could get out of sync.

AR: But one would reference the other still, right?

LW: It could still be..

DE: If we move the parts of ES into webidl spec then that’s not good.

AR: You can also have multiple specs describing the same thing and define mappings between them.

LW: You can have explicit isomorphism between the both..

TS: Would it make sense to look into factoring out a shared core into another thing that the snowman bindings, WASI bindings, and WebIDL bindings can all share?

LW: This is a spec engineering discussion that we could perhaps hash out in a smaller group.

AR: The core observation is that 90% of the WebIDL spec is not relevant here...

LW: I wouldn’t say it’s opaque...

DE: That’s the part that I’m not convinced about, and I would like to register my hesitation here

FM: So there’s another issue in that the shared-nothing binding, it may bring in requirements that aren’t in the remit of WebIDL itself.

DE: For example?

FM: WebIDL is limited, it doesn’t have nominal types, doesn’t have generics, many things you’d expect in a proper linking format.

JG: At the high level, if the two specs meaningfully diverge, that means they’re solving different problems, and that’s a feature rather than a bug.

DS: If you have… I think what you’re saying is that the core snowman IDL spec is not expressive enough to express every concept that two languages would use. If you have two modules in the same language, they want to interop across shared nothing binding layer. Nothing stops the language coming up with an ABI.

FM: That wasn’t what I was saying. A simple example: suppose you have a credit card processing API. This API does not belong to the web; the people who design the web don’t have to design this API. If you are defining access to the credit card API, you are setting up requirements that are not exactly required.

DE: Right. I think WebIDL does have nominal types but the bigger point is that WebIDL is designed for the kind of interface that we want to expose idiomatically to the web and that’s a strength. Would like to avoid a world in which we have duplicate specifications for similar things.

AK: to be clear, not a worry about making changes to webidl. It’s more about where things belong, it’s an organizational issue.

[Luke agrees]

LW: What would this look like if it’s own thing? Pull out all the core wasm reference types including type imports.

LW: We also need signedness, i.e. splitting i32 and u32. We have wasm reftypes, numtypes, strings, bytes, packed arrays, sequences, dictionaries, unions, enumerations, callbacks.

SA: Have you thought about exceptions… maybe?

LW: You could import the exception tag for the thrown exception, but it’s pretty much orthogonal.

AR: In general you need a language to describe module signatures. So I think at least in… you want to describe everything a module can import and export, but other things too. Types too. It’s another way it’s different from webidl bindings.

LW: Yes I think outside the context of the web where there’s not the upper level of WebIDL, yes, you could see it that way.

LW: This is similar to GC typedefs, the only values would be references to that…

LW: So it would be really cool if gc typedefs could be used instead of using these record/variant/etc types.

LW: But there is this subtle difference: one of them wants labels, and that hasn’t been decided. Either one would work, I think.

LW: Just an example, how you might consume these (see example in slides)

LW: In the simplest case, you just have a reference to something, it maps directly to wasm types.

LW: If I have a signed integer type there’s only one way to go through it, and because you have signedness there’s only one way to go from it.

LW: An interesting one is string. You can imagine creating a string from two i32s; but to consume a string, what you’d like is to reference two allocator functions, one to allocate and one to free. Not the only way to do it, but the host should be able to allocate into the callee’s memory.

LW: That’s a way for the caller to say I have my linear memory, and just one copy to the other one’s linear memory.

LW: Another way to produce the same type: if I have a reference to an array of u8s, and these are read-only on both sides -- I’m passing it as read-only -- then no copy has to happen in that case.

LW: Lastly if I have some opaque reference to a host type, and that host type is a string, that can pass through directly.

LW: Three different ways to take and produce a string, and you can mix and match them.

LW: For a record I can do the same, ref struct or string+string.

LW: Note that the fields themselves are strings, so you might need to bind them also using the webidl bindings API.

LW: Similarly to consume them, I project out the fields and I can decide what to do with them

JT: Looking forward toward future use cases that want more performance, how would you extend this for no copies?

JG: If you know that the callee has linear memory, you probably don’t want shared-nothing linking; this is for the case where you don’t know that you have linear memory.

JT: You don’t know what each other have, I’m arguing for when the caller and callee are compatible with not making a copy.

LW: I share the ideal that it would be nice to opportunistically allow memory sharing -- but I don’t think it’s possible in a generic way. Needs toolchain and convention support. I think in that case you need to opt in to a different linking model -- here we need to define reasonable defaults.

LI: At the embedder level, you could do copy-on-write protection, so it doesn’t actually act like a copy until it needs to...

LW: For very large copies that may work

DS: Luke was getting … general case is, in full generality it’s hard to opportunistically do this. But within the context of a certain embedding, maybe it’s in one process, or maybe there’s shared memory. The other is -- in the case like the web, when you have bundlers, you can do offline translation, you start with shared nothing binding, then that describes the interface, you can lower that to just reading and writing, lower away the copies, because then you have information about all the modules.

JT: Effectively the static linking model.

DS: Right, you’re reducing the fully untrusted shared-nothing model into something faster.

JG: Yes, the main idea is that you expose this data roughly declaratively and that many things become possible to tools and compilers.

JG: Now that you have an idea of how this should work, it’s the same as passing a string out to DOM.getElementById -- that’s a thing that we’ll want to solve in one or two cases, but this decouples the problems. It frees the browsers from having to optimize everything.

LW: Great discussion. That was just a sampling, but there are other interesting cases like sequences.

LI: Since the webidl values are mutable by default -- does that mean that the embedder should detect that we don’t need to go through the copy loop?

LW: In that case, the value is the reference, so it’s the address of the object you want to pass around.

LI: If you’re passing a record and it has a loop back to you, should it know not to do a copy?

LW: Can you repeat the use case?

LI: So you have an entire struct copy, should the embedder detect that you will be called back with the same value, should it not do a copy and such?

LW: I think you’re in the context where you call an API with a record value which it then passes back to the caller at some later time.

LW: In that case it would have to make copies, unless it has some knowledge that whatever you are referencing doesn’t change before it calls you back.

LI: It will have to have the swift-style closures to say that you’ll be called immediately, or whatever.

LW: The good thing is that at instantiation time, you see the caller and the callee, so you can generate an appropriate optimized trampoline.

LW: [New star topology slide] The thing in the middle is snowman binding types. The binding for WASI is the identity function.

LW: WebIDL spec would have a wasm binding, that can specify whatever checks, whether they are range checks or checking for illegal floats

LW: And JS API would specify how to convert from JS to snowman and back.

LW: And we’re still not attempting to solve the general inter-language interoperability problem -- we’re just copying values.

AK: Can I ask about the JS arrow … it seems surprising that that’s in the JS API spec; it has to duplicate what’s in WebIDL.

LW: Yes that would be a thing we’d want to factor and share.

AK: You’re saying it’s the same thing as in WebIDL -- everything the same, but specified in terms of JS.

LW: Yes, if we didn’t share with JS, we’d have to re-specify and do so symmetrically...

DE: I would expect this to match the fallback when the types don’t match in the webIDL case

JG: You can also add a path from JS to the WebIDL bindings node…

LW: But that would kill the nice star diagram [polite chuckles].

FM: Most of these have already been addressed, in fact. This is not actually like CORBA IDL -- no serialization format -- and it’s also different from capnproto and protobuf. No other IDL is structured this way; it’s kind of unique.

FM: The bindings expressions are programmable, but they are declarative - so it’s up to the embedder to figure out what this means. The set of binding operators could also expand without affecting the core specification.

FM: We’ve already talked about copying… I think it’s important to note that we’re not anticipating magic here; if you can avoid copying, it’s because it’s obvious that you can.

FM: Apart from things like sharing, we don’t anticipate any interaction with threading.

FM: Of course, we are trying to be fast here....

FM: Finally, the shared-nothing linking is a really important use case because it supports the ecosystem -- it supports the way people use npm right now.

FM: So, if you have your own API then you are going to be encouraged to use this. Hopefully this anticipates a lot of the questions to follow.

AK: Want to dig in on … going back to TPAC and before, how this interacts with Web APIs. The last diagram makes everything but Web APIs clearer to me -- how does optimization work in a way that’s transparent, now that a call has to punch through the center of the star, where it has to go through everything?

LW: I think there’s going to be a semantically-defined predicate -- for a given signature, is a call going to be compatible?

LW: If it says yes, then we have a mapping -- then it describes how to map values between the ends of the star topology.

LW: Then the fallback case, that we need for a variety of reasons - if it says no, we need to call web APIs not directly but rather we create JS objects and call the web API JS binding.

AK: We always translate to the binding values, then maybe there’s another step, but otherwise it can go through directly.

JG: That shouldn’t be too different from the WebIDL case anyway -- that is, if the signatures don’t match you do want to fall back to JavaScript anyway. It’s orthogonal, but follows the same pattern.

FM: There’s something else though. If you think of a C application calling OpenGL, it goes through WebGL; that’s inevitable, this won’t solve that. The step between WebGL -> OpenGL can’t go away.

WVO: The entire point is that the engine can recognize a specific situation and optimize it away.

AK: For webgl case it can’t do that...

LW: We don’t have a way to punch through from here to the underlying opengl, barring a real “sufficiently smart compiler” (SSC).

AK: That’s a different… that’s someone else’s camp, however we expose webgl to the web. Given that webgl objects are used to manage objects, there’s different ways to do these things.

JG: I think in the demo that Bill mentioned and in the case Francis mentioned, that’s C code talking to OpenGL, so we have C code converting from OpenGL to WebGL...

SA: Have you thought about shared type definitions between modules?

FM: Yes. No resolution to this at the moment.

LW: Multiple modules can export the same type, exported from the same host… how to factor it out. It’s an optimization problem, not much to do there.

TM: Not a question, but a voice of support. From a non-web embedding, what you have designed looks very similar to what we have designed in Lucet, so we should join forces

AR: Same for me, in terms of the IDL. We’re designing an IDL for interop, totally in favor. However, I am more skeptical about binding expressions -- three concerns. They all start with Comp: complexity, completeness, composability.

AR: Complexity -- one thing to note here, you always pair up two things, the number of cases grows quadratically with the number of representations supported, grows in terms of implementation complexity and spec complexity.

AR: That ties in with completeness. How do we know how many of these mappings are necessary? So many representation strategies out there -- when do we stop, how do we avoid bias towards a few privileged lang / impl strategies?

AR: That is already a problem for primitive types like strings -- the proposal assumes it’s a declarative description, you need to say where something is, but some implementations may need to call back into user code.

AR: At that point this doesn’t support it, so what you actually have to do is serialization. My prediction is that most languages will have to do some form of serialization to be able to use this.

AR: That ties in to the third concern, for vectors and records. If you imagine strings -- one word is the pointer, one is the size -- and you want to transfer a vector of strings, maybe this is C where they’re zero-terminated, so you may not have sizes, and you need a vector of sizes too. So you need to create that as a temporary.

AR: And I think you can come up with single cases and solutions for all of these issues, but it has a tendency to grow into a gigantic zoo. I am skeptical about scaling.

SA: Maybe as a solution for that, instead of having specific types like a string, we define abstract strings, and then we defer to the compiler to figure out what type it is

AR: Right but then you’re basically back to something akin to serialization.

JG: This gets down to -- what are we promising is going to be fast, if we say that everything has to go from one edge to the other, then it’s n squared. You may need serialization then, if there’s not a useful case, then don’t implement it.

JT: Seems like, having this framework, you can find the cases that come up in real life, invest in optimizations if it’s worth it (in engines that want to do those optimizations) -- otherwise, you can always fall back on something slow, that’s fine.

AR: I agree you can do that, but this is against the spirit of wasm which is predictable performance.

TS: One argument: there are some cases that it makes sense to support -- for example the C++ API. Another case you are probably familiar with is the way strings are stored in JS engines; I don’t think there’s a way to save you from doing copies there. I would be surprised if there was a way to do this, and that’s what you are suggesting here..

FM: Yes, also I would like to say (with shared nothing linking) that there’s an implied ownership boundary that governs expectations about what can be expressed across that boundary. You can’t transfer ownership of memory easily -- expectation is a hands-off implementation of an API rather than linking different implementations into a module.

LW: These are valid sources of skepticism. We will be experimenting heavily before standardizing anything, and we have a very polyfillable solution, so that makes it easy to experiment.

JG: I also want to mention -- this is an 80/20 solution, it doesn’t make everything magically perfect, but it makes state of affairs nicer. If not, we should fix. This is more useful to standardize on than nothing. Otherwise we will choose JS or C FFI.

LW: So! A poll.

POLL: Guidance from CG to do more work to refactor the web IDL bindings spec in the manner proposed (with future poll to actually accept refactoring).

Conclusion

Resolution accepted unanimously.

15:55 “10 min” break

Discussion: process document update for clarity (what are the requirements for each phase, what does english spec mean, etc.) (brought up earlier today)

16:20

BS: [presenting process document]

BS: We were talking about changes to the phases document, particularly with regard to what "english prose text", what does phase 2 mean, what does phase 3 mean.

BS: Perhaps we can just go through the phases [scrolls through phase descriptions].

BS: Phase 1 [reads description].

TL: Surprising that design markdown files are optional. Really appreciates markdown design docs.

BS: Historically we’ve been doing that for all proposals and maybe we should be including that into the requirements formally.

KM: Suggestion to vote only on items without unanimity.

BT: I think we should collect a set of things we'd like to change and make that an agenda item for a future meeting, since this is so foundational.

JT: And because not everyone might be here.

AR: Note that all people that work on proposals are probably in this room…

BT: Still the short notice doesn't seem appropriate for changes to this document.

[general agreement to follow BT/JT's suggestion]

BS: Phase 2 [reads description].

BS: There is a bit of disagreement about the need for english spec text (as opposed to design documents). Do we want to make a change?

DG: What is the spirit of the stage, where is a stage supposed to be? Has it reached the necessary consensus to spend time on spec text?

BS: Just to read the rest of the item…[reads the rest of the definition]

BS: Purpose of spec text is that an implementation can use it to implement. Reasonable that if a markdown doc is detailed enough, it’s sufficient. What do people think, do we relax requirements?

BT: What is the difference between Phase 1 and Phase 2 if we drop the requirement of "english spec text"?

AR: Difference with phase 1 and 2 is really nothing then, except expectations.

BS: The expectation is that a design doc is more detailed in phase 2.

JT: It sounds like there are two potential changes people are bouncing back and forth. One is does it need to be a delta to the spec, or a separate doc. The other is does it need to be spec-style text versus a design document.

AR: I think you don’t want to write down all the spec prose, as it’s super-tedious. Actually in practice it’s easiest to write after the formal notation, as it’s an easier and more mechanical transformation.

JT: Agreed, just pointing out the difference.

AK: AR, do you think this is a problem in the process or a problem in the documentation?

AR: Fine with changing doc to match practice. But need to make sure to insert the correct new requirement somewhere into the phase process.

BS: Any thoughts from implementors? In my personal opinion, high-level spec has been enough.

KM: I disagree, FWIW. The explicit nature of spec text / notation is really valuable in resolving ambiguities.

TL: It sounds like that was for all of Wasm MVP? What about smaller proposals?

KM: Problem is mostly in the JS bindings, as there’s a lot of potential variability (e.g. order of observable operations). Especially important to make spec text for JS API.

DE: I thought phase 2 was for prototyping, and phase 3 was serious implementation. Is that right?

KM: Fair, but I just wanted to say that it was hard to understand the “spec” for the JS API.

DE: Historically the JS API was a special case, which we won't repeat again.

BS: Yes perhaps my statement was too broad, spec text can be valuable; just meant that in many cases it might not be necessary at this stage.

DG: Given that implementation at this phase may cause significant changes to the design, it seems good to leave room for prototyping in this phase. Having spec text makes sense to me as a phase 3 requirement once the design has settled.

BS: Sounds like we mostly have agreement for that modification: phase 2 can be entered with just a detailed design document and the spec text requirement can be punted to phase 3.

[general agreement]

BS: Phase 3. [reads description].

TL: Aren’t we making the formal spec text an entry requirement for phase 3, and that the formal notation is just phase 4, right?

AR: Yes, off by one, sorry.

M2: Regarding implementations and testing -- if browsers have already started prototypes and implementations in this phase, it would be useful for these tests to be submitted as web platform tests. Should these tests be ready at the point of moving to phase 3?

DE: When there’s requirements that there be tests in some form, we should be flexible about whether they are platform tests or not. E.g. there could be implementation-specific tests that are on track to moving to more standard tests (e.g. web platform tests).

DG: Is that a change? Isn't that already the case, that we only require testing against implementations at phase 2?

BS: Phase 4. [reads description]

AR: Confused about needing chair approval to do a step earlier than necessary.

BS: I don’t know what the purpose is.

SC: Perhaps the concern is about merging the work into the main repo.

BS: [continues reading]

KM: Point of working group polls were to be a kind of stopgap / safety mechanism between WG and CG.

BS: Weird though as I am chair of both.

DS: Working group sometimes has different priorities which could only come to light when it takes responsibility for a proposal.

BS: Or there are people who aren't comfortable talking about something in the CG but are in the WG. What about the "Web VMs" requirement? I know there are "non-Web" VMs in the room, do we want to make some change here? Not sure how we would do that.

AR: We should just say “production VM” and leave it vague.

TL: For comparison, toolchain implementations aren’t qualified at all. It’s just listed as “toolchain”.

JT: We might just want to make sure that the definitions are stable and not subject to change as the CG changes.

FM: Does one of the VMs have to be a browser? Or do both?

DG: Similar situation with regards to WASI, explicitly targeting non-web use cases. Committee will decide for requirements on VMs on feature-by-feature basis.

KM: Reason for two VMs originally was to reflect a web-wide consensus.

DE: I want to second KM's point. If we specifically want to enfranchise non-Web implementations, rather than weakening this, add a requirement for additional non-Web implementation(s).

AR: Repeats comment that we don’t really need to overspecify; consensus process is already adequately expressed by voting. If the future is weird and there are 500 non-web VMs we should change process anyway.

[Microphone hijinks ensue]

KM: Only concern is that because voting is based on in-room quorum, it would be unfortunate if something goes through just because of meeting attendance...

AR: But if there's that power imbalance, then they can change the process document anyway.

BT: I think that "production VM" gets at the idea across well enough.

SA: One key is just to set expectations. Non-browser VMs should have appropriate expectations on what the requirements are to reach the next phase.

KM: Assuming that this committee is OK with having things that go to stage 4 with proposals that will never have one more in-browser impl….

JT: "Production VM" is something that can reasonably judged by the room, but it's not the only criterion on this list. The process also requires the CG to judge whether the proposal is ready, and the room can discuss that as part of the consensus process. That said, what about saying "two independent production VMs"?

DE: Perhaps misses the point. Maintaining correspondence among browsers has been difficult and process helps. Weakening this link would probably weaken correspondence.

AR: I think this is missing the point of this criterion. The point is to prove that it can be implemented properly. Once that's proven, and it is voted in, it becomes standard, and everyone should implement it.

DG: We appear to be trying to solve two different problems. One is the standard, the other is the web standard. Are there different processes? Point of the web standard is for the browsers to all implement it, and that’s not the purpose for the other VMs.

JT: The point of standalone implementations is to implement the same standard that exists on the web.

KM: If we go down this path, it could be that the wasm standard diverges from wasm-on-the-web. Part of the reason for wasm popularity is the consistent web deployability.

JT: Consistent web deployability is not the purpose of this requirement, it’s just about measuring the implementability of the proposal.

KM: Begs to differ. Take as an example, proper tail calls -- part of ES201x standard but only implemented by one browser and no path to adoption, even if another non-web VM implements it.

BS: The point seems worth addressing, let’s continue to discuss but perhaps later due to time constraints.

BT: IMHO wasm will have optional features, and the web wasm won’t necessarily have all the features, and that broadening the language could be valuable.

AK: Let’s really punt this discussion plz.

BS: OK!

BS: [reads phase 5]

AR: This is the point where browsers could still shoot down something they don't like.

KM: Note, W3C consensus is actually a majority vote.

AR: In wasm WG we try to apply actual consensus.

BS: We don’t tend to apply strict W3C process.

[There was a readout and consensus-check on the “Conclusion” section below at this point.]

AR: Additional note. How does this process apply to proposals that aren’t core VM features, e.g. C API or text representation changes. Not clear, should discuss at some point.

BS: Good point, let’s take up at a future meeting.

Conclusion

Set of proposed changes for a future CG meeting agenda item: For phase 1 entry, make high-level description in markdown file(s) required. For phase 2 entry, make design documentation (in markdown file(s)) required, and add “spec text” to the “NOT yet required” item. For phase 3 entry, explicitly require “full spec text”. Add some prose to phase 2/phase 3 about what "test suite" means, e.g. to make Web Platform Tests one possibility. For phase 4, drop the parenthetical about formalization and reference interpreter being doable early at the WG chair’s discretion, as it goes without saying that any step can always be done early.

Separate proposed change for future discussion: For phase 4, change “Two or more Web VMs implement the feature.” to “Two or more production WebAssembly VMs implement the feature.”. (“production” is a property to be judged by group consensus.)

Adjourn

17:05

Day 2

Opening, welcome and roll call

  • Adam Foltzer, Fastly
  • Adam Klein, Google
  • Alex Beregszaszi, Ewasm/Ethereum Foundation
  • Andreas Rossberg, Dfinity
  • Andy Wingo, Igalia
  • Arun Purushan, Intel
  • Ben Smith, Google
  • Ben Titzer, Google
  • Conrad Watt, Cambridge University
  • Dan Gohman, Mozilla
  • Daniel Ehrenberg, Igalia
  • Deepti Gandluri, Google
  • Derek Schuff, Google
  • Francis McCabe, Google [remote]
  • Heejin Ahn, Google
  • Jacob Gravelle, Google
  • Jake Lang, Ewasm/Ethereum Foundation
  • Johann Schleier-Smith, UC Berkeley
  • Johnnie L Birch Jr, Intel
  • Josh Triplett, Intel, Rust
  • Keith Miller, Apple
  • Kevin Cheung, Autodesk
  • Lars T Hansen, Mozilla
  • Lilit Darbinyan, Bloomberg
  • Lin Clark, Mozilla
  • Luke Imhoff, DockYard, Erlang VM
  • Luke Wagner, Mozilla/asm.js
  • Mark McCaskey, Wasmer
  • Michael Starzinger, Google
  • Ms2ger, Igalia
  • Nathaniel McCallum, Red Hat
  • Nick Fitzgerald, Mozilla
  • Pat Hickey, Fastly
  • Paul Dworzanski, Ewasm/Ethereum Foundation
  • Ryan Levick, Microsoft
  • Sam Clegg, Google
  • Sergey Rubanov, fintech/blockchain
  • Stefan Junker, Red Hat
  • Syrus Akbary, Wasmer
  • Till Schneidereit, Mozilla
  • Thomas Lively, Google
  • Tyler McMullen, Fastly
  • Wouter van Oortmerssen, Google
  • Yuri Iozzelli, Leaning Technologies

Opening of the meeting

Introduction of attendees

Host facilities, local logistics, code of conduct

Find volunteers for note taking

Lars falls on this particular sword.

Adoption of the agenda

Proposals and discussions

Bulk Memory (½ hr)

Slides

Ben Smith presenting -- see slides.

Bulk memory is in V8, SpiderMonkey, JSC.

Most things in place, Phase 4 imminent.

Table.fill is in the reftypes proposal because it needs an init value; awkward and maybe we could have sliced proposals differently.

Passive element segment encoding

Discussion of encoding of passive and active-with-index element segments. Current encoding of passive segments allows null references to be encoded, but encoding is expensive (3 bytes) and can really only specify functions.

TL: Can we add more bits to allow for more compact representation? [Yes, coming]

Discussion of declaring segments for ref.func, notably.

Many combinations, but not all combinations are useful.

AR: "Declared" is sort of redundant; being in an element segment could be enough to declare the entity as reffable.

TL: Tools probably will not use all of these.

AR: Not convinced about that, since some encodings are just compressions of some of the general cases. Really there are fewer cases.

JG: Any given toolchain will probably care about only a subset, but different toolchains about different subsets.

BS: Some proposals [see slides for details]

AR: Are there really any issues with the existing encoding approach? Just more bits. Plus it's useful to mirror data segments in bit encoding.

TL: It just moves complexity around? Ben's proposal makes sense to me but may not matter.

BS: We can push some of the detail work to a PR maybe

LW: I'm happy with this

BS: Moving on to the copy direction issue ...

Possibly simplify runtime conditions for copy direction (discussion) and for OOB checks

AR presenting

Slides

Edge cases for bulk data instructions. Two proposals for simplification related to bounds checking and overlap checking. Independently useful and perf benefits / simplicity.

Reverse copy condition is somewhat complex but could be simpler though it will trigger in some non-overlap cases.

LH: Not convinced that that’s true, you can’t observe the order of the writes, they will eventually hit the memory.

AR: The weak memory model doesn’t tell you anything about it.

LH: Spec says you have to copy in a certain order.

AR: But weak memory, so it doesn’t tell you much. You’d need to do atomic writes to observe an order.

LH: The order the writes are written to memory is observable. The writes have to happen even if the order is non-deterministic. Maybe I misunderstood -- are you saying it is too strong?

AR: The only way you can observe something is if you trap in the middle, otherwise weak behaviour applies.

LH: OK.

JS: If your data is out of cache, then you may see this.

CW: It’s true that in practice, any real implementation will fill up the reordering buffer. But the arch spec, nothing says you’ll be able to observe any order.

AR: You cannot rely on it, no guarantee, so you can’t write program that depends on it.

AR: libcs use both strategies for copy direction. Unclear if there's a perf difference.

BS/LH/KM: Unlikely that there's any real difference.

NM: What about load/store multiple on ARM?

LH: on 32-bit ARM, there are load/store multiple in both directions. (LDMIA vs LDMDA, eg)
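The overlap rule under discussion can be sketched as a memmove-style direction choice. This is illustrative Python, not spec text or an actual engine implementation; the helper name is hypothetical.

```python
def memory_copy(mem, dst, src, n):
    """Copy n bytes within mem, choosing a direction that is safe for overlap."""
    if src < dst and src + n > dst:
        # Destructive overlap: a forward copy would clobber unread source
        # bytes, so copy backwards instead.
        for i in reversed(range(n)):
            mem[dst + i] = mem[src + i]
    else:
        # Forward copy is safe (no overlap, or dst <= src).
        for i in range(n):
            mem[dst + i] = mem[src + i]

buf = list(range(10))
memory_copy(buf, 2, 0, 5)  # overlapping copy: [0, 1, 0, 1, 2, 3, 4, 7, 8, 9]
```

The point raised above is that a condition like `src < dst and src + n > dst` also fires in some non-overlapping cases if simplified, which is acceptable because the copy direction is not observable in a weak memory model.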

BS/AR: Discussion about OOB handling and code generation

AR: Poll?

LH: No real objections to making the change.

POLL:

| SF | F | N | A | SA |
| --- | --- | --- | --- | --- |
| 2 | 15 | 16 | 0 | 0 |

change passes

(Meta-tangent about how to vote and vote biases.)

AR: Onto OOB conditions [see slides]

Dynamic cost for zero-length OOB accesses: if the length is zero we must still check src+len so that we can trap if that's OOB.
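The check being discussed can be sketched as follows (hypothetical helper names, not spec text): even when the length is zero, the addresses are still validated against the memory size, so a zero-length access can trap.

```python
class TrapError(Exception):
    """Stands in for a wasm trap in this sketch."""

def check_copy_bounds(mem_size, dst, src, n):
    """Bounds check for a copy of n bytes: dst+n and src+n must stay in
    bounds, even when n == 0."""
    if dst + n > mem_size or src + n > mem_size:
        raise TrapError("out of bounds memory access")

check_copy_bounds(100, 100, 0, 0)      # zero-length at the boundary: no trap
try:
    check_copy_bounds(100, 101, 0, 0)  # zero-length but past the end: traps
    trapped = False
except TrapError:
    trapped = True
```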

BS: technically a breaking change from MVP if bulk memory changes the existing active segment initialization

AR: I will continue to argue that changing something from failing to not failing is not a breaking change

BS: Unlikely to matter in this case

AR: simplifying the OOB semantics for length 0 will also benefit the spec since it removes the difference between shared and nonshared memories.

LH: If I implement memory.copy using memcpy, I don’t control which direction it copies. In this case I would get an exception if I copy from high-bytes and low-bytes… (LH retracts comment but is uncertain about effect.)

JT: Your low-bytes and high-bytes are in the same location if your length is zero.

POLL:

| SA | A | N | F | SF |
| --- | --- | --- | --- | --- |
| 0 | 0 | 15 | 12 | 7 |

change passes

Multi Value (update) (¼-½ hr)

AR: Spec is complete, nothing has changed, V8 has had this for 18 months, none of the others are done yet

AR: We will materialize multiple values as arrays in JS

BS: I believe some non-web vms have implemented this…

[widespread tongue-in-cheek “oooh”ing]

Threads (1 hr)

CW presenting

Slides

CW: We have a memory model, paper under submission to OOPSLA. Wanted to get academics interested in verifying it.

Paper draft

CW: Discovered bugs in JS model. Proposed fixes to TC39, accepted.

CW: Started putting stuff in threads repo

CW continuing to present

CW: Problem of the day: fences. Currently all atomics are sequentially consistent. Adding release/acquire later will require changes to compilation scheme

AR: Technically, will be a breaking change to the memory model

TL: We already have a similar breaking change in the toolchain: when we compile C++ code that uses atomics at the source level without threads, we strip the atomics down to normal loads/stores. We set a flag in the metadata saying atomics are stripped out -- don’t link with anything else.

CW: Not just model, you can’t use them together.

LW: Shared memory is not compatible, it can’t be interpreted in a shared way. So this isn’t a problem. But your case would be.

[back to slides]

LW: Why was compare_exchange_strong added at all?

CW: It’s not required at all. It’s technically correct already. There’s no correct thing to put here currently. This is the choice that’s made currently.

[back to slides]

HA: Currently on the emscripten target we do the conversion; on other targets we don’t do this -- we just crash in the LLVM backend. The reason we chose this was that we wanted seq cst: we don’t need barriers for seq cst atomics because they won’t be reordered anyway. We want to prevent instructions from being reordered; actually C++11 atomics do not have that constraint, but wasm can’t express it, and I heard from people that it’s not in wasm. That was the reason we chose to do the idempotent RMW conversion. That is the history.

CW: That’s reasonable -- pragmatically that looks less surprising, but it’s still possible to write a degenerate program that exposes the issue. For a real program it’s probably better than nothing, the cmp_xchg does do some ordering, just not all ordering required. Not saying it’s a bad decision, given the instructions provided it was a good choice, but we need a new instruction to make it better. We can treat fence as stronger than what we reorder around it.

[back to slides]

What about Rust?

All: Rust uses the same memory model as C/C++.

CW: It’s a little more subtle than that, it’s about preserving source semantics. If you compile a source language to wasm the source semantics should be preserved in the translation.

AR: In an ideal world, we’d be able to specify memory model that -- you currently get an accidental guarantee that won’t hold up in the future. Ideally we wouldn’t break it, but we can pretend that it isn’t there.

HA: You’re saying implementers can use mfence.

CW: Lars is right, you should always say it’s a barrier.

JT: For purposes of evaluation, we should say you must turn it into a barrier.

CW: It’s about level of abstraction, I think of wasm memory model. You have to pick to obey the memory model. We shouldn’t do anything other than mfence.

HA: Even in the current spec, we should compile to mfence then.

CW: Yes, for the same reason that LLVM compiles to CAS

AR: But the engine will support the more relaxed model or not. If the engine doesn’t support it then it wouldn’t need to. You’re changing the engine anyway.

JT: You may still want to, AOT compiled cached code, separate compilation with different versions. We only have seq cst atomics, but other memory operations.

CW: Please make this a barrier!

HA: But you’re saying it’s a nop…

CW: Right, just from the perspective of memory model. Only interesting to memory model researchers. In C/C++/LLVM technically, if you have a program with non-atomics and barriers in it, then barriers do nothing. In practice code thinks barriers do a lot more, it makes the loops work. Technically buggy, but that’s why you want the fence there.

HA: I agree, that’s the historical reason we put the CAS there. Does wasm atomic spec require they don’t reorder with non-atomics?

CW: Technically doesn’t, but it’s a good heuristic.

HA: I remember JF mentioned, it’s not just a heuristic, in actually in the spec too.

CW: You can’t actually make that guarantee or certain compilation schemes become invalid. [describes ARM case] This is a thing JS got wrong; this is a guarantee that needs to be weakened.

HA: Is that in the spec? In practice it’s good to do that?

CW: In practice you will not get this guarantee.

DE: Can we go back to the toolchain question? The motivation for introducing now is dynamic linking but are there other options? Is dynamic linking compelling?

AR: Asking for recompilation is a breaking change...

DS: There's an analogy here with shared vs nonshared memory. Maybe there could be a bit somewhere that says something about compatibility of the memory (or something else) with the rel_acq memory model. But it's appealing to avoid that situation.

JG: Waiting for your dlls to be recompiled by the vendor is not appealing.

DE: If the toolchain people are happy then great

DS: I am happy - this is what I want

POLL:

| SF | F | N | A | SA |
| --- | --- | --- | --- | --- |
| 18 | 12 | 7 | 0 | 0 |

KM: Are you looking for a champion re an encoding and the type then?

CW: I guess so, plus also for JS...

BS: Not too difficult to add this probably (in wasm anyway). Goes into the current threads proposal.

Break until 11:45

Tail Calls (¼ hr)

AR: The only update seems to be that V8 has implemented tail calls, I don’t know of any other engines that have done so. Had hopes for Apple. :)

TL: There’s an approved patch for LLVM for Tail Calls.

Collaborative benchmark suite (½ hr)

Slides

BT presenting

JT: Do you mean benchmarks that are open source, or benchmarks where source is available.

BT: When we do decide to do this, we should also add their source code.

NM: This is a licensing question.

JT: We should be clear about the licensing requirements for the benchmark suite. Upside is that we can redistribute, package for distributions, etc. Downside is that we can’t include things like SPEC benchmarks.

[back to slides]

DG: Would it make sense to say that microbenchmarks shouldn’t be included in aggregate scoring?

BT: Yeah, we should say scoring is explicit about not weighting categories.

[back to slides]

BT: We want the applications to be headless (without a GUI).

[back to slides]

KM: I would like to raise a second reason for open source -- understanding what the benchmark is actually doing, not assembly is very valuable. Licensing -- I want to make sure there’s a license for others to do things with the code. Some businesses can’t even look at the benchmark when it’s overly restrictive.

JT: There’s a spectrum of nuanced policies here. We shouldn’t include anything that’s binary only. Ideally we want it to be all open source, but that may be too limiting (unity for example). Maybe we want separate benchmark repositories for each.

BT: Sounds good to me.

NM: I’d like to speak against that. I’d prefer that we only have open benchmarks.

JT: I would prefer that as well. I’d like to clarify my argument, that I’d prefer open but if we allow proprietary benchmarks then they should be in a separate repository.

KM: Downside is something like Unity. Someone who doesn’t have source can’t argue for the benchmark when they don’t know what it’s doing.

NM: If you’re showing up with proprietary tool that you want to work great, then you should provide features to make it better for the tools, rather than expecting others to do it.

TS: Thoughts on latency?

BT: On the web we care a lot. Hard to do: instantiation and compilation happen before you start to run. There are technical details here that I’m not clear on. We need to measure the right thing; if we don’t measure streaming compilation in the right way, we get wrong numbers.

BT: Memory consumption is also important.

J_: Focus of the benchmark, is it about wasm to native, or high-level language to wasm.

BT: Depends on who you are. For wasm engines we’re close. For native we’re far. We want to do both.

DS: Yes, we want both. For toolchains, we do the same thing as engines, we are responsible for source -> wasm, engines are wasm -> native. Both have to be good if we want to compete with native. Need consistent base to do analysis here. Which bits we hold constant, etc.

J_: Would you have a separation between the two?

DS: Maybe, especially with microbenchmarks. I think with applications, if we think it’s important then we want to test it in both. We use whichever one makes sense.

JG: It’s good if we have the source code, so we can recompile and test. It would be nice to have a pinned version of the binary as well, so you don’t have the ambiguity of which tool is used to compile.

BT: Yes, source code and built binary. Don’t update the binary often, maybe once a year.

KC: What’s the objective here -- engines or tools?

BT: Both

KM: My experience -- lots of ideas here, but the more numbers you track, the worse the statistics make your life. Measure each microbenchmark separately if you don’t have one canonical way of getting a score. Otherwise you think you have confidence, but the benchmarks are lying to you and you get noise.

BT: Right, this is the methodology point. We need to have a good way to display the statistics. We need to show line items with variance, and have it not affect the overall variance.

KM: For JS benchmarks it is common to have noise between 3%-5%, if you’re trying to diagnose 2% regression, it’s important but it’s hard to tell because the noise makes it difficult. The more numbers you look at, the harder it gets. As overall methodology, I highly recommend not having a lot of numbers.

DS: We could have a bunch of benchmarks, then we don’t say that we have a canonical weighting. Some engines care differently, maybe startup time is not as important. May avoid press benchmark wars.

NM: Second that -- weighting represents value judgment. Having a big fat score on the webpage, without any nuance is counterproductive.

JT: Second point is valid, no canonical weighting. We should have canonical measurement techniques for individual benchmarks. Control for noise, benchmarks to run, statistical tests to run. Same engine measuring the same test over time and getting reasonable results. Two engines comparing on one benchmark, to learn from each other.

BT: I like that idea (canonical measurement strategies). Traditionally you run many times, but once you have JIT there’s a warmup phase and that makes things tricky. Maybe engines can turn off tiers, maybe those should be part of the suite.
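The reporting approach discussed above (per-benchmark statistics, no single canonical weighted score) can be sketched as follows; all names here are hypothetical, not an agreed design.

```python
import statistics

def summarize(runs_by_benchmark):
    """Map each benchmark to (mean, population stdev) so variance is
    visible per line item, rather than folding everything into one
    aggregate number with an implicit value judgment."""
    return {
        name: (statistics.mean(times), statistics.pstdev(times))
        for name, times in runs_by_benchmark.items()
    }

report = summarize({
    "compile": [1.0, 1.2, 1.1],   # noisy line item
    "execute": [5.0, 5.0, 5.0],   # stable line item
})
```

Consumers who care about different things (e.g. startup time vs. throughput) can then apply their own weighting, which matches the "no canonical weighting" point made above.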

POLL: Should we create a repo to start the collaborative benchmark process.

| SA | A | N | F | SF |
| --- | --- | --- | --- | --- |
| 0 | 0 | 0 | 17 | 22 |

Break for lunch (1:30)

Exception handling

Slides

Heejin presenting

AR: In this example, the try catch itself, is that somewhere else outside this block?

HA: It’s not restricted, this whole block is probably between catch and end, but we can store except_ref anywhere so it’s not restricted.

AR: One observation: If this is catch body, then you want multi-value, you already have multiple results, you also want the parameter on the block to pass the except_ref in.

HA: Good point, we currently do this in the toolchain, we assign except_ref to local, but this is just temporary. After we have multi-value, we can avoid that, saving some instructions. I intentionally made outer-block multi-value, but C++ doesn’t require this because C++ is single value. Ideally after multi-value we can make it better.

AW: Throw creates except_ref on previous slide. It doesn’t continue, right?

HA: Throw does not continue, it’s similar to throw in many other languages.

[back to slides]

LW: A few meetings ago, some problems that motivated except_ref. There was huge code duplication to use 2nd-class exceptions. Have you been able to see whether this model fixes the problem.

HA: Yes, this is a lot easier to support - this should have been the way to go in the first place. Most importantly rethrow cannot go anyway - because of that we should limit rethrow. Making exception types a first class type was the right decision.

LW: Estimates on enough progress to say whether C++ codebase with exception handling. Initial measurements of wasm code size increases?

HA: Not yet, the only measurement that I have is the one from 2 years ago - at that point we were implementing the first proposal, and we only tested a couple of applications, that didn’t show any code size increases. Haven’t tested this proposal, but I don’t think this would incur code size cost. Next time I’ll try to present more statistics on code size

AR: Can you comment on setjmp/longjmp -- how will that work in wasm?

HA: We can see longjmp as a throw - currently it’s emulated by emscripten, which incurs code size cost and is not very efficient. The plan now is...

AR: So the core point is that it’s really just a throw, with a try-catch on the side to know where you came from?

HA: It would need code transformations. After longjmp, we have to go to the code right after setjmp; the function is wrapped within a try-catch... so that needs some code transformations because of the setjmp. That shouldn’t incur much of a performance cost.

AR: That would presumably a second exception tag, different from C++ exception.

HA: Yes, of course
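A rough sketch of the transformation described above (hypothetical names, not the actual emscripten lowering): longjmp becomes a throw with its own tag, distinct from C++ exceptions, and the setjmp site becomes a catch that resumes right after the setjmp.

```python
class LongjmpTag(Exception):
    """A dedicated tag for longjmp, distinct from C++ exception tags."""
    def __init__(self, env, value):
        self.env, self.value = env, value

def longjmp(env, value):
    # As in C, longjmp with value 0 behaves as if the value were 1.
    raise LongjmpTag(env, value or 1)

def run_with_setjmp(env, body):
    """Returns 0 on the direct call, or longjmp's value when unwound here."""
    try:
        body()
        return 0
    except LongjmpTag as e:
        if e.env is not env:
            raise          # not our setjmp site: keep unwinding
        return e.value

env = object()
result = run_with_setjmp(env, lambda: longjmp(env, 42))  # resumes with 42
```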

JT: Is this primarily for intra-function exceptions, or for fully unwinding? What would this be used for where you have the other variety.

HA: Not sure if I understood the question - exception tags can be used for many things; categorization is one example, e.g. testing whether something was thrown from C++ or from other languages. That’s what AR meant: in case you mix in setjmp/longjmp, it’s safe to use different tags for different use cases.

AR: Another remark -- I have worked with some folks on the side re: effect handlers. We made a formalization of the exception handling proposal; we have the validation and evaluation rules. I plan to make a PR for that. A student, Daniel Hillerstrom, wanted to prototype effect handlers in the reference interpreter. Since it’s based on the exception proposal he started with that, so there is a fairly complete implementation in the reference interpreter. I will ask him to add another PR. He hasn’t done the other part yet, but what we have corresponds nicely.

HA: Is he planning to present at a future CG meeting?

AR: Maybe we should do that, good point. Currently there is no spec text at all.

[Michael presenting]

TODO: slides

MS: Brief update about implementation status in V8 - status outlined in slides. The implementation in the interpreter is used in the debugger. Exception types can appear in signatures, locals, globals and partially in tables - we need more tables. try..catch is implemented as zero-cost, but there is some overhead of metadata stored on the side. Note: exception packages are allocated on the garbage-collected heap.

SLIDE: Interaction with JS

MS: The proposal doesn’t say much about interaction with JS, but it’s needed for testing, so we made some assumptions along the way. The exception carries the full frame stack: wasm frames as well as all the interleaving JS frames. You can’t construct a wasm exception from JS. All JS exceptions can be caught in wasm, and they can be rethrown. On the JS side, you can’t distinguish between anyref and except_ref.

AR: In what way would they ever be distinguishable? Wouldn’t you expect that JS can’t do that?

MS: I would expect that on the JS end

LI: Why can’t you branch on JS exceptions?

MS: Currently you can’t import

LI: When we get type imports can we branch on JS exceptions?

MS: Good question..

KM: Do we have a plan with WebIDL for handling DOM exceptions or something along those lines?

MS: I can imagine that coming up. One question I have, what are the values that you expect there? Maybe there are no values. From V8 side I see that doable, extracting values is problematic.

AR: You can catch them, but can’t look and identify them. As a part of the proposal, we also should have a JS API extension to define wasm exceptions..

DS: Could you import an anyref type maybe?

AR: You could import as except_ref tag without any arguments.

KM: It’s like an enumeration value

DE: Is that the only thing that we can do?

MS: JS exceptions are considered a package, you can catch them, rethrow them, but you can’t check them, or you can’t branch off of them

DE: Is there any way we allow you to get the anyref value out from wasm.

MS: except_ref is a subtype of any_ref so you have except_ref...

AR: I think you’re asking that the except_ref would have the anyref.

LW: except_ref is a subtype of anyref, so it is an anyref... So if the tag matches...

[discussion about except_ref having an anyref]

LW: I see what you mean, yes it’s a JS exception -- but it gives you back the same reference again.

AR: Want to point out that the except_ref is not generally the exception value alone, that’s a JS thing, but I wouldn’t think of them to be the same thing

LW: Right, it’s a wrapper around the exception value.

DE: I can’t quite see what you’re getting at...

MS: Currently it’s implemented similar to what Luke is explaining, but we can move in another direction.

KM: In v8, the allocation is just from WebAssembly.Exception, not the exception itself.

MS: Catch does not allocate, only the throw.

JG: Conversion from exception type to value type, but not the other way around; the one direction is the identity function for now.

DE: The v8 implementation makes perfect sense… [more discussion]

LI: Any support for getting stack trace out?

MS: No support yet, embedders of the JS can get it from … and print it.

LW: On that topic, do we have to expose the stack? At the moment, we have to expose it in JS. It makes them somewhat expensive, it’d be nice to not do that in wasm. You can get the stack by using JSError object.

KM: Then you have the problem of...

LW: For debug reasons we may want a stack, but that’s different.

KM: Your stack will become a different location from where you threw the exception...

LW: If you want to have a stack, you call out to JS to get it first. Then throw that.

KM: I’m thinking more of the traps case, but maybe they’re not different enough from the regular case

LW: Traps are different. Exception objects might be created cheaply.

AR: All of this is part of JS API concern.

LW: I care about not having to allocate stack for all of wasm.

LI: Something that changed recently in Erlang is.. We can have a bit flag that says I want a stack trace

KM: Problem is that there’s code that executes here. Every catch will have to do that.

LW: You don’t really know

LI: It works in Erlang and Elixir, it works for us

BT: For languages that do give stack traces, they have stack traces associated with all exceptions. You want them all the time. It’s relatively expensive in JS because they’re JS objects. I’d rather see engines make that better.

LW: It’s a lot more expensive than if you didn’t have to at all.

BT: If you’re deeper than the handler, then that’s the difference in the cost.

LW: Don’t know - if exceptions were rare and expensive events like traps, as in some languages, that would be one thing... but the ML interpreter throws a bunch.

BT: Maybe it depends on what we do with effect handlers in the future. For traps, it’s useful to have a stack trace.

LW: Agreed for traps, this is a JS api detail. The JS api details for this should not expose the tag by default

DE: If all the JS exceptions share a single tag, you may want a different way to get a stack trace and get different tags. Maybe when you create a new exception type… maybe when you import it.

KM: The idea of having different exception types having stack traces is interesting. For example have a stack trace for control flow, but no throw really knows this

LW: It doesn’t have to be in the static type, it could be in the import tag.

BT: There’s an encoding of declaring events, there is an attribute that you can declare and set to 0

TL: Additional use case: implementing a builtin to get the return address. An intern is porting UBSan and ASan to wasm; they depend on getting stack traces and return addresses for error reporting. The current hack is to call out to JS and parse the stack to get the function offset, which mostly works. It’s interesting -- to get a return address you have to call out to the embedder; if wasm could access it directly, that’s new information. It’s useful for this.

HA: That implies - if we want to codify the ability of generating stack traces into wasm and don’t rely on the embedder. That has to be a separate proposal from the exceptions proposal because you can have stack traces without exceptions, so not sure that you need to require exception_ref

TL: For this proposal, maybe not.

JT: One performance question: it shouldn’t impact performance of code that doesn’t use exceptions, right?

HA: Code size will be impacted, not sure if there’s a way to do this without code size. This happens in all languages because it has to generate landing pads, and handlers and everything

JT: Right, so this is my concern. Today you can assume that you won’t be unwound. I don’t have to have an incremental cleanup. You can assume that if you’re unwound, you are going to die. You may run into problems with functions that are being unwound but didn’t expect it. I wonder if there’s a way to mark code that doesn’t expect to be unwindable, rather than having landing pads. Can we detect this?

DS: You can still compile with -fno-exceptions, which won’t emit any landing pads - this is what you’re referring to with the undefined behaviour. What you’re asking for is: can you have linked code that can be unwound. Statically we have some ways of telling whether you can link one set of features with some other set of features. Most users can say I don’t want to pay the cost for exceptions, even to abort.

JT: To respond to that, statically detecting at link time is nice. But it doesn’t seem hard to have dynamic test. Today there is no support for landing pads. If we have to add that anyway, we can something that marks it directly. Somebody has to mention it explicitly.

MS: Even without the exception proposal, you can be unwound if the embedder throws the exception

JT: But in that case you’ll be unwound and aborted….

AR: That’s not practically a different thing, because the module and the instance still might exist and get reentered. You’d have to kill the entire store/isolate to be safe.

JG: [example with JS-wasm-JS throwing]

NM: That’s the same state of native code as well, done some research on c++ code, we get really interesting behaviours based on how many layers you build up. It mostly works, but depends on landing pads.. If you’re accepting through code, it mostly works with some undefined behaviour

JT: It mostly just works, but the UB is concerning.

NM: Yes, agreed!

JT: Can we have some way of marking this? So we can detect UB and trap.

AR: What’s the difference? There’s nothing you can do in that case...

JT: Yes, crash it.

TL: A clear case for UBSan; I think wasm is the wrong layer of abstraction to solve this particular problem.

JG: You could also generate a landing pad that aborts.

BT: Our job is to define what the wasm VM does. We need to give those languages enough mechanism to detect this. We may find a way to make sure there’s no UB in the machine itself. We could mark code, maybe just by wrapping those functions in a catch-all.

JT: If we can find a lightweight way of doing that, that would work - I just want to make sure that it’s built into the proposal.

HA: Code compiled expecting it’s not unwound shouldn’t be unwound.
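BT’s catch-all idea above can be sketched at the embedder boundary. This is a hypothetical JS helper (`noUnwind` is invented here for illustration, not part of any proposal): any exception unwinding out of the wrapped function is converted into an immediate fatal error, approximating a trap.

```javascript
// Hypothetical sketch: wrap a function so that any unwind escaping it is
// treated as fatal, instead of letting callers observe a partial unwind.
function noUnwind(f) {
  return (...args) => {
    try {
      return f(...args);
    } catch (e) {
      // A real embedding would trap / tear down the instance here; this
      // sketch just surfaces a distinguishable fatal error.
      throw new Error("fatal: unwound through non-unwindable code");
    }
  };
}

const safeAdd = noUnwind((a, b) => a + b);
console.log(safeAdd(2, 3)); // 5
```

An engine-level version of this would be the catch-all wrapping BT describes, emitted around functions marked as non-unwindable.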

[More clarifying discussions - traps and exceptions]

MS: Traps have been mentioned already, which is a good lead-in: what happens on traps? Currently a try-catch-end will catch traps only if it crosses function invocations. This opens up the discussion of what the interactions with traps are.

AR: I have a simple answer -- this proposal should be separated from treating traps as exceptions. We can add a separate proposal for treating traps as exceptions. So we can handle things we just discussed.

??: Did we decide that an uncaught exception turns into a trap?

AR: This is for the embedder to decide

HA: If we have a separate proposal that only depends on traps, but if we ever promote or convert between traps and exceptions, maybe it’s not separate from this?

AR: We have to decide if catch is separate..

HA: One option is leave traps as traps. Other is to treat traps as exceptions.

BT: It does make a difference, if you have a program with an infinite loop in a catch all, and you have a throw in a try, does it throw or not? The proposal will have to specify this case

AR: I retract my earlier remark.

LI: We may want traps to be caught, in case of shared nothing I’m assuming that the trap can go across the two modules. You don’t trust that code, you don’t want divide by zero to kill your module, so you want to catch it.

AR: It’s a buggy program

LI: Yeah, but you didn’t trust, so you may just want to run it with different arguments or something.

DS: An argument for why traps should not be caught by exceptions - we’ve been working under the assumption that it should be straightforward to remove restrictions … if we allow catches to catch traps, and then it’s observable inside wasm and it’s harder to change

BT: I’ve been thinking about that too -- in original floating-point conversions, we didn’t ...

JG: Didn’t we face that already with float-to-int conversions? We considered relaxing the trapping restriction, but we didn’t end up doing that.

DG: We took a poll.

BT: We could have done multiple things, AFAICT there were two sane ways to do it, and now we have both.

AR: The argument at the time was that they might be useful to some people for debugging purposes; I’m not convinced that it was a good argument.

DE: One compat constraint -- wasm used through the JS API, some compat constraints with web. I don’t think there’s a fundamental difference.

AR: Even JS reserves the ability to change errors into non-errors; it could never change the language otherwise.

DE: We can’t make these absolute statements, we assume that’s a possible extension point. Making traps catchable within wasm doesn’t really change this calculation.

BT: I recall now - since it had been a year since the MVP, there were programs that used the trapping behavior as a safety mechanism. I can see the same argument cropping up again: programs will come to depend on trapping behavior beyond its use as a safety mechanism.

JG: For a catchable trap, that still is a control flow edge, so they would be using it anyway, even if they didn’t they would be using it as a guard mechanism - in the FP case..

DE: Can’t make absolute statements here … it’s all pragmatic.

HA: If we make traps catchable, does that mean it’s breaking the current trapping mechanism?

BT: My point was that in the original design process, we thought by adding cases we didn’t know what to do -- making them trap, pragmatically that’s not the case. Future evolution is that things we define to trap, will continue to trap.

DE: That’s also too strong a statement; there are also programs that will no longer validate.

BT: Validation is different than execution.

LW: We tweak execution too - we have to decide whether it will break the web. Don’t break the web. Or anybody else. Maybe we can have them catchable, kind of like JS exceptions: they can be caught, but they can’t be identified. We should decide this part now.

HA: What you’re suggesting is to make it catchable now and figure out what to do with it later. That involves code size increases too. There are two kinds of cases where we generate landing pads: when the user uses catch, and when we run destructors. When you compile with exceptions enabled and you have stack-allocated objects, you create landing pads.

LW: Not arguing with that.

DS: At every catch clause, you have to distinguish whether this is an exception or a trap.

LW: traps are no different than JS exceptions.

JT: Other platforms have this issue too - SEH and C++ exceptions have interesting interactions when try-catch blocks assume that they can catch everything.
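For context on the JS-embedder side of this discussion: today a trap already surfaces to JavaScript as a catchable `WebAssembly.RuntimeError`, even though wasm code itself cannot catch it. A minimal sketch, using a hand-assembled module whose exported function executes `unreachable`:

```javascript
// A tiny wasm module: exports f() -> (), whose body is a single `unreachable`.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // magic + version
  0x01, 0x04, 0x01, 0x60, 0x00, 0x00,             // type section: () -> ()
  0x03, 0x02, 0x01, 0x00,                         // function section: one func, type 0
  0x07, 0x05, 0x01, 0x01, 0x66, 0x00, 0x00,       // export section: "f" = func 0
  0x0a, 0x05, 0x01, 0x03, 0x00, 0x00, 0x0b,      // code section: unreachable; end
]);
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));

let caught = null;
try {
  instance.exports.f(); // traps on `unreachable`
} catch (e) {
  caught = e; // the trap arrives in JS as a catchable exception
}
console.log(caught instanceof WebAssembly.RuntimeError); // true
```

This is the existing JS API behavior; whether a wasm-level catch should observe traps the same way is exactly the open question in this discussion.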

[back to slides]

DE: WebAssembly.RuntimeError shouldn’t be used then.

MS: Agree.

HA: Not sure what’s useful about this. It does not mean the language semantics catch it.

MS: Agree to postpone, needs more discussion

Separable debug info
Source maps

TODO: slides

Luke Imhoff presenting

DS: I’ll tell you what we’ve already got, and what we can add. Today we have three different types of info. We have the name section, a custom section. It’s a privileged custom section; most engines will use it.

DS: Additionally, emscripten can generate source maps when you compile. Source maps are privileged in a different way: browsers know how to load them - here’s a URL, and browsers can read source maps - and they can give you a backtrace with source on it. Not sure if it does that in browsers right now.

DS: The third kind is DWARF. Today LLVM has some support for producing DWARF debug info, one type of which is the line table, which holds the same info as a source map. Today, emscripten can turn the DWARF line table into a source map. Browsers don’t know anything about DWARF today.

DS: It is possible to do more or less what you have said, except for number 4 - although with source maps you can do 4: if you put source maps on your server you can do this. Source maps derived from DWARF only have line number info.

DS: There are drawbacks: line numbers and source maps don’t give you much today - no full debugger info. We plan to work on more, but I don’t know whether there is a concrete proposal for this or the JS embedding.

LI: Source maps - work progressed in the community and then stalled; there’s been an open PR for the last two years. Firefox has them, Rust had them but then they got removed, Binaryen has them. For Elixir etc., good tools are something we’re known for, and we want to be consistent. If you run in Canary or FF Nightly, it’ll work, but it can also just break suddenly.

TS: We can say that for source maps, we didn’t focus on supporting them further. They are inherently limited; not a path we want to go down. I would strongly recommend that you don’t invest too much here. Nick’s slides may address some things here: some tooling agreements, and work on the standards side to support this.

LI: A step beyond source maps: maybe we can’t support Haskell, Erlang, Elixir because they’re too different from other languages. Maybe there’s a headless version that we can make work, so you don’t have to worry about having devtools support but can have an extension.

DS: Yeah, related to the second point. There is the devtools protocol, which is somewhat standardized; Chrome and other browsers have partial support. Supporting some primitives there might be best. You’d want debug info - but you’d want to standardize it anyway.

Nick Fitzgerald presenting

Slides

J_: Confused, is this something that takes place at execution time? Adds overhead right?

NF: Yes. If you are not debugging, you wouldn’t have the overhead. For FF, if you want to debug you have to have devtools open - you have to opt into enabling debugging.

[back to slides]

JG: How is this different from devtools or language server protocol?

NF: Same thing but less, not talking about the network, not about serialization.

DS: I think this is about abstract operations; it’s what I have in mind also. You’ve got a JS interface here, but you can imagine something like a debugger wire protocol.

JG: Interpreted that as webIDL bindings protocol

DS: This interface is so abstract that it’s going to be hard to reason about. You can think of this as “I’m calling a function” - but it could also be an RPC, or a debug stub call. Is this intentional?

NF: I think we do want to have: here is the abstract operation, here is what it means to set a breakpoint. DWARF is lacking that; it manifests as people looking at GCC and GDB and copying behavior from there. SetBreakpoint doesn’t return anything; if there are return values, it may be async - playing nice with networks. It allows concurrency.

LI: Separating by line is not enough; C++, Java and other languages let you break on expressions.

NM: You said language server and DWARF - you’ll probably need those under the hood anyway? The reality is that every compiler generates them anyway...

NF: Could be doing pdb

NM: A lot of stuff is going to be punted to language-specific things - debuggers need to have source-specific information. We can have modules that support different languages; maybe that works, but I’m having trouble seeing the bigger picture.

NF: If you have a frame of C++, then Rust, then Blazor, then you need to associate each frame with its function and its debug info. Not sure...

NM: If we don’t associate languages with specific things, how do we know how to differentiate? FF would have to understand PDB, or whatever.

NF: It’s an explicit goal that we don’t have to understand DWARF or PDB itself - a user representation that is interpreted.

NM: Where does that get displayed?

NF: In devtools. There is an implementation in browsers, but it’s not well separated.

LI: Anything other than breakpoints?

NF: I wrote some stuff up before, just motivation and use case. Lots of ideas, but want to make sure all the things are in scope- need agreement on goals. [https://github.com/fitzgen/wasm-debugging-capabilities/]

LI: Goal is to not have to write an extension of every browser

TS: That is the goal: leave a large amount of the complexity to run inside the sandbox(?)

Allows you to handle this in various languages like C++, but also in Blazor with C#.

DS: Blazor currently intercepts debug protocol.

TS: Same approach, standardize just enough to abstract away complexity.

DS: I agree with it 100%. A bit of a concrete concern - it goes back to LI’s concern: we will have this abstract interface description, but it’s possible for different browsers to interpret it differently. You described some valid interpretations; this is a great place to start. I also want to make it possible to compile lldb to native.

TS: That works now

BS: A great place to start a conversation - we can move this to the debugging subgroup. We’ll call a meeting of the debugging subgroup - table this for now.

Requirements doc (from DS): https://github.com/fitzgen/wasm-debugging-capabilities/

Split typed imports and exports out from GC proposal

AR: General idea: defining types, not just function types. One thing that the reference types proposal adds is the ability to pass references between wasm and the embedder.

AR: The main idea is that you can import a type definition; whoever links the module can define what that is.

AR: Essentially, allow a module to introduce a new reference type without knowing what it is. In particular you can always have references… Validation will ensure you can get DOM objects. Maybe WASI is a better use case.

AR: You can also export types. That’s the basic idea already - going beyond just anyref. One additional thing: they’re not completely abstract; a type has to be a subtype of something else. But subtyping is not that interesting yet.
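AR’s idea can be sketched in the tentative text-format notation from the type-imports design (the exact syntax, and the `wasi`/`File` names, are illustrative assumptions here, not a finalized format):

```wat
;; Hypothetical sketch of a typed import: $File is an abstract imported type.
;; This module can receive and forward (ref $File) values, but cannot
;; inspect or forge them; the host decides what $File actually is at link time.
(module
  (import "wasi" "File" (type $File))
  (import "wasi" "read"
    (func $read (param (ref $File) i32) (result i32)))
  (func (export "read_twice") (param $f (ref $File)) (result i32)
    (i32.add
      (call $read (local.get $f) (i32.const 64))
      (call $read (local.get $f) (i32.const 64)))))
```

Validation treats `$File` as a fresh abstract type name, which is why, as AR says below, no unification is needed.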

LI: How does this interact with JS proxy?

AR: It’s a purely static mechanism, just data flow at compile time. The host can rely on being passed something of the right shape. Whether that is distinguished by class is orthogonal.

FM: How is it that you don’t imply type inference at instantiation time? Does the engine have to do type unification?

AR: No. This is basically a new abstract type from the type checking perspective.

FM: It has to flow through the network of function types.

AR: Every type is distinct; there’s no place in validation to unify. It’s just a new type name.

FM: If you have a generic function?

AR: Even if we had, it wouldn’t be different. Generic function is typed by substitution, not unification.

FM: That is the heart of unification; this seems to imply a lot more complexity.

AR: Completely orthogonal, substitution happens for existing types as well.

FM: True.

AK: You said the DOM wasn’t a good example of this, why?

AR: The DOM is a bad example because it is a JS thing; there’s no static information we want to enforce.

LW: If we’re talking about a DOM node, there is information there - there is a real something behind the JS object.

AR: You could imagine that we design an interface to DOM, that is more typed.

LW: new get-originals is like that.

AR: I consider this less interesting because you’re passing it back to a dynamic language anyway.

LW: Passing a DOM object to a DOM method.

AR: If the other side can do smart optimizations then I can see that - the thing I’m calling out to doesn’t need to implement runtime checks.

BT: What about splitting out constraints?

AR: Not relevant to this proposal.

BT: Might be useful for host types that are related to each other that there is a subtyping relationship.

BS: Concretely, are we talking about splitting it out, or about what it actually is?

LW: The question is about splitting it out; what it is has been discussed. The two use cases for having it ahead of the full GC proposal are WASI and the GC proposal itself.

BS: Do the WASI and WebIDL bindings folks agree that this needs to be split out?

[General agreement]

BS: There seems to be general agreement, so we should just be able to move it to a pre-proposal phase.

DE: There could be a way in the JS API to make a new nominal type - would all the nominal types in the web case come in as built-in types?

AR: So far we have reflected everything that can be imported or exported and provided ways to create them; no strong opinions. This is more relevant for the C API.

LW: In the GC proposal, we wrote a JS part that describes how this might work. Until we have those types, I’m not sure what we would construct.

JG: I don’t imagine before we get the full GC that we get full JS types.

[Discussion about JS/C API and types]

FM: Does this imply that the imported type is a reference type?

AR: Yes. You would import a DOM object as a type, and what you use is a ref of that type.

FM: C won’t make use of nominal type imports

BS: Discuss in break out?

FM: The reason for raising it is to define the scope - whether the proposal solves a real problem.

FM: Two specific things - people and addresses; this doesn’t apply to C. The branding question isn’t solved by this. I’m raising design questions to determine the scope.

BS: Interesting to split this out - something like this would be useful before the full GC proposal

BT: There’s a small risk to forward compatibility - we don’t want to specify something too specific; we want to make sure that we leave room for design constraints that might come up for GC.

??: Maybe we should have a new rule, that there must be a zero byte somewhere (to allow for future extension).

JT: Or at least a zero bit.

Decision: consensus to move to stage 1 on splitting typed imports out of GC

Discussion: frequency of video meetings? (Sometimes they’re fast, sometimes not. It’s burdensome to attend if nothing happens -- perhaps we should defer topics if there’s just one that will be < 30 min -- perhaps don’t allow late additions to the agenda)

BS: The cadence is every two weeks; it’s burdensome to attend when there’s only one agenda item. Defer topics unless there’s at least 30 minutes of agenda.

DS: Require agenda items 24 hours in advance.

BT: Let’s look at data

??: Would be sad to go to monthly meetings by default though; that would slow things down.

KM: Looked through the notes from the ten CG meetings this year. Of those ten, we had four short meetings.

An update on Ben

Ben has been accepted to an internal Google program where he’ll be teaching, which will be a full-time responsibility, so he’s delegating responsibilities for the CG.

Adjourn