From Rust to beyond: The WebAssembly galaxy

This blog post is part of a series explaining how to send Rust beyond earth, into many different galaxies.

The first galaxy that our Rust parser will explore is the WebAssembly (WASM) galaxy. This post will explain what WebAssembly is, how to compile the parser into WebAssembly, and how to use the WebAssembly binary with Javascript in a browser and with NodeJS.

What is WebAssembly, and why?

If you already know WebAssembly, you can skip this section.

WebAssembly defines itself as:

WebAssembly (abbreviated Wasm) is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable target for compilation of high-level languages like C/C++/Rust, enabling deployment on the web for client and server applications.

Should I say more? Probably, yes…

WebAssembly is a new portable binary format. Languages like C, C++, or Rust already compile to this target. It is the spiritual successor of ASM.js: The same people who are trying to extend the Web platform and to make the Web fast are working on both technologies. They share some design concepts too, but that’s not really important right now.

Before WebAssembly, programs had to compile to Javascript in order to run on the Web platform. The resulting files were, most of the time, large. And because the Web is a network, the files had to be downloaded, which took time. WebAssembly is designed to be encoded in a size- and load-time-efficient binary format.

WebAssembly is also faster than Javascript for many reasons. Despite all the crazy optimisations engineers have put into Javascript virtual machines, Javascript is a weakly and dynamically typed language, which must be interpreted (or just-in-time compiled). WebAssembly aims to execute at native speed by taking advantage of common hardware capabilities. WebAssembly also loads faster than Javascript because parsing and compiling happen while the binary is streamed from the network. So once the binary is entirely fetched, it is ready to run: No need to wait on the parser and the compiler before running the program.

Today, and our blog series is a perfect example of that, it is possible to write a Rust program and to compile it to run on the Web platform. Why? Because WebAssembly is implemented by all major browsers, and because it has been designed for the Web: To live and run on the Web platform (like a browser). But its portability and its safe, sandboxed memory design also make it a good candidate to run outside of the Web platform (see a serverless WASM framework, or an application container built for WASM).

I think it is important to remember that WebAssembly is not here to replace Javascript. It is just another technology which solves many problems we face today, like load time, safety, or speed.

Rust 🚀 WebAssembly


The Rust WASM team is a group of people leading the effort of pushing Rust into WebAssembly with a set of tools and integrations. There is a book explaining how to write a WebAssembly program with Rust.

With the Gutenberg Rust parser, I didn’t use tools like wasm-bindgen (which is a pure gem) when I started the project a few months ago, because I hit some limitations. Note that some of them have been addressed since then! Anyway, we will do most of the work by hand, and I think this is an excellent way to understand how things work in the background. When you are familiar with WebAssembly interactions, wasm-bindgen is an excellent tool to have within easy reach, because it abstracts all the interactions and lets you focus on your code logic instead.

I would like to remind the reader that the Gutenberg Rust parser exposes one AST and one root function (the axiom of the grammar), respectively defined as:

pub enum Node<'a> {
    Block {
        name: (Input<'a>, Input<'a>),
        attributes: Option<Input<'a>>,
        children: Vec<Node<'a>>
    },
    Phrase(Input<'a>)
}

and

pub fn root(
    input: Input
) -> Result<(Input, Vec<ast::Node>), nom::Err<Input>>;

Knowing that, let’s go!

General design

Here is our general design or workflow:

  1. Javascript (for instance) writes the blog post to parse into the WebAssembly module memory,
  2. Javascript runs the root function by passing a pointer to the memory, and the length of the blog post,
  3. Rust reads the blog post from the memory, runs the Gutenberg parser, compiles the resulting AST into a sequence of bytes, and returns the pointer to this sequence of bytes to Javascript,
  4. Javascript reads the memory from the received pointer, and decodes the sequence of bytes as Javascript objects in order to recreate an AST with a friendly API.

Why a sequence of bytes? Because WebAssembly only supports integers and floats, not strings or vectors; and because our Rust parser takes a slice of bytes as input, this format is handy.
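
To make this constraint concrete, here is a minimal sketch (with hypothetical functions, not taken from the parser) of what can and cannot cross the WebAssembly boundary from Rust:

// OK: numbers can cross the boundary (a pointer is just an integer in WASM).
#[no_mangle]
pub extern "C" fn add(x: i32, y: i32) -> i32 {
    x + y
}

// Not OK: strings, slices, and vectors cannot cross directly. They must be
// written into the module memory and referenced by a (pointer, length) pair,
// which is exactly what the functions below do.
// pub extern "C" fn parse(input: &str) -> Vec<u8> { … }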

We use the term boundary layer to refer to this Javascript piece of code responsible for reading from and writing into the WebAssembly module memory, and for exposing a friendly API.

Now, we will focus on the Rust code. It consists of only 4 functions:

  • alloc to allocate memory (exported),
  • dealloc to deallocate memory (exported),
  • root to run the parser (exported),
  • into_bytes to transform the AST into a sequence of bytes.

The entire code lands here. It is approximately 150 lines of code. Let’s explain it.

Memory allocation

Let’s start with the memory allocator. I chose wee_alloc: It is specifically designed for WebAssembly, being very small (less than a kilobyte) and efficient.

The following piece of code describes the memory allocator setup and the “prelude” for our code (enabling some compiler features, like alloc, declaring external crates, some aliases, and declaring required functions like panic, oom etc.). This can be considered boilerplate:

#![no_std]
#![feature(
    alloc,
    alloc_error_handler,
    core_intrinsics,
    lang_items
)]

extern crate gutenberg_post_parser;
extern crate wee_alloc;
#[macro_use] extern crate alloc;

use gutenberg_post_parser::ast::Node;
use alloc::vec::Vec;
use core::{mem, slice};

#[global_allocator]
static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;

#[panic_handler]
fn panic(_info: &core::panic::PanicInfo) -> ! {
    unsafe { core::intrinsics::abort(); }
}

#[alloc_error_handler]
fn oom(_: core::alloc::Layout) -> ! {
    unsafe { core::intrinsics::abort(); }
}

// This is the definition of `std::ffi::c_void`, but WASM runs without std in our case.
#[repr(u8)]
#[allow(non_camel_case_types)]
pub enum c_void {
    #[doc(hidden)]
    __variant1,

    #[doc(hidden)]
    __variant2
}

The Rust memory is the WebAssembly memory. Rust allocates and deallocates memory on its own, but Javascript, for instance, needs to allocate and deallocate WebAssembly memory in order to communicate and exchange data. So we need to export one function to allocate memory and one function to deallocate memory.

Once again, this is almost boilerplate. The alloc function creates an empty vector of a specific capacity (because it is a linear segment of memory), and returns a pointer to this empty vector:

#[no_mangle]
pub extern "C" fn alloc(capacity: usize) -> *mut c_void {
    let mut buffer = Vec::with_capacity(capacity);
    let pointer = buffer.as_mut_ptr();
    mem::forget(buffer);

    pointer as *mut c_void
}

Note the #[no_mangle] attribute, which instructs the Rust compiler not to mangle the function name, i.e. not to rename it. The extern "C" keyword exports the function in the WebAssembly module, so it is “public” from outside the WebAssembly binary.

The code is pretty straightforward and matches what we announced earlier: A Vec is allocated with a specific capacity, and the pointer to this vector is returned. The important part is mem::forget(buffer). It is required so that Rust does not deallocate the vector once it goes out of scope. Indeed, Rust enforces Resource Acquisition Is Initialization (RAII), so whenever an object goes out of scope, its destructor is called and its owned resources are freed. This behavior shields against resource leak bugs, and it is why we normally never have to manually free memory or worry about memory leaks in Rust (see some RAII examples). In this case, we want to keep the allocation alive after the function execution, hence the mem::forget call.
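
To illustrate the difference, here is a minimal standalone sketch (not taken from the parser code) contrasting RAII with mem::forget:

use std::mem;

fn scoped() {
    let _buffer: Vec<u8> = Vec::with_capacity(1024);
    // RAII: `_buffer` goes out of scope here, its destructor runs, and the
    // allocation is freed automatically.
}

fn leaked() -> *mut u8 {
    let mut buffer: Vec<u8> = Vec::with_capacity(1024);
    let pointer = buffer.as_mut_ptr();
    // Without this line, the allocation would be freed when the function
    // returns, and the returned pointer would dangle.
    mem::forget(buffer);

    pointer
}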

Let’s jump to the dealloc function. The goal is to recreate a vector based on a pointer and a capacity, and to let Rust drop it:

#[no_mangle]
pub extern "C" fn dealloc(pointer: *mut c_void, capacity: usize) {
    unsafe {
        let _ = Vec::from_raw_parts(pointer, 0, capacity);
    }
}

The Vec::from_raw_parts function is marked as unsafe, so we need to delimit it in an unsafe block so that the dealloc function is considered safe.

The value is bound to the variable _, which goes out of scope immediately, so Rust drops it.

From input to a flat AST

Now the core of the binding! The root function reads the blog post to parse based on a pointer and a length, then parses it. If the result is OK, it serializes the AST into a sequence of bytes, i.e. it flattens it; otherwise it returns an empty sequence of bytes.

The logic flow of the parser: The input on the left is parsed into an AST, which is serialized into a flat sequence of bytes on the right.
#[no_mangle]
pub extern "C" fn root(pointer: *mut u8, length: usize) -> *mut u8 {
    let input = unsafe { slice::from_raw_parts(pointer, length) };
    let mut output = vec![];

    if let Ok((_remaining, nodes)) = gutenberg_post_parser::root(input) {
        // Compile the AST (nodes) into a sequence of bytes.
    }

    let pointer = output.as_mut_ptr();
    mem::forget(output);

    pointer
}

The variable input contains the blog post. It is fetched from memory with a pointer and a length. The variable output is the sequence of bytes the function will return. gutenberg_post_parser::root(input) runs the parser. If parsing is OK, then the nodes are compiled into a sequence of bytes (omitted for now). Then the pointer to the sequence of bytes is grabbed, the Rust compiler is instructed not to drop it, and finally the pointer is returned. The logic is again pretty straightforward.

Now, let’s focus on compiling the AST into a sequence of bytes (u8). All the data the AST holds are already bytes, which makes the process easier. The goal is only to flatten the AST (a worked example follows the list):

  • The first 4 bytes represent the number of nodes at the first level (4 × u8 represents a u32),
  • Next, if the node is Block:
    • The first byte is the node type: 1u8 for a block,
    • The second byte is the size of the block name,
    • The third to the sixth bytes are the size of the attributes,
    • The seventh byte is the number of node children the block has,
    • Next bytes are the block name,
    • Next bytes are the attributes (&b"null"[..] if none),
    • Next bytes are node children as a sequence of bytes,
  • Next, if the node is Phrase:
    • The first byte is the node type: 2u8 for a phrase,
    • The second to the fifth bytes are the size of the phrase,
    • Next bytes are the phrase.
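
To make this layout concrete, here is a hand-computed sketch (not part of the original code) of what the input <!-- wp:foo /-->xyz flattens to, assuming the block parses with the default core namespace, no attributes, and no children, followed by the phrase xyz:

// AST: [Block { name: ("core", "foo"), attributes: None, children: vec![] },
//       Phrase("xyz")]
let expected: Vec<u8> = vec![
    0, 0, 0, 2,                                     // 2 nodes at the first level
    1,                                              // node type: Block
    8,                                              // size of the block name: "core" + '/' + "foo"
    0, 0, 0, 4,                                     // size of the attributes: 4, i.e. "null"
    0,                                              // number of children: 0
    b'c', b'o', b'r', b'e', b'/', b'f', b'o', b'o', // the block name
    b'n', b'u', b'l', b'l',                         // the attributes placeholder
    2,                                              // node type: Phrase
    0, 0, 0, 3,                                     // size of the phrase
    b'x', b'y', b'z',                               // the phrase
];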

Here is the missing part of the root function:

if let Ok((_remaining, nodes)) = gutenberg_post_parser::root(input) {
    let nodes_length = u32_to_u8s(nodes.len() as u32);

    output.push(nodes_length.0);
    output.push(nodes_length.1);
    output.push(nodes_length.2);
    output.push(nodes_length.3);

    for node in nodes {
        into_bytes(&node, &mut output);
    }
}

And here is the into_bytes function:

fn into_bytes<'a>(node: &Node<'a>, output: &mut Vec<u8>) {
    match *node {
        Node::Block { name, attributes, ref children } => {
            let node_type = 1u8;
            let name_length = name.0.len() + name.1.len() + 1;
            let attributes_length = match attributes {
                Some(attributes) => attributes.len(),
                None => 4
            };
            let attributes_length_as_u8s = u32_to_u8s(attributes_length as u32);

            let number_of_children = children.len();
            output.push(node_type);
            output.push(name_length as u8);
            output.push(attributes_length_as_u8s.0);
            output.push(attributes_length_as_u8s.1);
            output.push(attributes_length_as_u8s.2);
            output.push(attributes_length_as_u8s.3);
            output.push(number_of_children as u8);

            output.extend(name.0);
            output.push(b'/');
            output.extend(name.1);

            if let Some(attributes) = attributes {
                output.extend(attributes);
            } else {
                output.extend(&b"null"[..]);
            }

            for child in children {
                into_bytes(&child, output);
            }
        },

        Node::Phrase(phrase) => {
            let node_type = 2u8;
            let phrase_length = phrase.len();

            output.push(node_type);

            let phrase_length_as_u8s = u32_to_u8s(phrase_length as u32);

            output.push(phrase_length_as_u8s.0);
            output.push(phrase_length_as_u8s.1);
            output.push(phrase_length_as_u8s.2);
            output.push(phrase_length_as_u8s.3);
            output.extend(phrase);
        }
    }
}

What I find interesting with this code is that it reads just like the bullet list above.

For the most curious, here is the u32_to_u8s function:

fn u32_to_u8s(x: u32) -> (u8, u8, u8, u8) {
    (
        ((x >> 24) & 0xff) as u8,
        ((x >> 16) & 0xff) as u8,
        ((x >> 8)  & 0xff) as u8,
        ( x        & 0xff) as u8
    )
}
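
As a quick sanity check (a hypothetical test, not in the original code, assuming u32_to_u8s from above is in scope), the function round-trips with its inverse, which we will meet again on the Javascript side as u8s_to_u32:

// The inverse of `u32_to_u8s`.
fn u8s_to_u32(o: u8, p: u8, q: u8, r: u8) -> u32 {
    ((o as u32) << 24) | ((p as u32) << 16) | ((q as u32) << 8) | (r as u32)
}

#[test]
fn u32_to_u8s_roundtrips() {
    for &x in &[0u32, 1, 0xff, 0x1234_5678, u32::max_value()] {
        let (o, p, q, r) = u32_to_u8s(x);
        assert_eq!(u8s_to_u32(o, p, q, r), x);
    }
}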

Here we are. alloc, dealloc, root, and into_bytes. Four functions, and everything is done.

Producing and optimising the WebAssembly binary

To get a WebAssembly binary, the project has to be compiled to the wasm32-unknown-unknown target. For now (and it will change in the near future), the nightly toolchain is needed to compile the project, so make sure you have the latest nightly version of rustc & co. installed with rustup update nightly. Let’s run cargo:

$ RUSTFLAGS='-g' cargo +nightly build --target wasm32-unknown-unknown --release

The WebAssembly binary weighs 22kb. Our goal is to reduce the file size. For that, the following tools will be required:

  • wasm-gc to garbage-collect unused imports, internal functions, types etc.,
  • wasm-snip to mark some functions as unreachable; this is useful when the binary includes unused code that the linker was not able to remove,
  • wasm-opt from the Binaryen project, to optimise the binary,
  • gzip and brotli to compress the binary.

Basically, what we do is the following:

$ # Garbage-collect unused data.
$ wasm-gc gutenberg_post_parser.wasm

$ # Mark fmt and panicking as unreachable.
$ wasm-snip --snip-rust-fmt-code --snip-rust-panicking-code gutenberg_post_parser.wasm -o gutenberg_post_parser_snipped.wasm
$ mv gutenberg_post_parser_snipped.wasm gutenberg_post_parser.wasm

$ # Garbage-collect unreachable data.
$ wasm-gc gutenberg_post_parser.wasm

$ # Optimise for small size.
$ wasm-opt -Oz -o gutenberg_post_parser_opt.wasm gutenberg_post_parser.wasm
$ mv gutenberg_post_parser_opt.wasm gutenberg_post_parser.wasm

$ # Compress.
$ gzip --best --stdout gutenberg_post_parser.wasm > gutenberg_post_parser.wasm.gz
$ brotli --best --stdout --lgwin=24 gutenberg_post_parser.wasm > gutenberg_post_parser.wasm.br 

We end up with the following file sizes:

  • .wasm: 16kb,
  • .wasm.gz: 7.3kb,
  • .wasm.br: 6.2kb.

Neat! Brotli is implemented by most browsers, so when the client sends Accept-Encoding: br, the server can respond with the .wasm.br file.

To give you a feeling of what 6.2kb represents, the following image also weighs 6.2kb:

The simplified WordPress logo.

The WebAssembly binary is ready to run!

WebAssembly 🚀 Javascript


In this section, we assume Javascript runs in a browser. Thus, what we need to do is the following:

  1. Load/stream and instantiate the WebAssembly binary,
  2. Write the blog post to parse in the WebAssembly module memory,
  3. Call the root function on the parser,
  4. Read the WebAssembly module memory to load the flat AST (a sequence of bytes) and decode it to build a “Javascript AST” (with our own objects).

The entire code lands here. It is approximately 150 lines of code too. I won’t explain the whole code, since some parts of it are the “friendly API” that is exposed to the user. I will rather explain the major pieces.

Loading/streaming and instantiating

The WebAssembly API exposes multiple ways to load a WebAssembly binary. The best one to use is the WebAssembly.instantiateStreaming function: It streams the binary and compiles it at the same time; nothing is blocking. This API relies on the Fetch API. You might have guessed it: It is asynchronous (it returns a promise). WebAssembly itself is not asynchronous (unless you use threads), but the instantiation step is. It is possible to avoid that, but it is tricky, and Google Chrome enforces a hard limit of 4kb on binaries compiled synchronously, which will make you give up quickly.

To be able to stream the WebAssembly binary, the server must send the application/wasm MIME type (with the Content-Type header).

Let’s instantiate our WebAssembly binary:

const url = '/gutenberg_post_parser.wasm';
const wasm =
    WebAssembly.
        instantiateStreaming(fetch(url), {}).
        then(object => object.instance).
        then(instance => { /* step 2 */ });

The WebAssembly binary has been instantiated! Now we can move to the next step.

Last polish before running the parser

Remember that the WebAssembly binary exports 3 functions: alloc, dealloc, and root. They can be found on the exports property, along with the memory. Let’s write that:

        then(instance => {
            const Module = {
                alloc: instance.exports.alloc,
                dealloc: instance.exports.dealloc,
                root: instance.exports.root,
                memory: instance.exports.memory
            };

            runParser(Module, '<!-- wp:foo /-->xyz');
        });

Great, everything is ready to write the runParser function!

The parser runner

As a reminder, this function has to write the input (the blog post to parse) into the WebAssembly module memory (Module.memory), call the root function (Module.root), and read the result from the WebAssembly module memory. Let’s do that:

function runParser(Module, raw_input) {
    const input = new TextEncoder().encode(raw_input);
    const input_pointer = writeBuffer(Module, input);
    const output_pointer = Module.root(input_pointer, input.length);
    const result = readNodes(Module, output_pointer);

    Module.dealloc(input_pointer, input.length);

    return result;
}

In detail:

  • The raw_input is encoded into a sequence of bytes with the TextEncoder API, into input,
  • The input is written into the WebAssembly module memory with writeBuffer, and its pointer is returned,
  • Then the root function is called with the pointer to the input and the length of the input, as expected, and the pointer to the output is returned,
  • Then the output is decoded,
  • And finally, the input is deallocated. The output of the parser will be deallocated in the readNodes function, because its length is unknown at this point.

Great! So we have 2 functions to write right now: writeBuffer and readNodes.

Writing the data in memory

Let’s go with the first one, writeBuffer:

function writeBuffer(Module, buffer) {
    const buffer_length = buffer.length;
    const pointer = Module.alloc(buffer_length);
    const memory = new Uint8Array(Module.memory.buffer);

    for (let i = 0; i < buffer_length; ++i) {
        memory[pointer + i] = buffer[i];
    }

    return pointer;
}

In detail:

  • The length of the buffer is read into buffer_length,
  • A space in memory is allocated to write the buffer,
  • Then a uint8 view of the module memory is instantiated, which means that the memory will be viewed as a sequence of u8, exactly what Rust expects,
  • Finally, the buffer is copied into the memory with a basic loop, and the pointer is returned.

Note that, unlike C strings, adding a NUL byte at the end is not mandatory. This is just raw data (on the Rust side, we read it with slice::from_raw_parts; a slice is a very simple structure).

Reading the output of the parser

So at this step, the input has been written into memory, and the root function has been called, which means the parser has run. It has returned a pointer to the output (the result), and we now have to read and decode it.

Remember that the first 4 bytes encode the number of nodes we have to read. Let’s go!

function readNodes(Module, start_pointer) {
    const buffer = new Uint8Array(Module.memory.buffer.slice(start_pointer));
    const number_of_nodes = u8s_to_u32(buffer[0], buffer[1], buffer[2], buffer[3]);

    if (0 >= number_of_nodes) {
        return null;
    }

    const nodes = [];
    let offset = 4;
    let end_offset;

    for (let i = 0; i < number_of_nodes; ++i) {
        const last_offset = readNode(buffer, offset, nodes);

        offset = end_offset = last_offset;
    }

    // `end_offset` is the number of bytes read, i.e. the size of the output.
    Module.dealloc(start_pointer, end_offset);

    return nodes;
}

In detail:

  • A uint8 view of the memory is instantiated… more precisely: A slice of the memory starting at start_pointer,
  • The number of nodes is read, then all nodes are read,
  • And finally, the output of the parser is deallocated.

For the record, here is the u8s_to_u32 function; it is the exact opposite of u32_to_u8s:

function u8s_to_u32(o, p, q, r) {
    // `>>> 0` forces an unsigned 32-bit result; `|` alone would yield a
    // negative number when `o >= 0x80`.
    return ((o << 24) | (p << 16) | (q << 8) | r) >>> 0;
}

And I will also share the readNode function, but I won’t explain the details. This is just the decoding part of the output from the parser.

function readNode(buffer, offset, nodes) {
    const node_type = buffer[offset];

    // Block.
    if (1 === node_type) {
        const name_length = buffer[offset + 1];
        const attributes_length = u8s_to_u32(buffer[offset + 2], buffer[offset + 3], buffer[offset + 4], buffer[offset + 5]);
        const number_of_children = buffer[offset + 6];

        let payload_offset = offset + 7;
        let next_payload_offset = payload_offset + name_length;

        const name = new TextDecoder().decode(buffer.slice(payload_offset, next_payload_offset));

        payload_offset = next_payload_offset;
        next_payload_offset += attributes_length;

        const attributes = JSON.parse(new TextDecoder().decode(buffer.slice(payload_offset, next_payload_offset)));

        payload_offset = next_payload_offset;
        let end_offset = payload_offset;

        const children = [];

        for (let i = 0; i < number_of_children; ++i) {
            const last_offset = readNode(buffer, payload_offset, children);

            payload_offset = end_offset = last_offset;
        }

        nodes.push(new Block(name, attributes, children));

        return end_offset;
    }
    // Phrase.
    else if (2 === node_type) {
        const phrase_length = u8s_to_u32(buffer[offset + 1], buffer[offset + 2], buffer[offset + 3], buffer[offset + 4]);
        const phrase_offset = offset + 5;
        const phrase = new TextDecoder().decode(buffer.slice(phrase_offset, phrase_offset + phrase_length));

        nodes.push(new Phrase(phrase));

        return phrase_offset + phrase_length;
    } else {
        console.error('unknown node type', node_type);
    }
}

Note that this code is pretty simple and easy for the Javascript virtual machine to optimise. It is also important to note that this is not the original code. The original version is a little more optimised here and there, but the two are very close.

And that’s all! We have successfully read and decoded the output of the parser! We just need to write the Block and Phrase classes like this:

class Block {
    constructor(name, attributes, children) {
        this.name = name;
        this.attributes = attributes;
        this.children = children;
    }
}

class Phrase {
    constructor(phrase) {
        this.phrase = phrase;
    }
}

The final output will be an array of those objects. Easy!

WebAssembly 🚀 NodeJS


The differences between the Javascript version and the NodeJS version are few:

  • The Fetch API does not exist in NodeJS, so the WebAssembly binary has to be instantiated from a buffer directly, like this: WebAssembly.instantiate(fs.readFileSync(url), {}),
  • The TextEncoder and TextDecoder objects do not exist as global objects; they live in util.TextEncoder and util.TextDecoder.

In order to share the code between both environments, it is possible to write the boundary layer (the Javascript code we wrote) in a .mjs file, aka an ECMAScript Module. It allows writing something like import { Gutenberg_Post_Parser } from './gutenberg_post_parser.mjs' for example (considering the whole code we wrote before is a class). On the browser side, the script must be loaded with <script type="module" src="…" />, and on the NodeJS side, node must run with the --experimental-modules flag. I recommend the talk Please wait… loading: a tale of two loaders by Myles Borins at JSConf EU 2018 to understand the whole story.

The entire code lands here.

Conclusion

We have seen in detail how to write a real-world parser in Rust, how to compile it into a WebAssembly binary, and how to use it with Javascript and NodeJS.

The parser can be used in a browser with regular Javascript code, as a CLI with NodeJS, or on any platform NodeJS supports.

The Rust part for WebAssembly plus the Javascript part total 313 lines of code. This is a tiny surface of code to review and to maintain, compared to writing a Javascript parser from scratch.

Another argument is safety and performance. Rust is memory safe, we know that. It is also performant, but is that still true for the WebAssembly target? The following table shows the benchmark results of the actual Javascript parser for the Gutenberg project (implemented with PEG.js) against this project: The Rust parser as a WebAssembly binary.

File                            | Javascript parser (ms) | Rust parser as a WebAssembly binary (ms) | Speedup
demo-post.html                  |                 13.167 |                                    0.252 | × 52
shortcode-shortcomings.html     |                 26.784 |                                    0.271 | × 98
redesigning-chrome-desktop.html |                 75.500 |                                    0.918 | × 82
web-at-maximum-fps.html         |                 88.118 |                                    0.901 | × 98
early-adopting-the-future.html  |                201.011 |                                    3.329 | × 60
pygmalian-raw-html.html         |                311.416 |                                    2.692 | × 116
moby-dick-parsed.html           |              2,466.533 |                                   25.140 | × 98

On average, the WebAssembly binary is 86 times faster than the actual Javascript implementation. The median speedup is ×98. Some edge cases are very interesting, like moby-dick-parsed.html, where it takes 2.5s with the Javascript parser against 25ms with WebAssembly.

So not only is it safer, but it is also faster than Javascript in this case. And it is only 300 lines of code.

Note that WebAssembly does not support SIMD yet: It is still a proposal. Rust support for it is on its way (see PR #549 for an example). It will dramatically improve performance!

We will see in the next episodes of this series that Rust can reach a lot of galaxies, and the more it travels, the more interesting it gets.

Thanks for reading!

From Rust to beyond: Prelude

At my work, I had an opportunity to start an experiment: Writing a single parser implementation in Rust for the new Gutenberg post format, bound to many platforms and environments.

The logo of the Gutenberg post parser project.

This series of posts is about those bindings, and explains how to send Rust beyond earth, into many different galaxies.

The ship is currently flying into the Java galaxy; this series may continue as long as the ship does not crash and has enough resources to survive!

The Gutenberg post format

Let’s quickly introduce what Gutenberg is, and why a new post format. If you want an in-depth presentation, I highly recommend reading The Language of Gutenberg. Note that reading it is not required to understand the Gutenberg post format.

Gutenberg is the next WordPress editor. It is a little revolution on its own. The features it unlocks are very powerful.

The editor will create a new page- and post-building experience that makes writing rich posts effortless, and has “blocks” to make it easy what today might take shortcodes, custom HTML, or “mystery meat” embed discovery. — Matt Mullenweg

The format of a blog post was HTML. And it continues to be. However, another semantic layer is added through annotations. Annotations are written in comments and borrow the XML syntax, e.g.:

<!-- wp:ns/block-name {"attributes": "as JSON"} -->
    <p>phrase</p>
<!-- /wp:ns/block-name -->

The Gutenberg format provides 2 constructs: Block, and Phrase. The example above contains both: There is a block wrapping a phrase. A phrase is basically anything that is not a block. Let’s describe the example:

  • It starts with an annotation (<!-- … -->),
  • The wp: is mandatory to represent a Gutenberg block,
  • It is followed by a fully qualified block name, which is a pair of an optional namespace (here set to ns, defaulting to core) and a block name (here set to block-name), separated by a slash,
  • A block has optional attributes encoded as a JSON object (see RFC 7159, Section 4, Objects),
  • Finally, a block has optional children, i.e. a heterogeneous collection of blocks or phrases. In the example above, there is one child: the phrase <p>phrase</p>. The following example shows a block with no children:
<!-- wp:ns/block-name {"attributes": "as JSON"} /-->

The complete grammar can be found in the parser’s documentation.

Finally, the parser is used on the editor side, not on the rendering side. Once rendered, the blog post is a regular HTML file. Some blocks are dynamic though, but this is another topic.

The logic flow of the editor (How Little Blocks Work).

The grammar is relatively small. The challenge, however, is to be as performant and memory efficient as possible on many platforms. Some posts can reach megabytes, and we don’t want the parser to be the bottleneck. Even if it is used when creating the post state (cf. the schema above), we have measured several seconds to load some posts: Time during which the user is blocked, waits, or sees an error. In other scenarios, we have hit the memory limits of some languages’ virtual machines.

Hence this experimental project! The current parsers are written in JavaScript (with PEG.js) and in PHP (with phpegjs). This Rust project proposes a parser written in Rust that can run in the JavaScript and PHP virtual machines, and on many other platforms. Let’s try to be very performant and memory efficient!

Why Rust?

That’s an excellent question! Thanks for asking. I can summarize my choice with a bullet list:

  • It is fast, and we need speed,
  • It is memory safe, and also memory efficient,
  • No garbage collector, which simplifies memory management across environments,
  • It can expose a C API (with Foreign Function Interface, FFI), which eases the integration into multiple environments,
  • It compiles to many targets,
  • Because I love it.

One of the goals of the experimentation is to maintain a single implementation (maybe the future reference implementation) with multiple bindings.

The parser

The parser is written in Rust. It relies on the fabulous nom library.

nom will happily take a byte out of your files 🙂.

The source code is available in the src/ directory in the repository. It is very small and fun to read.

The parser produces an Abstract Syntax Tree (AST) of the grammar, where nodes of the tree are defined as:

pub enum Node<'a> {
    Block {
        name: (Input<'a>, Input<'a>),
        attributes: Option<Input<'a>>,
        children: Vec<Node<'a>>
    },
    Phrase(Input<'a>)
}

That’s all! We find again the block name, the attributes, and the children, along with the phrase. Block children are defined as a collection of nodes; this is recursive. Input<'a> is defined as &'a [u8], i.e. a slice of bytes.
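
In other words, assuming it is declared as a type alias in the parser crate, it boils down to:

pub type Input<'a> = &'a [u8];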

The main parser entry is the root function. It represents the axiom of the grammar, and is defined as:

pub fn root(
    input: Input
) -> Result<(Input, Vec<ast::Node>), nom::Err<Input>>;

So the parser returns a collection of nodes in the best case. Here is a simple example:

use gutenberg_post_parser::{root, ast::Node};

let input = &b"<!-- wp:foo {\"bar\": true} /-->"[..];
let output = Ok(
    (
        // The remaining data.
        &b""[..],

        // The Abstract Syntax Tree.
        vec![
            Node::Block {
                name: (&b"core"[..], &b"foo"[..]),
                attributes: Some(&b"{\"bar\": true}"[..]),
                children: vec![]
            }
        ]
    )
);

assert_eq!(root(input), output);

The root function and the AST will be the items we are going to use and manipulate in the bindings. The internal items of the parser will stay private.

Bindings


From now on, our goal is to expose the root function and the Node enum on different platforms and in different environments. Ready?

3… 2… 1… lift-off!

atoum supports TeamCity

atoum is a popular PHP test framework. TeamCity is Continuous Integration and Continuous Delivery software developed by JetBrains. Although atoum supports many industry standards for reporting test execution verdicts, TeamCity uses its own non-standard report, and thus atoum was not compatible with TeamCity… until now.


The atoum/teamcity-extension provides TeamCity support inside atoum. When executing tests, the reported verdicts are understandable by TeamCity, and activate all its UI features.

Install

If you have Composer, just run:

$ composer require atoum/teamcity-extension '~1.0'

From this point, you need to enable the extension in your .atoum.php configuration file. The following example enables the extension for every test execution:

$extension = new atoum\teamcity\extension($script);
$extension->addToRunner($runner);

The following example enables the extension only within a TeamCity environment:

$extension = new atoum\teamcity\extension($script);
$extension->addToRunnerWithinTeamCityEnvironment($runner);

The latter installation is recommended. That’s it 🙂.

Glance

The default CLI report looks like this:

Default atoum CLI report

The TeamCity report looks like this in your terminal (note the TEAMCITY_VERSION variable as a way to emulate a TeamCity environment):

TeamCity report inside the terminal

This is less easy to read. However, in the TeamCity UI, we get the following result:

TeamCity running atoum

We are using it at Automattic. Hope it is useful for someone else!

If you find any bugs, or would like to request other features, please use Github at the following repository: https://github.com/Hywan/atoum-teamcity-extension/.

Welcome to Chaos

Recently, I joined Automattic. This is a world-wide distributed company. For the first three weeks, you work as a Happiness Engineer. This is part of the Happiness Rotation duty. This article explains why I loved it, and why I reckon you should do it too.

Happiness Engineer, really?

Does it sound mad as a Cheshire cat? Pretentious maybe? Actually, it’s not at all.

As a Happiness Engineer, I had to do support. This is part of the Happiness Rotation: Once a year, almost everyone swaps their position to help our users. I will come back to this later.

My role was to make our users happy. To achieve that, I had to:

  • Meet our users, understand who they are, what they want to achieve,
  • Listen to and understand their issues,
  • Find a way to fix the issues.

Meet the users

I need motivation in my job. Learning who our users are, and what they want to achieve, is a great motivation. After these three weeks, I know what my contributions serve. It gives a meaning to each contribution, to each day I wake up.

Especially in a distributed company on the Internet, our users are world-wide: They speak almost all the languages on Earth, and they are present on all continents. Their needs vary a lot, and they use our software in ways I was not able to foresee.

Listen to, understand, and fix their issues

When you are chatting with a “support guy”, you cannot imagine this is a real engineer. This is not a random person filling out a pre-defined vague form somewhere where hiring is cheap. You will chat with someone very competent. Someone that has no superior. Someone that has all the tools to make you happy.

Personally, when I started, it was the first time I had used WordPress. I was more of a novice than the users I was talking to. So how did I fix that on my end? I had to:

  • Ask the right people for help,
  • Therefore, meet Automatticians (people working at Automattic),
  • Discover all the interactions between them,
  • Understand the structure of the company,
  • Learn how to ask for help, how to formulate my questions, how to reformulate the issues of the users…
  • Discover all the internal tools,
  • Therefore, learn how the software works internally and together,
  • Discover the giant internal and public documentation,
  • When needed, create bug reports or feature requests for the appropriate teams,
  • Learn the culture of the company.

This is why it is called Welcome to Chaos. Yes, you have to learn a lot in three weeks, but it is extremely educative. It is like speed training.

Happiness

I can assure you that when a user is grateful after you have fixed their issue, the term Happiness Engineer makes a lot of sense. Automattic gives a lot of freedom to its Happiness Engineers to make people really happy, both in terms of tooling and finances.

This is the first time I have seen a company be that generous with its customers.

Thanks buddy

Of course, when embracing the chaos, you are not alone. Everyone is here to help you, and to answer your questions. After all, this is part of Automattic’s creed (story of the creed):

I will never stop learning. I won’t just work on things that are assigned to me. I know there’s no such thing as a status quo. I will build our business sustainably through passionate and loyal customers. I will never pass up an opportunity to help out a colleague, and I’ll remember the days before I knew everything. I am more motivated by impact than money, and I know that Open Source is one of the most powerful ideas of our generation. I will communicate as much as possible, because it’s the oxygen of a distributed company. I am in a marathon, not a sprint, and no matter how far away the goal is, the only way to get there is by putting one foot in front of another every day. Given time, there is no problem that’s insurmountable.

In addition to everyone willing to help, a buddy was assigned to me: A person that helps and teaches you all along. This is very helpful. Thank you Hannah!

Happiness Rotation

This experience is great, but after some time, you might forget it. So as a reminder, once a year, you work as a Happiness Engineer again. This is part of the Happiness Rotation. As far as I understand, it involves almost everyone in the company.

Note: Obviously, there are permanent Happiness Engineers.

Conclusion

I deeply think this approach has many advantages. Some of them are listed above. It helps to understand the company, and more importantly the users. The Happiness Rotation stresses the fact that users are central to Automattic, probably like in any company, but not with this level of care. Remember the creed: I will build our business sustainably through passionate and loyal customers. To have passionate and loyal users, you need to know them.

For me, it was a great experience. It was chaotic at first, but it is worth it.

Bye bye Liip, hello Automattic

As of April 2017, I have left Liip to join Automattic.

Bye bye Liip

Liip's logo

After almost 20 months at Liip, I am leaving. Liip was a great experience. It was my first industrial non-remote job. It was also my first job in the country I am currently living in. And I have discovered a new way of working.

First industrial non-remote job

Before working for Liip, I was working for fruux. My situation was the following: A French citizen, living as a foreigner in Switzerland, working for a German company, with employees from Germany, Holland, and Canada. Everything happened over chat, email, and Skype. When my son was born, I had to change my job to simplify my life. It was not the only reason, but it was one of them.

And before fruux, I was working for INRIA, a research institute in France. It was partially a remote job.

Liip has several offices. I was based in Lausanne.

So, yes, Liip was my first industrial non-remote job. And I liked it. Working on the train in the morning, walking in Lausanne, seeing real people, everything in my local language. Because yes, it was my first job in my native language too.

Everything was simpler. And when you have your first baby, anything else that is simpler saves your life.

Introducing Holacracy

Big discussions were happening about removing any form of hierarchy at Liip. Then we discovered Holacracy, and we started moving to this system. It is a new governance system. If you are familiar with distributed network topologies in Computer Science, or data structures, it really looks like a Distributed Spanning Tree [DahanPN09]. Note: I am sure that the authors of Holacracy are not aware of DST, but, eh.

So nothing new from a research point of view, but it is cool to see this algorithm come alive in real life. And it worked: Fewer meetings, more self-organisation, more shared responsibilities, no more “boss” etc. It is not a tool for all companies, but I am sure that if you are reading my blog, your company should give it a try.

Open source projects

Liip has been very generous with me regarding my open source engagements. I was involved in Hoa, atoum, and Pickle when joining the company. Liip gave me a 5% budget, so roughly 1 hour per day to work on Hoa. Thank you for that!

After that, I started a new big project called Tagua VM. They gave me an additional 5% budget, so I got 2 hours per day to work on Hoa and Tagua VM. Again, thank you for that!

Finally, I started an in-house open source project called The A11y Machine (a11ym for short). I have written a case study for this tool on Liip’s blog: Accessibility: make your website barrier-free with a11ym!

The goal of a11ym is to automate the accessibility testing of any site by crawling and testing each page. A sweet report is generated, showing all errors, warnings, and notices, with all the information the developer needs to fix the issues as fast as possible.

Dashboard of a11ym, showing the evolution of the accessibility of a site in time
A typical a11ym report listing all errors, warnings, and notices for a given URL

This project has received really good feedback from the accessibility community. It has been downloaded 7000 times so far, which is not bad considering the niche it targets.

A new SaaS platform is being built around this software. I enjoyed working on it, and it was really tangible.

Main customer, huge project

Liip is a Web agency, so you have dozens of customers at the same time. However, I was in a special team for an important customer. The site is a luxury watches and jewellery e-commerce platform, located in several countries, in 10 languages, accessible from 16 domains, spread over 2 datacenters. This is not a casual site.

I learned a lot about all the complexity such a site brings: Checkout rules (oh damned…), product catalogs in different formats for different countries with different references, all the business logic inherent to each country, different payment providers, crazy front-end compatibility requirements etc.

I have a hundred crazy anecdotes to tell. This was clearly not a job for me at first glance: I am a researcher, I have an open source culture background, I am not tailored for this kind of project. But at the end of the story, I learned a lot. Really a lot. I have a better overview of the crazy things any customer can ask, or has to deal with, and the infrastructure craziness that can be set up. I learned how to make things better: How to transform a really crappy piece of software into something understandable by everyone, how not to break a 10+ year old program with no tests etc. And it requires skills. I learned it the hard way, but I learned it.

Why leave?

Because even if I learned a lot during my time at Liip, the Web agency model was definitely not for me. I am very thankful to every Liiper. I had a great time, and I love the Web, but not in an agency.

My son is now 21 months old, and I need fresh air. I can take new challenges.

Welcome Automattic

Automattic's logo

Automattic is the company behind WordPress.com, WooCommerce, Akismet, Simplenote, Cloudup, Simperium, Gravatar and other giant services.

I came to Automattic by coincidence. I was looking for a sponsor for Tagua VM, and someone pointed me to Automattic. After some research about the company, it appeared that it could be a really great place to work. So I applied.

The hiring process was 4 months long. It was exhausting, because it happened at the same time as a big sprint at Liip (remember the SaaS platform for The A11y Machine?). But after 4 months, it appears I succeeded, and I am very glad about that!

I am just starting my job at Automattic. I don’t have anything strong and definite to say now, apart from the fact that everything is just awesome so far. In a few weeks, I am likely to write about my start at Automattic (I did, see Welcome to Chaos). They have a very interesting way to get you on board.

Time for a new adventure!

Hello fruux!

Leaving the research world

I really enjoyed my time at INRIA and Femto-ST, 2 research institutes in France. But after 8 years at university and a hard PhD thesis (with great results, by the way!), I wanted to see other things.

My time as an intern at Mozilla and my work in the open-source world have been very appealing. Open source contrasts a lot with the research world, where privacy and secrecy are first-class citizens of every project. All the work I did and all the algorithms I developed during my PhD thesis have been implemented under an open-source license, and I ran into some issues because of that decision (patents are sometimes better, sometimes not… long story).

So, I like research, but I also like to hack and share everything. And right now, I need a change of air! So I asked on Twitter:

And what a surprise! A lot of companies answered my tweet (most of them in private, of course), but the most interesting one in my eyes was… fruux 😉.

fruux

fruux defines itself as: A unified contacts/calendaring system that works across platforms and devices. We are behind sabre/dav, which is the most popular open-source implementation of the CardDAV and CalDAV standards. Besides us, developers and companies around the globe use our sabre/dav technology to deliver sync functionality to millions of users.

fruux’s logo.

Several things attracted me to fruux:

  1. its low layers are open source,
  2. it has a viable ecosystem based on open source,
  3. it accepts remote working,
  4. its timezone is close to mine,
  5. its work touches millions of people,
  6. it has standards in mind.

The first point is the most important for me. I don’t want to make a company richer without any benefit for the rest of the world. I want my work to be beneficial to the rest of the world: I want to share my work, and I want it to be reused, hacked, criticized, updated and shared again. This is the spirit of the open-source and hackability paradigms. And fortunately for me, fruux’s low layers are 100% open-source, namely sabre/dav & co.

However, being able to eat at the end of the month with open source is not that simple. Fortunately for me, fruux has a stable economic model based on open source. Obviously, I have to work on closed projects, and obviously, I have to work for some specific customers, but I can go back to open-source goodness all the time 😉.

In addition, I am currently living in Switzerland and fruux is located in Germany. Fortunately for me, fruux’s team is dispatched all around Europe and the world, so naturally, they let me work remotely. Whilst it can be inconvenient for some people, I enjoy having my own hours and organizing myself as I like. Periodic meetings and phone calls help to stay focused. And I like to think that people are more productive this way. After 4 years at home because of my Master thesis and PhD thesis, I know how to organize myself and exchange with a decentralized team. This is a big advantage. Moreover, Germany is in the same timezone as Switzerland! Compared to companies located in, for instance, California, this is simpler for my family.

Finally, working on an open-source project that is used by millions of users is very motivating. You know that your contributions will touch a lot of people, and it gives meaning to my work on a daily basis. The last thing I love about fruux is its desire to respect standards, RFCs, recommendations etc. They are involved in these processes, consortiums and groups (for instance CalConnect). I love standards and specifications, and this methodology reminds me of the scientific approach I had with my PhD thesis. I consider that a standard without an implementation has no meaning, and a well-designed standard is a piece of a delicious cake, especially when everyone respects this standard 😄.

… but the cake is a lie!

sabre/*

fruux mostly hired me because of my experience on Hoa. One of my main public jobs is to work on the sabre/* libraries.

You will find the documentation and the news on sabre.io.

All these libraries serve the first one: sabre/dav, which is an implementation of the WebDAV technology, including extensions for CalDAV and CardDAV, respectively for calendars/tasks and address books. For those who do not know what WebDAV is, in a few words: The Web is mostly a read-only medium; WebDAV extends HTTP in order to be able to write and collaborate on documents. The way WebDAV is defined is fascinating, and even more so the way it can be extended.

Most of the work has already been done by Evert and many contributors, but we can go deeper! More extensions, more standards, better code, better algorithms etc.!

If you are interested in the work I am doing on sabre/*, you can check this search result on Github.

Future of Hoa

Some people have asked me about the future of Hoa: Whether I am going to stop it or not, since I have a job now.

Firstly, a PhD thesis is exhausting, and believe me, it requires more energy than a regular job, even if you are passionate about your job and do not count working hours. With a PhD thesis, you have no weekends, no holidays, you are always out of time, you always have a ton (sic) of articles and documents to read… there is no break, no end. Even under these conditions, I was able to maintain Hoa and to grow the project, thanks to a very helpful and present community!

Secondly, fruux is planning to use Hoa. I don’t know how or when, but if at a certain moment using Hoa makes sense, they will. What does it imply for Hoa and me? It means that I will be paid to work on Hoa for a small percentage of my time. I don’t know how much, it will depend on the moment, but this is a big step forward for the project. Moreover, a project like fruux using Hoa is a great opportunity! I hope to see fruux’s logo very soon on the homepage of Hoa’s website.

Thus, to conclude, I will have more time (in evenings, weekends, holidays and sometimes during the day) to work on Hoa. Do not be afraid, the future is bright 😄.

Conclusion

In short: I am working at fruux!