From Rust to beyond: Prelude

At my work, I had an opportunity to start an experiment: Writing a single parser implementation in Rust for the new Gutenberg post format, bound to many platforms and environments.

gutenberg_logo
The logo of the Gutenberg post parser project.

This series of posts is about those bindings, and explains how to send Rust beyond earth, into many different galaxies. Rust will land in:

  • The WebAssembly galaxy,
  • The ASM.js galaxy,
  • The C galaxy,
  • The PHP galaxy, and
  • The NodeJS galaxy.

The ship is currently flying into the Java galaxy; this series may continue if the ship does not crash and has enough resources to survive!

The Gutenberg post format

Let’s quickly introduce what Gutenberg is, and why a new post format. If you want an in-depth presentation, I highly recommend reading The Language of Gutenberg. Note that reading it is not required to understand the Gutenberg post format.

Gutenberg is the next WordPress editor. It is a little revolution on its own. The features it unlocks are very powerful.

The editor will create a new page- and post-building experience that makes writing rich posts effortless, and has “blocks” to make it easy what today might take shortcodes, custom HTML, or “mystery meat” embed discovery. — Matt Mullenweg

The format of a blog post was HTML, and it continues to be. However, another semantic layer is added through annotations. Annotations are written in comments and borrow the XML syntax, e.g.:

<!-- wp:ns/block-name {"attributes": "as JSON"} -->
    <p>phrase</p>
<!-- /wp:ns/block-name -->

The Gutenberg format provides 2 constructions: Block, and Phrase. The example above contains both: There is a block wrapping a phrase. A phrase is basically anything that is not a block. Let’s describe the example:

  • It starts with an annotation (<!-- … -->),
  • The wp: is mandatory to represent a Gutenberg block,
  • It is followed by a fully qualified block name, which is a pair of an optional namespace (here set to ns, defaults to core) and a block name (here set to block-name), separated by a slash,
  • A block has optional attributes encoded as a JSON object (see RFC 7159, Section 4, Objects),
  • Finally, a block has optional children, i.e. a heterogeneous collection of blocks or phrases. In the example above, there is one child, the phrase <p>phrase</p>. The following example shows a block with no children:
<!-- wp:ns/block-name {"attributes": "as JSON"} /-->

The complete grammar can be found in the parser’s documentation.

Finally, the parser is used on the editor side, not on the rendering side. Once rendered, the blog post is a regular HTML file. Some blocks are dynamic though, but this is another topic.

block-logic-flow1
The logic flow of the editor (How Little Blocks Work).

The grammar is relatively small. The challenge, however, is to be as performant and memory-efficient as possible on many platforms. Some posts can reach megabytes, and we don’t want the parser to be the bottleneck. Even if it is used when creating the post state (cf. the schema above), we have measured several seconds to load some posts. During that time, the user is blocked, waits, or sees an error. In other scenarios, we have hit the memory limits of the languages’ virtual machines.

Hence this experimental project! The current parsers are written in JavaScript (with PEG.js) and in PHP (with phpegjs). This project proposes a parser written in Rust that can run in the JavaScript and PHP virtual machines, and on many other platforms. Let’s try to be very performant and memory efficient!

Why Rust?

That’s an excellent question! Thanks for asking. I can summarize my choice with a bullet list:

  • It is fast, and we need speed,
  • It is memory safe, and also memory efficient,
  • No garbage collector, which simplifies memory management across environments,
  • It can expose a C API (with Foreign Function Interface, FFI), which eases the integration into multiple environments,
  • It compiles to many targets,
  • Because I love it.

One of the goals of the experiment is to maintain a single implementation (maybe the future reference implementation) with multiple bindings.

The parser

The parser is written in Rust. It relies on the fabulous nom library.

nom
nom will happily take a byte out of your files 🙂.

The source code is available in the src/ directory in the repository. It is very small and fun to read.

The parser produces an Abstract Syntax Tree (AST) of the grammar, where nodes of the tree are defined as:

pub enum Node<'a> {
    Block {
        name: (Input<'a>, Input<'a>), 
        attributes: Option<Input<'a>>, 
        children: Vec<Node<'a>> 
    },
    Phrase(Input<'a>) 
}

That’s all! We find again the block name, the attributes, and the children, as well as the phrase. Block children are defined as a collection of nodes, so the definition is recursive. Input<'a> is defined as &'a [u8], i.e. a slice of bytes.

The main parser entry is the root function. It represents the axiom of the grammar, and is defined as:

pub fn root(
    input: Input
) -> Result<(Input, Vec<ast::Node>), nom::Err<Input>>;

So the parser returns a collection of nodes in the best case. Here is a simple example:

use gutenberg_post_parser::{root, ast::Node};

let input = &b"<!-- wp:foo {\"bar\": true} /-->"[..];
let output = Ok(
    (
        // The remaining data.
        &b""[..],

        // The Abstract Syntax Tree.
        vec![
            Node::Block {
                name: (&b"core"[..], &b"foo"[..]),
                attributes: Some(&b"{\"bar\": true}"[..]),
                children: vec![]
            }
        ]
    )
);

assert_eq!(root(input), output);

The root function and the AST will be the items we are going to use and manipulate in the bindings. The internal items of the parser will stay private.

Bindings


From now on, our goal is to expose the root function and the Node enum on different platforms and in different environments. Ready?

3… 2… 1… lift-off!

How Automattic (WordPress.com & co.) partly moved away from PHPUnit to atoum?

Hello fellow developers and testers,

A few months ago at Automattic, my team and I started a new project: Having better tests for the payment system. The payment system is used by all the services at Automattic, i.e. WordPress, VaultPress, Jetpack, Akismet, PollDaddy etc. It’s a big challenge! Cherry on the cake: Our experiment could define the future of the testing practices for the entire company. No pressure.

This post is a summary of what has been accomplished so far: the achievements, the failures, and the future, focused on manual tests. As the title of this post suggests, we are going to talk about PHPUnit and atoum, which are two PHP test frameworks. This is not a PHPUnit vs. atoum fight. These are observations made for our software, in our context, with our requirements and our expectations. I think the discussion can be useful for many projects outside Automattic. I would like to apologize in advance if some parts sound too abstract; I hope you understand I can’t reveal any details about the payment system, for obvious reasons.

Where we were, and where to go

For historical reasons, WordPress, VaultPress, Jetpack & siblings use PHPUnit for server-side manual tests. There are unit, integration, and system manual tests. There are also end-to-end tests or benchmarks, but we are not interested in them now. When those products were built, PHPUnit was the main test framework in town. Since then, the test landscape has considerably changed in PHP. New competitors, like atoum or Behat, have a good position in the game.

Those tests have existed for many years. Some of them grew organically. PHPUnit does not require any particular structure, which is a reason for its success, even if I find that questionable. Being able to test code that is not well-designed is a requirement, but too much freedom on the test side comes at a cost in the long term if not enough attention is paid.

Our situation is the following. The code is complex for justified reasons, and its testability is sometimes reduced. Testing across many services is indubitably difficult. Some parts of the code are really old, mixed with others that are new, shiny, and well-done. In this context, it is really difficult to change anything, especially to move to another test framework. The amount of work it represents is colossal. No new test framework is worth the price of this huge refactoring on its own. But maybe a new test framework can help us test our code better?

I’m a long-term contributor to atoum (among the top 3 contributors), and at the time of writing, I’m a core member. You have to believe me when I say that, at each step of the discussions and the processes, I have been neutral, arguing both for and against atoum. The idea of switching to atoum actually partly came from me, but my knowledge of atoum is definitely a plus. I am in a good position to know the pros and cons of the tool, and I’m perfectly aware of how it could solve issues we have.

So after many debates and discussions, we decided to try to move to atoum. A survey and a meeting were scheduled 2 months later to decide whether we should continue or not. Spoiler: We will partly continue with it.

Our needs and requirements

Our code is difficult to test. In other words, the testability is low for some parts of the code. atoum has features to help increase the testability. I will try to summarize those features in the following short sections.

atoum/phpunit-extension

As I said, it’s not possible to rewrite/migrate all the existing tests. This is a colossal effort with a non-negligible cost. Enter atoum/phpunit-extension.

As far as I know, atoum is the only PHP framework that is able to run tests that have been written for another framework. The atoum/phpunit-extension does exactly that. It runs tests written with the PHPUnit API with the atoum engines. This is fabulous! PHPUnit is not required at all. With this extension, we have been able to run our “legacy” (aka PHPUnit) tests with atoum. The following scenarios can be fulfilled:

  • Existing test suites written with the PHPUnit API can be run seamlessly by atoum, no need to rewrite them,
  • Of course, new test suites are written with the atoum API,
  • In case of a test suite migration from PHPUnit to atoum, there are two solutions:
    1. Rewrite the test suite entirely from scratch by logically using the atoum API, or
    2. Only change the parent class from PHPUnit\Framework\TestCase to atoum\phpunit\test, and suddenly it is possible to use both APIs at the same time (and thus migrate one test case after the other, for instance), as sketched below.

This is a very valuable tool for an adventure like ours.
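To make scenario 2 more concrete, here is a minimal sketch of what such a hybrid test case could look like. The class name, methods, and values are made up for illustration; only the parent class atoum\phpunit\test comes from the extension, and the exact set of supported PHPUnit assertions may vary.

<?php

// A hypothetical test case in the middle of a migration: it extends
// atoum\phpunit\test, so the PHPUnit API and the atoum API can be used
// side by side while the migration progresses.
class Invoice_Total_Test extends atoum\phpunit\test
{
    public function test_total_with_the_phpunit_api()
    {
        // Old assertion, kept as-is with the PHPUnit API.
        $this->assertSame(42, 21 * 2);
    }

    public function test_total_with_the_atoum_api()
    {
        // New assertion, written with the atoum asserter API.
        $this
            ->integer(21 * 2)
            ->isEqualTo(42);
    }
}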

atoum/phpunit-extension is not perfect though. Some PHPUnit APIs are missing. And while the test verdict is strictly the same, error messages can be different, some PHPUnit extensions may not work properly, etc. Fortunately, our usage of PHPUnit is pretty raw: No extensions except home-made ones, few hacks… Everything went well. We have also been able to contribute to the extension easily.

Mock engines (plural)

atoum comes with 3 mock engines:

  • Class-like mock engine for classes and interfaces,
  • Function mock engine,
  • Constant mock engine.

Being able to mock global functions or global constants is an important feature for us. It suddenly increases the testability of our code! The following example is fictional, but it’s a good illustration. WordPress is full of global functions, but it is possible to mock them with atoum like this:

public function test_foo()
{
    $this->function->get_userdata = (object) [
        'user_login' => …,
        'user_pass' => …,
        …
    ];
}

In one line of code, it was possible to mock the get_userdata function.
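To push the fictional illustration a bit further, here is a hedged sketch of a complete test method: a made-up get_display_name function, which calls get_userdata internally, is exercised against the mocked global function. The function under test and the expected values are invented for the example; only the function mock syntax comes from above.

public function test_get_display_name()
{
    $this
        // Mock the global get_userdata function for this test case.
        ->given(
            $this->function->get_userdata = (object) [
                'user_login'   => 'gordon',
                'display_name' => 'Gordon Freeman',
            ]
        )
        // Call the fictional function under test.
        ->when($result = get_display_name(42))
        // Assert on the result with the atoum asserter API.
        ->then
            ->string($result)
                ->isEqualTo('Gordon Freeman');
}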

Runner engines

Being able to isolate test execution is a necessity to avoid flaky tests, and to increase the trust we put in the test verdicts. atoum comes with 3 runner engines:

  • Inline, one test case after another in the same process,
  • Isolate, one test case after another but each time in a new process (full isolation),
  • Concurrent, like isolate but tests run concurrently (“at the same time”).

I’m not saying PHPUnit doesn’t have those features. It is possible to run each test in a different process (similar to the isolate engine), but the test execution time blows up and the isolation is not strict, so we don’t use it. The concurrent runner engine in atoum tends to bring the execution time close to that of the inline engine, while still ensuring strict isolation.
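For reference, here is a minimal sketch of how an engine can be selected per test class, assuming the @engine annotation (which is how I remember atoum exposing this); the namespace and class name are made up.

<?php

namespace Company\Tests\Units;

use atoum;

/**
 * Run every test case of this class concurrently, each in its own process.
 *
 * @engine concurrent
 */
class Payment_Gateway_Test extends atoum
{
    // …
}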

Fun fact: By using atoum and the atoum/phpunit-extension, we are able to run PHPUnit tests concurrently with a strict isolation!

Code coverage reports

At the time of writing, PHPUnit is not able to generate code coverage reports containing branch or path coverage criteria data. atoum supports them natively with the atoum/reports-extension (including nice graphs, see the demonstration). And we need that data.

The difficulties

On paper, most of the pain points sound addressable. It was time to experiment.

Integration with the Continuous Integration server

Our CI does not natively support standard test execution report formats. Thus we had to create the atoum/teamcity-extension. Learn more by reading a blog post I wrote recently. The TeamCity support is native inside PHPUnit (see the --log-teamcity option).

Bootstrap test environments

Our bootstrap files are… challenging. It’s expected though. Setting up a functional test environment for a software like WordPress.com is not a task one can accomplish in 2 minutes. Fortunately, we have been able to re-use most of the PHPUnit parts.

Today, our unit tests run in complete isolation and concurrently. Our integration and system tests run in complete isolation but not concurrently, due to MySQL limitations. We have solutions, but time needs to be invested.

Generally, even if it works now, it took time to reorganize the bootstrap so that some parts could be shared between the test runners (we didn’t switch the whole company to atoum yet; it was an experiment).

Documentation and help

Here is an interesting paradox. The majority of the team recognized that atoum’s documentation is better than PHPUnit’s, even if some parts must be rewritten or reworked. But developers already know PHPUnit, so they don’t look at the documentation. If they have to, they will find their answers on StackOverflow, or by talking to someone else in the company, rather than by checking the official documentation. atoum does not have many StackOverflow threads, and few people within the company are atoum users.

What we have also observed is that when people create a new test, it’s a copy-paste from an existing one. Let’s admit this is a common and natural practice. When a difficulty is met, it’s legitimate to look somewhere else in the test repository to check whether a similar situation has already been resolved. In our context, that information was a little lacking. We tried to write more and more tests, but not fast enough. It would not be an issue if you have time, but in our context, we unfortunately didn’t have that time. The team faced many challenges in the same period, and the tests we are building are not simple Hello, World!s as you might think, which increases the effort.

To be honest, this was not the biggest difficulty, but still, it is important to notice.

Concurrent integration test executions

Due to some MySQL limitations combined with the complexity of our code, we are not able to run integration (and system) tests concurrently yet. Therefore it takes time to run them, probably too much in our development environments. Even if atoum has friendly options to reduce the debug loop (e.g. see the --loop option), the execution is still slow. The problem can be solved but it requires time, and deep modifications of our code.

Note that with our PHPUnit tests, no isolation is used. This is wrong, and thus we put less trust in the test verdict than with atoum. Almost everyone in the team prefers slow test execution with isolation over fast test execution with no confidence in the test verdict. So that’s only partly a difficulty. It’s a mix of a positive feature and a needle in the foot, and a needle we can live with. atoum is not responsible for this latency: The state of our code is.

The results

First, let’s start by the positive impacts:

  • In 2 months, we have observed that the testability of our code has been increased by using atoum,
  • We have been able to find bugs in our code that were not detected by PHPUnit, mostly because atoum checks the type of the data,
  • We have been able to migrate “legacy tests” (aka PHPUnit tests) to atoum by just moving the files from one directory to another: What a smooth migration!
  • The trust we put in our test verdict has increased thanks to a strict test execution isolation.

Now, the negative impacts:

  • Even if the testability has been increased, it’s not enough. Right now, we are looking at refactoring our code. Introducing atoum right now was probably too early. Let’s refactor first, then use a better test toolchain later when things are cleaner,
  • Moving the whole company at once is hard. There are thousands of manual tests. The atoum/phpunit-extension is not magical. We have to come up with more solid results, stuff that blows minds. It is necessary to set the institutional inertia in motion. For instance, not being able to run integration and system tests concurrently slows down the builds on the CI; it increases the trust we put in the test verdict, but this latency is not acceptable at the company scale,
  • All the issues we faced can be addressed, but it takes time. The experiment time frame was 2 months. We need 1 or 2 more months to solve the majority of the remaining issues. Note that I was kind of in charge of this project, but not full time.

We stopped using atoum for manual tests. It’s likely to be a pause though. The experiment has shown we need to refactor and clean our code; then there will be a good chance for atoum to come back. The experiment has also shown how to increase the testability of our code: Not everything can be addressed by using another test framework, even if it contributes a lot. We can focus on those points specifically, because we now know where they are. Finally, I reckon it has helped move the test infrastructure forward inside Automattic by showing that something else exists, and that we can go further.

I said we stopped using atoum “for manual tests”. Yes. Because we also have automatically generated tests. The experiment was not only about switching to atoum. Many other aspects of the experiment are still running! For instance, Kitab is used for our code documentation. Kitab is able to (i) render the documentation, and (ii) test the examples written inside the documentation. That way the documentation is ensured to be always up-to-date and working. Kitab generates tests for, and executes them with, atoum. It was easy to set up: We just had to use the existing test bootstraps designed for atoum. We also have another tool to compile HTTP API Blueprint specifications into executable tests. So far, everyone is happy with those tools, no need to go back, everything is automat(t)ic. Other tools are likely to be introduced in the future to automatically generate tests. I want to detail this particular topic in another blog post.
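To give a rough idea of what “testing the documentation” means, here is a hypothetical sketch: a function whose docblock carries a small runnable example. The exact markup Kitab expects may differ from this, so take it as an assumption about the shape of the idea rather than as Kitab’s actual format.

<?php

/**
 * Double the given integer.
 *
 * # Examples
 *
 * ```php
 * assert(times2(21) === 42);
 * ```
 */
function times2(int $x): int
{
    return $x * 2;
}

A tool like Kitab can then extract such examples, wrap them into test cases, and run them with atoum, so the documentation breaks loudly when it goes stale.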

Conclusion

Moving to another test framework is a huge decision with many factors. The fact that atoum has atoum/phpunit-extension is a time saver. Nonetheless, a new test framework will not by itself fix all the testability issues of the code. The benefits of the new test framework must largely outweigh the costs. In our current context, it was not the case. atoum solves issues that are not our priorities. So yes, atoum can help us solve important issues, but since those issues are not priorities, the move to atoum was too early. During the project, we gained new automatic test tools, like Kitab. The experiment is not a failure. Will we retry atoum? It’s very likely. When? I hope in a year.

One conference per day, for one year (2017)

My self-assigned challenge for 2017 was to watch at least one conference per day, for one year. It was the first time I tried this challenge. Let’s dive in for a recap.

267 conferences

In some way, I failed the challenge because I’ve been able to watch only 267 conferences. With an average of 34 minutes per conference, I’ve watched 9078 minutes, or roughly 151 hours, of freely available conferences online. Why did I fail to watch 365 of them? Because my first kid was 1.5 years old in January 2017, a new little lady arrived in December 2017, I got a new job, I travelled for my job, I gave talks, I maintain important open source projects requiring a lot of time, I’m building my own self-sufficient ecological house, the vegetable garden requires many hours, I watch other videos, and because I’m lazy sometimes. Most of the time, I was able to watch 2 or 3 conferences in a row.

Where to find the resources?

All these conferences are freely available online, on YouTube or on Vimeo for most of them. The channels I mostly watch are the following:

It’s very Computer Science centric, as you might have noticed, and it targets Rust, C++, Elm, LLVM, or Web technologies (JS, CSS…), but not only: you can sometimes find Haskell or Clojure too.

My best-of list

In March 2017, more and more people were asking me questions and asking me to share. I then decided to start a playlist of my “best-of” conferences. I added 78 conferences in 2017, and 3 new conferences have been added since then.

Thumbnails of my “best-of” 2017

Thoughts and conclusion

The challenge was sometimes easy and relaxing, and sometimes it was very hard to understand everything, especially at 2am after a long day (looking at you, CppCon). But it has been a very enjoyable way to learn a lot in a very short period of time. Many speakers are talented, and listening to them is a real pleasure. Some others are just… let’s say unprepared, and it’s good to stop and jump to another talk. It’s also a good way to get inspired by technologies you don’t necessarily know (for instance, I’m not a big fan of Clojure, but some projects are really inspiring, like Proto REPL).

Sometimes I tweeted about the talk I watched, and it was quite appreciated too. I reckon that’s because it’s a fun and easy way to learn, especially with the help of video platforms like YouTube.

Am I going to continue this challenge in 2018? Yes! But maybe not at this frequency. It’s now part of my routine to watch conferences many times per week. I like it. I don’t want to stop.

As a closing note, I would like to thank every speaker, and more importantly, every conference organizer. You are doing an amazing job: From the program, to the event, to the final sharing on the Internet with everyone. Most of you are volunteers. I know the work it represents. You are producing extremely valuable resources. Thank you!

Random thoughts about `::class` in PHP

The special ::class constant allows for fully qualified class name resolution at compile time, this is useful for namespaced classes.

I’m quoting the PHP manual. But things can be funny sometimes. Let’s go through some examples.

  • use A\B as C;
    
    $_ = C::class;

    resolves to A\B, which is perfect 🙂

  • class C
    {
        public function f()
        {
            $_ = self::class;
        }
    }

    resolves to C, which is perfect 😀

  • class C { }
    
    class D extends C
    {
        public function f()
        {
            $_ = parent::class;
        }
    }

    resolves to C, which is perfect 😄

  • class C
    {
        public static function f()
        {
            $_ = static::class;
        }
    }
    
    class D extends C { }
    
    D::f();

    resolves to D, which is perfect 😍

  • 'foo'::class

    resolves to 'foo', which is… huh? 🤨

  • "foo"::class

    resolves to 'foo', which is… expected somehow 😕

  • $a = 'oo';
    "f{$a}"::class

    generates a parse error 🙃

  • PHP_VERSION::class

    resolves to 'PHP_VERSION', which is… strange: It resolves to the fully qualified name of the constant, not the class 🤐

::class is very useful to get rid of the get_class or get_called_class functions, or even the get_class($this) trick. It is truly useful in PHP, where entities are referenced as strings, not as symbols. ::class on constants makes sense, but the name is no longer relevant. And finally, ::class on single-quoted strings is absolutely useless; on double-quoted strings it is a source of errors, because the value can be dynamic (and remember, ::class is resolved at compile time, not at run time).
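As a small illustration of the get_class($this) replacement, here is a sketch with made-up class names; static::class resolves to the class of the calling object thanks to late static binding, which is exactly what get_class($this) used to give us.

class Logger
{
    public function identity(): string
    {
        // Before: return get_class($this);
        return static::class;
    }
}

class FileLogger extends Logger
{
}

var_dump((new FileLogger())->identity()); // string(10) "FileLogger"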

atoum supports TeamCity

atoum is a popular PHP test framework. TeamCity is a Continuous Integration and Continuous Delivery software developed by JetBrains. Although atoum supports many industry standards for reporting test execution verdicts, TeamCity uses its own non-standard report format, and thus atoum was not compatible with TeamCity… until now.

icon_TeamCity

The atoum/teamcity-extension provides TeamCity support inside atoum. When executing tests, the reported verdicts are understandable by TeamCity, and activate all its UI features.

Install

If you have Composer, just run:

$ composer require atoum/teamcity-extension '~1.0'

From this point, you need to enable the extension in your .atoum.php configuration file. The following example forces the extension to be enabled for every test execution:

$extension = new atoum\teamcity\extension($script);
$extension->addToRunner($runner);

The following example enables the extension only within a TeamCity environment:

$extension = new atoum\teamcity\extension($script);
$extension->addToRunnerWithinTeamCityEnvironment($runner);

This latter installation is recommended. That’s it 🙂.

Glance

The default CLI report looks like this:

Default atoum CLI report

The TeamCity report looks like this in your terminal (note the TEAMCITY_VERSION variable as a way to emulate a TeamCity environment):

TeamCity report inside the terminal

This is less easy to read. However, once it reaches the TeamCity UI, we get the following result:

TeamCity running atoum

We are using it at Automattic. Hope it is useful for someone else!

If you find any bugs, or would like any other features, please use Github at the following repository: https://github.com/Hywan/atoum-teamcity-extension/.

Export functions in PHP à la Javascript

Warning: This post is totally useless. It is the result of a fun private company thread.

Export functions in Javascript

In Javascript, a file can export functions like this:

export function times2(x) {
    return x * 2;
}

And then we can import this function in another file like this:

import {times2} from 'foo';

console.log(times2(21)); // 42

Is it possible with PHP?

Export functions in PHP

Every entity is public in PHP: Constant, function, class, interface, or trait. They can live in a namespace. So exporting functions in PHP is absolutely useless, but just for fun, let’s keep going.

A PHP file can return an integer, a real, an array, an anonymous function, anything. Let’s try this:

<?php

return function (int $x): int {
    return $x * 2;
};

And then in another file:

<?php

$times2 = require 'foo.php';
var_dump($times2(21)); // int(42)

Great, it works.

What if our file returns more than one function? Let’s use an array (which has most hashmap properties):

<?php

return [
    'times2' => function (int $x): int {
        return $x * 2;
    },
    'answer' => function (): int {
        return 42;
    }
];

To choose what to import, let’s use the list intrinsic. It has several forms: With or without key matching, and with a long (list(…)) or short ([…]) syntax. Because we are modern, we will use the short syntax with key matching to selectively import functions:

<?php

['times2' => $mul] = require 'foo.php';

var_dump($mul(21)); // int(42)

Notice that times2 has been aliased to $mul. What a feature!
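For completeness, the other forms mentioned above look like this (using the same foo.php as before); the variable names are arbitrary:

<?php

// Long syntax, with key matching:
list('times2' => $mul) = require 'foo.php';

var_dump($mul(21)); // int(42)

// Short syntax, without key matching (positional):
[$first, $second] = [1, 2];

var_dump($first, $second); // int(1), int(2)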

Is it useful? Absolutely not. Is it fun? For me it is.