How Automattic (WordPress.com & co.) partly moved away from PHPUnit to atoum?

Hello fellow developers and testers,

A few months ago at Automattic, my team and I started a new project: Having better tests for the payment system. The payment system is used by all the services at Automattic, i.e. WordPress, VaultPress, Jetpack, Akismet, PollDaddy etc. It’s a big challenge! Cherry on the cake: Our experiment could define the future of the testing practices for the entire company. No pressure.

This post is a summary of what has been accomplished so far: the achievements, the failures, and the future, focused on manual tests. As the title of this post suggests, we are going to talk about PHPUnit and atoum, which are two PHP test frameworks. This is not a PHPUnit vs. atoum fight. These are observations made for our software, in our context, with our requirements and our expectations. I think the discussion can be useful for many projects outside Automattic. I would like to apologize in advance if some parts sound too abstract; I hope you understand I can’t reveal any details about the payment system, for obvious reasons.

Where we were, and where to go

For historical reasons, WordPress, VaultPress, Jetpack & siblings use PHPUnit for server-side manual tests. There are unit, integration, and system manual tests. There are also end-to-end tests and benchmarks, but we are not interested in them now. When those products were built, PHPUnit was the main test framework in town. Since then, the test landscape has changed considerably in PHP. New competitors, like atoum or Behat, have earned a good position in the game.

Those tests have existed for many years. Some of them grew organically. PHPUnit does not require any form of structure, which is, in my opinion, questionable, but it is a reason for its success. Not requiring well-designed code in order to write tests is a real-world necessity, but too much freedom on the test side comes with a long-term cost if there is not enough attention.

Our situation is the following. The code is complex for justified reasons, and its testability sometimes suffers. Testing across many services is undoubtedly difficult. Some parts of the code are really old, mixed with others that are new, shiny, and well-done. In this context, it is really difficult to change anything, especially to move to another test framework. The amount of work it represents is colossal. No new test framework is worth the price of such a huge refactoring. But maybe the new test frameworks can help us to better test our code?

I’m a long-term contributor to atoum (top 3 contributors), and at the time of writing, I’m a core member. You have to believe me when I say that, at each step of the discussions and processes, I have been neutral, arguing both for and against atoum. The idea to switch to atoum actually partly came from me, but my knowledge of atoum is definitely a plus: I am in a good position to know the pros and the cons of the tool, and I’m perfectly aware of how it could solve the issues we have.

So after many debates and discussions, we decided to try to move to atoum. A survey and a meeting were scheduled 2 months later to decide whether we should continue or not. Spoiler: We will partly continue with it.

Our needs and requirements

Our code is difficult to test. In other words, the testability is low for some parts of the code. atoum has features to help increase the testability. I will try to summarize those features in the following short sections.

atoum/phpunit-extension

As I said, it’s not possible to rewrite/migrate all the existing tests. That would be a colossal effort with a non-negligible cost. Enter atoum/phpunit-extension.

As far as I know, atoum is the only PHP framework able to run tests that have been written for another framework. The atoum/phpunit-extension does exactly that: It runs tests written with the PHPUnit API with the atoum engines. This is fabulous! PHPUnit is not required at all. With this extension, we have been able to run our “legacy” (aka PHPUnit) tests with atoum. The following scenarios can be fulfilled:

  • Existing test suites written with the PHPUnit API can be run seamlessly by atoum, no need to rewrite them,
  • Of course, new test suites are written with the atoum API,
  • In case of a test suite migration from PHPUnit to atoum, there are two solutions:
    1. Rewrite the test suite entirely from scratch by logically using the atoum API, or
    2. Only change the parent class from PHPUnit\Framework\TestCase to atoum\phpunit\test, and suddenly it is possible to use both APIs at the same time (and thus migrate one test case after the other, for instance; see the sketch below).

This is a very valuable tool for an adventure like ours.
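To give a rough idea of the second path, here is a minimal sketch of what such a mixed test case might look like (the class name is made up, and the exact set of PHPUnit assertions supported by the extension should be checked against its documentation):

class User_Test extends atoum\phpunit\test
{
    public function test_login_is_lowercased()
    {
        // PHPUnit-style assertion, kept as-is from the legacy test…
        $this->assertSame('hywan', strtolower('Hywan'));

        // …and an atoum-style assertion, in the very same test case.
        $this
            ->string(strtolower('Hywan'))
                ->isEqualTo('hywan');
    }
}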

atoum/phpunit-extension is not perfect though. Some PHPUnit APIs are missing. And while the test verdict is strictly the same, error messages can be different, some PHPUnit extensions may not work properly etc. Fortunately, our usage of PHPUnit is pretty raw: No extensions except home-made ones, few hacks… Everything went well. We have also been able to contribute easily to the extension.

Mock engines (plural)

atoum comes with 3 mock engines:

  • Class-like mock engine for classes and interfaces,
  • Function mock engine,
  • Constant mock engine.

Being able to mock global functions or global constants is an important feature for us. It suddenly increases the testability of our code! The following example is fictional, but it’s a good illustration. WordPress is full of global functions, but it is possible to mock them with atoum like this:

public function test_foo()
{
    $this->function->get_userdata = (object) [
        'user_login' => …,
        'user_pass' => …,
        …
    ];
}

In one line of code, we mocked the get_userdata function.
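The constant mock engine is just as direct. As a purely illustrative sketch (the mocked constant and the scenario are made up for the example), forcing the value of a global constant looks like this:

public function test_os_specific_branch()
{
    // Force the PHP_OS constant as seen by the tested code, so that the
    // OS-specific branch can be exercised deterministically.
    $this->constant->PHP_OS = 'FreeBSD';

    // … then call the code under test and assert on its behaviour.
}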

Runner engines

Being able to isolate test execution is a necessity to avoid flaky tests, and to increase the trust we put in the test verdicts. atoum comes with 3 runner engines:

  • Inline, one test case after another in the same process,
  • Isolate, one test case after another but each time in a new process (full isolation),
  • Concurrent, like isolate but tests run concurrently (“at the same time”).

I’m not saying PHPUnit doesn’t have those features. It is possible to run each test in a separate process, but the test execution time blows up and the isolation is not strict, so we don’t use it. The concurrent runner engine in atoum tends to bring the execution time close to the inline engine’s, while still ensuring a strict isolation.
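As a side note, atoum also lets you choose the runner engine per test class with the @engine annotation; a small sketch (the class name is hypothetical):

/**
 * @engine concurrent
 */
class Payment_Refund_Test extends atoum\test
{
    // Each test method of this class runs in its own PHP process,
    // and those processes are executed concurrently.
}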

Fun fact: By using atoum and the atoum/phpunit-extension, we are able to run PHPUnit tests concurrently with a strict isolation!

Code coverage reports

At the time of writing, PHPUnit is not able to generate code coverage reports containing Branch or Path Coverage Criteria data. atoum supports them natively with the atoum/reports-extension (including nice graphs, see the demonstration). And we need that data.

The difficulties

On paper, most of the pain points sound addressable. It was time to experiment.

Integration to the Continuous Integration server

Our CI does not natively support standard test execution report formats. Thus we had to create the atoum/teamcity-extension. Learn more by reading a blog post I wrote recently. The TeamCity support is native inside PHPUnit (see the --log-teamcity option).

Bootstrap test environments

Our bootstrap files are… challenging. It’s expected though. Setting up a functional test environment for software like WordPress.com is not a task one can accomplish in 2 minutes. Fortunately, we have been able to re-use most of the PHPUnit parts.

Today, our unit tests run in complete isolation and concurrently. Our integration tests and system tests run in complete isolation but not concurrently, due to MySQL limitations. We have solutions, but time needs to be invested.

Generally, even if it works now, it took time to re-organize the bootstrap so that some parts can be shared between the test runners (because we didn’t switch the whole company to atoum yet, it was an experiment).

Documentation and help

Here is an interesting paradox. The majority of the team recognized that atoum’s documentation is better than PHPUnit’s, even if some parts must be rewritten or reworked. But developers already know PHPUnit, so they don’t look at the documentation. If they have to, they will find their answers on StackOverflow, or by talking to someone else in the company, but not by checking the official documentation. atoum does not have many StackOverflow threads, and few people within the company are atoum users.

What we have also observed is that when people create a new test, it’s a copy-paste of an existing one. Let’s admit this is a common and natural practice. When a difficulty is met, it’s legitimate to look elsewhere in the test repository to check whether a similar situation has already been resolved. In our context, that information was somewhat lacking. We tried to write more and more tests, but not fast enough. It would not be an issue if you had time to experiment, but in our context, we unfortunately didn’t have that time. The team faced many challenges in the same period, and the tests we are building are not simple Hello, World!s as you might think, which increases the effort.

To be honest, this was not the biggest difficulty, but still, it is important to notice.

Concurrent integration test executions

Due to some MySQL limitations combined with the complexity of our code, we are not able to run integration (and system) tests concurrently yet. Therefore it takes time to run them, probably too much for our development environments. Even if atoum has friendly options to shorten the debug loop (e.g. see the --loop option), the execution is still slow. The problem can be solved, but it requires time and deep modifications of our code.

Note that with our PHPUnit tests, no isolation is used. This is wrong, and thus we have less trust in the test verdict than with atoum. Almost everyone in the team prefers slow test execution with isolation over fast test execution with no confidence in the test verdict. So that’s only partly a difficulty: It’s a mix of a positive feature and a thorn in the foot, and a thorn we can live with. atoum is not responsible for this latency: The state of our code is.

The results

First, let’s start by the positive impacts:

  • In 2 months, we have observed that the testability of our code has been increased by using atoum,
  • We have been able to find bugs in our code that were not detected by PHPUnit, mostly because atoum checks the type of the data,
  • We have been able to migrate “legacy tests” (aka PHPUnit tests) to atoum by just moving the files from one directory to another: What a smooth migration!
  • The trust we put in our test verdict has increased thanks to a strict test execution isolation.

Now, the negative impacts:

  • Even if the testability has increased, it’s not enough. Right now, we are looking at refactoring our code. Introducing atoum was probably too early. Let’s refactor first, then use a better test toolchain later, when things are cleaner,
  • Moving the whole company at once is hard. There are thousands of manual tests. The atoum/phpunit-extension is not magical. We have to come up with more solid results, stuff to blow minds, to set the institutional inertia in motion. For instance, not being able to run integration and system tests concurrently slows down the builds on the CI; the strict isolation increases the trust we put in the test verdict, but this latency is not acceptable at the company scale,
  • All the issues we faced can be addressed, but that needs time. The experiment time frame was 2 months. We would need 1 or 2 more months to solve the majority of the remaining issues. Note that I was kind of in charge of this project, but not full time.

We stopped using atoum for manual tests. It’s likely to be a pause though. The experiment has shown we need to refactor and clean our code; then there will be a good chance for atoum to come back. The experiment has also shown how to increase the testability of our code: Not everything can be addressed by using another test framework, even if it largely helps. We can now focus on those points specifically, because we know where they are. Finally, I reckon the experiment has helped move the test infrastructure forward inside Automattic, by showing that something else exists, and that we can go further.

I said we stopped using atoum “for manual tests”. Yes, because we also have automatically generated tests. The experiment was not only about switching to atoum; many other aspects of the experiment are still running! For instance, Kitab is used for our code documentation. Kitab is able to (i) render the documentation, and (ii) test the examples written inside the documentation. That way, the documentation is guaranteed to always be up-to-date and working. Kitab generates tests for, and executes them with, atoum. It was easy to set up: We just had to re-use the existing test bootstraps designed for atoum. We also have another tool to compile HTTP API Blueprint specifications into executable tests. So far, everyone is happy with those tools, no need to go back, everything is automat(t)ic. Other tools are likely to be introduced in the future to automatically generate tests. I want to detail this particular topic in another blog post.

Conclusion

Moving to another test framework is a huge decision with many factors. The fact that atoum has atoum/phpunit-extension is a time saver. Nonetheless, a new test framework will not, by itself, fix all the testability issues of the code. The benefits of the new test framework must largely outweigh the costs. In our current context, this was not the case: atoum solves issues that are not our priorities. So yes, atoum can help us solve important issues, but since those issues are not priorities, the move to atoum was too early. During the project, we gained new automatic test tools, like Kitab. The experiment is not a failure. Will we retry atoum? It’s very likely. When? I hope in a year.

atoum supports TeamCity

atoum is a popular PHP test framework. TeamCity is Continuous Integration and Continuous Delivery software developed by JetBrains. Although atoum supports many industry standards to report test execution verdicts, TeamCity uses its own non-standard report format, and thus atoum was not compatible with TeamCity… until now.


The atoum/teamcity-extension provides TeamCity support inside atoum. When executing tests, the reported verdicts are understandable by TeamCity, and activate all its UI features.

Install

If you have Composer, just run:

$ composer require atoum/teamcity-extension '~1.0'

From this point, you need to enable the extension in your .atoum.php configuration file. The following example enables the extension for every test execution:

$extension = new atoum\teamcity\extension($script);
$extension->addToRunner($runner);

The following example enables the extension only within a TeamCity environment:

$extension = new atoum\teamcity\extension($script);
$extension->addToRunnerWithinTeamCityEnvironment($runner);

This latter installation is recommended. That’s it 🙂.

Glance

The default CLI report looks like this:

Default atoum CLI report

The TeamCity report looks like this in your terminal (note the TEAMCITY_VERSION variable as a way to emulate a TeamCity environment):

TeamCity report inside the terminal

This is less easy to read. However, when it reaches the TeamCity UI, we get the following result:

TeamCity running atoum

We are using it at Automattic. Hope it is useful for someone else!

If you find any bugs, or would like other features, please use GitHub at the following repository: https://github.com/Hywan/atoum-teamcity-extension/.

Faster find algorithms in nom

Tagua VM is an experimental PHP virtual machine written in Rust and LLVM. It is composed of a set of libraries. One of them that keeps me busy these days is tagua-parser. It contains the lexical and syntactic analysers for the PHP language, in addition to the AST (Abstract Syntax Tree). If you would like to know more about this project, you can watch the talk I gave at PHPTour last week: Tagua VM, a safe PHP virtual machine.

The tagua-parser library is built with parser combinators. Instead of having a classical grammar compiled to a parser, we write pure functions acting as small parsers, and we then combine them together. This post does not explain why this is a sane approach in our context, but keep in mind that it is much easier to test, to maintain, and to optimise.

Because this project is complex enough, we delegate the parser combinator implementation to nom.

nom is a parser combinator library written in Rust. Its goal is to provide tools to build safe parsers without compromising speed or memory consumption. To that end, it makes extensive use of Rust’s strong typing, zero-copy parsing, push streaming and pull streaming, and it provides macros and traits to abstract most of the error-prone plumbing.

Recently, I have been working on optimisations in the FindToken and FindSubstring traits from nom itself. These traits provide methods to find a token (i.e. a lexeme) and to find a substring, crazy naming. However, the naming is not totally accurate: FindToken expects to find a single item (if implemented for u8, it will look for a u8 in a &[u8]), while FindSubstring really is about finding a substring, i.e. a token of any length.

It appeared that these methods can be optimised in some cases. Both default implementations use Rust iterators: A regular iterator for FindToken, and a window iterator for FindSubstring, i.e. an iterator over overlapping subslices of a given length. We have benchmarked big PHP comments, which are analysed by parsers actively using these two trait implementations.

Here are the results, before and after our optimisations:

test …::bench_span ... bench:      73,433 ns/iter (+/- 3,869)
test …::bench_span ... bench:      15,986 ns/iter (+/- 3,068)

A boost of 78%! Nice!

The pull request has been merged today, thank you Geoffroy Couprie! The new algorithms heavily rely on the memchr crate, so all the credit should really go to Andrew Gallant! This crate provides a safe interface to libc’s memchr and memrchr. It also provides fallback implementations when either function is unavailable.

The new algorithms are only implemented for &[u8] though. Fortunately, the implementation for &str falls back to the former.

This is a small contribution, but it brings a very nice boost. Hope it will benefit other projects!

I am also blowing the dust off of Algorithms on Strings, by M. Crochemore, C. Hancart, and T. Lecroq. I am pretty sure it should be useful for nom and tagua-parser. If you haven’t read this book yet, I can only encourage you to do so!

sabre/katana

sabre/katana's logo
Project’s logo.

What is it?

sabre/katana is a contact, calendar, task list and file server. What does that mean? Nowadays you probably have multiple devices (PC, phones, tablets, TVs…). If you would like your address books, calendars, task lists and files to be synced between all these devices, from everywhere, you need a server. All your devices are then considered as clients.

But there is an issue with the server. Most of the time, you might choose Google or maybe Apple, but one may wonder: Can we trust these servers? Can we give them our private data, like all our contacts, our calendars, all our photos…? What if you are a company or an association with sensitive data that is really private or strategic? Can you still trust them? Where is the data stored? Who can look at this data? More and more, there is a huge need for “personal” servers.

Moreover, servers like Google’s or Apple’s are often closed: You reach your data with specific clients, and they are not available on all platforms. This is for strategic reasons of course. But with sabre/katana, you are not limited. See the schema above: Firefox OS can talk to iOS or Android at the same time.

sabre/katana is this kind of server. You can install it on your machine and manage users in a minute. Each user will have a collection of address books, calendars, task lists and files. This server can talk to a long list of devices, mainly thanks to a scrupulous respect of industry standards:

  • Mac OS X:
    • OS X 10.10 (Yosemite),
    • OS X 10.9 (Mavericks),
    • OS X 10.8 (Mountain Lion),
    • OS X 10.7 (Lion),
    • OS X 10.6 (Snow Leopard),
    • OS X 10.5 (Leopard),
    • BusyCal,
    • BusyContacts,
    • Fantastical,
    • Rainlendar,
    • ReminderFox,
    • SoHo Organizer,
    • Spotlife,
    • Thunderbird,
  • Windows:
    • eM Client,
    • Microsoft Outlook 2013,
    • Microsoft Outlook 2010,
    • Microsoft Outlook 2007,
    • Microsoft Outlook with Bynari WebDAV Collaborator,
    • Microsoft Outlook with iCal4OL,
    • Rainlendar,
    • ReminderFox,
    • Thunderbird,
  • Linux:
    • Evolution,
    • Rainlendar,
    • ReminderFox,
    • Thunderbird,
  • Mobile:
    • Android,
    • BlackBerry 10,
    • BlackBerry PlayBook,
    • Firefox OS,
    • iOS 8,
    • iOS 7,
    • iOS 6,
    • iOS 5,
    • iOS 4,
    • iOS 3,
    • Nokia N9,
    • Sailfish.

Did you find your device in this list? Probably yes 😉.

sabre/katana sits in the middle of all your devices and syncs all your data. Of course, it is free and open source. Go check the source!

List of features

Here is a non-exhaustive list of features supported by sabre/katana. Depending on whether you are a user or a developer, the features that might interest you are radically different. I decided to show you a list from the user’s point of view. If you would like a list from the developer’s point of view, please see this exhaustive list of supported RFCs for more details.

Contacts

All usual fields are supported, like phone numbers, email addresses, URLs, birthday, ringtone, texttone, related names, postal addresses, notes, HD photos etc. Of course, groups of cards are also supported.

My card on Mac OS X
My card inside the native Contact application of Mac OS X.
My card on Firefox OS
My card inside the native Contact application of Firefox OS.

My photo is not in HD, I really have to update it!

Cards can be encoded into several formats. The most usual format is VCF. sabre/katana allows you to download the whole address book of a user as a single VCF file. You can also create, update and delete address books.

Calendars

A calendar is just a set of events. Each event has several properties, such as a title, a location, a start date, an end date, some notes, URLs, alarms etc. sabre/katana also supports recurring events (“each last Monday of the month, at 11am…”), in addition to scheduling (see below).

My calendars on Mac OS X
My calendars inside the native Calendar application of Mac OS X.
My calendars on Firefox OS
My calendars inside the native Calendar application of Firefox OS.

A few words about calendar scheduling. Let’s say you are organizing an event, like New release (we always enjoy release day!). You would like to invite several people, but you don’t know whether they can be present or not. In your event, all you have to do is add attendees. How are they going to be notified about this event? Two situations:

  1. Either attendees are registered on your sabre/katana server and they will receive an invite inside their calendar application (we call this iTIP),
  2. Or they are not registered on your server and they will receive an email with the event as an attached file (we call this iMIP). All they have to do is to open this event in their calendar application.
Typical mail to invite an attendee to an event
Invite an attendee by email because she is not registered on your sabre/katana server.

Notice the gorgeous map embedded inside the email!

Once they have received the event, they can accept it, decline it, or answer “maybe” (meaning they will try to attend).

Receive an invite to an event
Receive an invite to an event. Here: Gordon is inviting Hywan. Three choices for Hywan: accept, decline, or maybe.
Status of all attendees
Hywan has accepted the event. Here is what the event looks like. Hywan can see the response of each attendee.
Notification from attendees
Gordon is even notified that Hywan has accepted the event.

Of course, attendees will be notified too if the event has been moved, canceled, refreshed etc.

Calendars can be encoded into several formats. The most usual format is ICS. sabre/katana allows you to download the whole calendar of a user as a single ICS file. You can also create, update and delete calendars.

Task lists

A task list is exactly like a calendar (from a programming point of view). Instead of containing event objects, it contains todo objects.

sabre/katana supports groups of tasks, reminders, progression etc.

My task lists on Mac OS X
My task lists inside the native Reminder application of Mac OS X.

Just like calendars, task lists can be encoded into several formats, including ICS. sabre/katana allows you to download the whole task list of a user as a single ICS file. You can also create, update and delete task lists.

Files

Finally, sabre/katana creates a home collection per user: A personal directory that can contain files and directories and… is synced between all your devices (as usual 😄).

sabre/katana also creates a special directory called public/, which is a public directory. Every file and directory stored inside it is accessible to anyone who has the correct link. No directory listing is shown, to protect your public data.

Just like contact, calendar and task list applications, you need a client application to connect to your home collection on sabre/katana.

Connect to a server in Mac OS X
Connect to a server with the Finder application of Mac OS X.

Then, your public directory on sabre/katana will be a regular directory like any other.

List of my files
List of my files, right here in the Finder application of Mac OS X.

sabre/katana is able to store any kind of file. Yes, any kind. It’s just files. However, it white-lists the kinds of files that can be shown in the browser. Only images, audio, videos, texts, PDF and some vendor formats (like Microsoft Office) are considered safe (for the server). This way, associations can share music, videos or images, companies can share PDF or Microsoft Word documents etc. Maybe in the future sabre/katana will white-list more formats. If a format is not white-listed, the file will be forced to download.

How is sabre/katana built?

sabre/katana is based on two big and solid projects:

  1. sabre/dav,
  2. Hoa.

sabre/dav is one of the most powerful CardDAV, CalDAV and WebDAV frameworks on the planet. Trusted by the likes of Atmail, Box, fruux and ownCloud, it powers millions of users world-wide! It is written in PHP and is open source.

Hoa is a modular, extensible and structured set of PHP libraries. Fun fact: Also open source, this project is trusted by ownCloud too, in addition to Mozilla, joliCode etc. Recently, this project has recorded more than 600,000 downloads and the community is about to reach 1000 people.

sabre/katana is then a program based on sabre/dav for the DAV part and on Hoa for everything else, like the logic inside the sabre/dav plugins. The result is a ready-to-use server with a nice administration interface.

To ensure code quality, we use atoum, a popular and modern test framework for PHP. So far, sabre/katana has more than 1000 assertions.

Conclusion

sabre/katana is a server for contacts, calendars, task lists and files. Everything is synced, all the time and everywhere. It connects perfectly to a lot of devices on the market. Several features we need and use daily have been presented. This is the easiest and a secure way to host your own private data.

Go download it!

Control the terminal, the right way

Nowadays, there are plenty of terminal emulators in the wild. Each one has a specific way to handle controls. How many colours does it support? How do you control the style of a character? How do you control more than the style, like the cursor or the window? In this article, we are going to explain and show in action the right way to control your terminal with a portable and easy-to-maintain API. We are going to talk about stat, tput, terminfo, Hoa\Console… but do not be afraid, it’s easy and fun!

Introduction

Terminals. They are ancient interfaces, but still not old-fashioned. They are fast, efficient, work remotely with low bandwidth, are secure and very simple to use.

A terminal is a canvas composed of columns and lines. Only one character fits at each position. Depending on the terminal, some features are enabled; for instance, a character might be stylized with a colour, a decoration, a weight etc. Let’s consider the first of them: colour. A colour belongs to a palette, which contains either 2, 8, 256 or more colours. One may wonder:

  • How many colours does a terminal support?
  • How to control the style of a character?
  • How to control more than style, like the cursor or the window?

Well, this article is going to explain how a terminal works and how we interact with it. We are going to talk about terminal capabilities, terminal information (stored in database) and Hoa\Console, a PHP library that provides advanced terminal controls.

The basis of a terminal

A terminal, or a console, is an interface that allows us to interact with the computer. This interface is textual. Like a graphical interface, there are inputs: The keyboard and the mouse, and outputs: The screen or a file (a real file, a socket, a FIFO, something else…).

There is a ton of terminal emulators in the wild.

Whatever terminal you use, inputs are handled by programs (or processes) and outputs are produced by these programs. We said outputs can be the screen or a file. Actually, everything is a file, so the screen is also a file. However, the user is able to use redirections to choose where the outputs must go.

Let’s consider the echo program that prints all its options/arguments on its output. Thus, in the following example, foobar is printed on the screen:

$ echo 'foobar'

And in the following example, foobar is redirected to a file called log:

$ echo 'foobar' > log

We are also able to redirect the output to another program, like wc that counts stuff:

$ echo 'foobar' | wc -c
7

Now we know there are 7 characters in foobar… no! echo automatically adds a new-line (\n) after each line; so:

$ echo -n 'foobar' | wc -c
6

This is more correct!

Detecting type of pipes

Inputs and outputs are called pipes. Yes, trivial, this is nothing more than basic pipes!

Pipes are like a game, see Mario 😉!

There are 3 standard pipes:

  • STDIN, standing for the standard input pipe,
  • STDOUT, standing for the standard output pipe and
  • STDERR, standing for the standard error pipe (also an output one).

If the output is attached to the screen, we say it is a “direct output”. Why is this important? Because if we stylize a text, this is only for the screen, not for a file. A file should receive regular text, not all the decorations and styles.

Fortunately, the Hoa\Console\Console class provides the isDirect, isPipe and isRedirection static methods to know whether the pipe is respectively direct, a pipe or a redirection (damn naming…!). Thus, let Type.php be the following program:

echo 'is direct:      ';
var_dump(Hoa\Console\Console::isDirect(STDOUT));

echo 'is pipe:        ';
var_dump(Hoa\Console\Console::isPipe(STDOUT));

echo 'is redirection: ';
var_dump(Hoa\Console\Console::isRedirection(STDOUT));

Now, let’s test our program:

$ php Type.php
is direct:      bool(true)
is pipe:        bool(false)
is redirection: bool(false)

$ php Type.php | xargs -I@ echo @
is direct:      bool(false)
is pipe:        bool(true)
is redirection: bool(false)

$ php Type.php > /tmp/foo; cat !!$
is direct:      bool(false)
is pipe:        bool(false)
is redirection: bool(true)

The first execution is very classic. STDOUT, the standard output, is direct. The second execution redirects the output to another program, then STDOUT is of kind pipe. Finally, the last execution redirects the output to a file called /tmp/foo, so STDOUT is a redirection.

How does it work? We use fstat to read the mode of the file. The underlying fstat implementation is defined in C, so let’s take a look at the documentation of fstat(2). stat is a C structure that looks like:

struct stat {
    dev_t    st_dev;              /* device inode resides on             */
    ino_t    st_ino;              /* inode's number                      */
    mode_t   st_mode;             /* inode protection mode               */
    nlink_t  st_nlink;            /* number of hard links to the file    */
    uid_t    st_uid;              /* user-id of owner                    */
    gid_t    st_gid;              /* group-id of owner                   */
    dev_t    st_rdev;             /* device type, for special file inode */
    struct timespec st_atimespec; /* time of last access                 */
    struct timespec st_mtimespec; /* time of last data modification      */
    struct timespec st_ctimespec; /* time of last file status change     */
    off_t    st_size;             /* file size, in bytes                 */
    quad_t   st_blocks;           /* blocks allocated for file           */
    u_long   st_blksize;          /* optimal file sys I/O ops blocksize  */
    u_long   st_flags;            /* user defined flags for file         */
    u_long   st_gen;              /* file generation number              */
}

The value of mode returned by the PHP fstat function is equal to st_mode in this structure. And st_mode has the following bits:

#define S_IFMT   0170000 /* type of file mask                */
#define S_IFIFO  0010000 /* named pipe (fifo)                */
#define S_IFCHR  0020000 /* character special                */
#define S_IFDIR  0040000 /* directory                        */
#define S_IFBLK  0060000 /* block special                    */
#define S_IFREG  0100000 /* regular                          */
#define S_IFLNK  0120000 /* symbolic link                    */
#define S_IFSOCK 0140000 /* socket                           */
#define S_IFWHT  0160000 /* whiteout                         */
#define S_ISUID  0004000 /* set user id on execution         */
#define S_ISGID  0002000 /* set group id on execution        */
#define S_ISVTX  0001000 /* save swapped text even after use */
#define S_IRWXU  0000700 /* RWX mask for owner               */
#define S_IRUSR  0000400 /* read permission, owner           */
#define S_IWUSR  0000200 /* write permission, owner          */
#define S_IXUSR  0000100 /* execute/search permission, owner */
#define S_IRWXG  0000070 /* RWX mask for group               */
#define S_IRGRP  0000040 /* read permission, group           */
#define S_IWGRP  0000020 /* write permission, group          */
#define S_IXGRP  0000010 /* execute/search permission, group */
#define S_IRWXO  0000007 /* RWX mask for other               */
#define S_IROTH  0000004 /* read permission, other           */
#define S_IWOTH  0000002 /* write permission, other          */
#define S_IXOTH  0000001 /* execute/search permission, other */

Awesome, we have everything we need! We mask mode with S_IFMT to keep only the file-type bits. Then we just have to check whether it is a named pipe S_IFIFO, a character special S_IFCHR etc. Concretely (a short sketch follows the list below):

  • isDirect checks that the mode is equal to S_IFCHR: It means the output is attached to the screen (in our case),
  • isPipe checks that the mode is equal to S_IFIFO: This is a special file that behaves like a FIFO stack (see the documentation of mkfifo(1)); everything which is written is directly read just after, and the reading order is defined by the writing order (first-in, first-out!),
  • isRedirection checks that the mode is equal to S_IFREG, S_IFDIR, S_IFLNK, S_IFSOCK or S_IFBLK, in other words: All kinds of files to which a redirection can apply. Why? Because the STDOUT (or another STD* pipe) of the current process is defined as a file pointer to the redirection destination, and it can only be a file, a directory, a link, a socket or a block file.
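Here is a simplified, hypothetical version of that logic, written directly with PHP’s fstat function and the octal values listed above:

$stat = fstat(STDOUT);
$type = $stat['mode'] & 0170000; // mask with S_IFMT to keep the file type only

$isDirect      = (0020000 === $type); // S_IFCHR
$isPipe        = (0010000 === $type); // S_IFIFO
$isRedirection = in_array(
    $type,
    [0100000, 0040000, 0120000, 0140000, 0060000], // S_IFREG, S_IFDIR, S_IFLNK, S_IFSOCK, S_IFBLK
    true
);

var_dump($isDirect, $isPipe, $isRedirection);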

I encourage you to read the implementation of the Hoa\Console\Console::getMode method.

So yes, this is useful to enable styles on text but also to define the default verbosity level. For instance, if a program outputs the result of a computation with some explanations around, the highest verbosity level would output everything (the result and the explanations) while the lowest level would output only the result. Let’s try with the toUpperCase.php program:

$verbose = Hoa\Console\Console::isDirect(STDOUT);
$string  = $argv[1];
$result  = (new Hoa\String\String($string))->toUpperCase();

if(true === $verbose)
    echo $string, ' becomes ', $result, ' in upper case!', "\n";
else
    echo $result, "\n";

Then, let’s execute this program:

$ php toUpperCase.php 'Hello world!'
Hello world! becomes HELLO WORLD! in upper case!

And now, let’s execute this program with a pipe:

$ php toUpperCase.php 'Hello world!' | xargs -I@ echo @
HELLO WORLD!

Useful and very simple, isn’t it?

Terminal capabilities

We can control the terminal with the inputs, like the keyboard, but we can also control the outputs. How? With the text itself. Actually, an output does not contain only the text: It also includes control functions. It’s like HTML: Around a text, you can have an element specifying that the text is a link. It’s exactly the same for terminals! To specify that a text must be in red, we must add a control function around it.

Fortunately, these control functions have been standardized in the ECMA-48 document: Control Functions for Coded Character Sets. However, not all terminals implement the whole standard, and for historical reasons, some terminals use slightly different control functions. Moreover, some information does not belong to this standard (because it is out of its scope), like: How many colours does the terminal support? Or does the terminal support the meta key?

Consequently, each terminal has a list of capabilities. This list is split into 3 categories:

  • boolean capabilities,
  • number capabilities,
  • string capabilities.

For instance:

  • the “does the terminal support the meta key” question is a boolean capability called has_meta_key, whose value is true or false,
  • the “number of colours supported by the terminal” is a… number capability called max_colors, whose value can be 2, 8, 256 or more,
  • the “clear screen control function” is a string capability called clear_screen, whose value might be \e[H\e[2J,
  • the “move the cursor one column to the right” is also a string capability, called cursor_right, whose value might be \e[C.

All the capabilities can be found in the documentation of terminfo(5) or in the documentation of xcurses. I encourage you to follow these links and see how rich the terminal capabilities are!

Terminal information

Terminal capabilities are stored as information in databases. Where are these databases located? In binary files. Common locations are:

  • /usr/share/terminfo,
  • /usr/share/lib/terminfo,
  • /lib/terminfo,
  • /usr/lib/terminfo,
  • /usr/local/share/terminfo,
  • /usr/local/share/lib/terminfo,
  • etc.
  • or the TERMINFO or TERMINFO_DIRS environment variables.

Inside these directories, we have a tree of the form xx/name, where xx is the hexadecimal ASCII value of the first letter of the terminal name name, or n/name, where n is the first letter of the terminal name. The terminal name is stored in the TERM environment variable. For instance, on my computer:

$ echo $TERM
xterm-256color
$ file /usr/share/terminfo/78/xterm-256color
/usr/share/terminfo/78/xterm-256color: Compiled terminfo entry

We can use the Hoa\Console\Tput class to retrieve this information. The getTerminfo static method returns the path of the terminal information file. The getTerm static method returns the terminal name. Finally, the whole class is able to parse a terminal information database (by default, it will use the file returned by getTerminfo). For instance:

$tput = new Hoa\Console\Tput();
var_dump($tput->count('max_colors'));

/**
 * Will output:
 *     int(256)
 */

On my computer, with xterm-256color, I have 256 colours, as expected. If we parse the information of xterm and not xterm-256color, we will have:

$tput = new Hoa\Console\Tput(Hoa\Console\Tput::getTerminfo('xterm'));
var_dump($tput->count('max_colors'));

/**
 * Will output:
 *     int(8)
 */

The power in your hand: Control the cursor

Let’s summarize. We are able to parse and know all the capabilities of a specific terminal (including the current user’s one). If we would like a powerful terminal API, we need to control the basics, like the cursor.

Remember. We said that the terminal is a canvas of columns and lines. The cursor is like a pen. We can move it and write something. We are going to (partly) see how the Hoa\Console\Cursor class works.

I like to move it!

The moveTo static method moves the cursor to an absolute position. For example:

Hoa\Console\Cursor::moveTo($x, $y);

The control function we use is cursor_address. So all we need to do is to use the Hoa\Console\Tput class and call the get method on it to get the value of this string capability. It is a parameterized one: On xterm-256color, its value is \e[%i%p1%d;%p2%dH. We replace the parameters with $x and $y and we output the result. That’s all! We are able to move the cursor to an absolute position on all terminals! This is the right way to do it.

We use the same strategy for the move static method, which moves the cursor relative to its current position. For example:

Hoa\Console\Cursor::move('right up');

We split the steps and, for each step, we read the appropriate string capability using the Hoa\Console\Tput class. For right, we read the parm_right_cursor string capability; for up, we read parm_up_cursor etc. Note that parm_right_cursor is different from cursor_right: The first one is used to move the cursor a certain number of times while the second one is used to move the cursor only once. With performance in mind, we should use the first one if we have to move the cursor several times.
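To see the difference, we can read both capabilities with the get method mentioned earlier; the dumped values below are only indicative, they depend on your terminal:

$tput = new Hoa\Console\Tput();

var_dump($tput->get('cursor_right'));      // moves the cursor right once, e.g. \e[C
var_dump($tput->get('parm_right_cursor')); // moves it a parameterized number of times, e.g. \e[%p1%dC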

The getPosition static method returns the position of the cursor. This way of interacting is a little bit different: We must write a control function on the output, and then the terminal replies on the input. See the implementation for yourself.

print_r(Hoa\Console\Cursor::getPosition());

/**
 * Will output:
 *     Array
 *     (
 *         [x] => 7
 *         [y] => 42
 *     )
 */

In the same way, we have the save and restore static methods that save the current position of the cursor and restore it. This is very useful. We use the save_cursor and restore_cursor string capabilities.

Also, the clear static method splits some parts to clear. For each part (direction or way), we read the appropriate string capability from Hoa\Console\Tput: clear_screen to clear the whole screen, clr_eol to clear everything on the right of the cursor, clr_eos to clear everything below the cursor etc.

Hoa\Console\Cursor::clear('left');

See what we learnt in action:

echo 'Foobar', "\n",
     'Foobar', "\n",
     'Foobar', "\n",
     'Foobar', "\n",
     'Foobar', "\n";

           Hoa\Console\Cursor::save();
sleep(1);  Hoa\Console\Cursor::move('LEFT');
sleep(1);  Hoa\Console\Cursor::move('↑');
sleep(1);  Hoa\Console\Cursor::move('↑');
sleep(1);  Hoa\Console\Cursor::move('↑');
sleep(1);  Hoa\Console\Cursor::clear('↔');
sleep(1);  echo 'Hahaha!';
sleep(1);  Hoa\Console\Cursor::restore();

echo "\n", 'Bye!', "\n";

The result is presented in the following figure.

Saving, moving, clearing and restoring the cursor with Hoa\Console.

The resulting API is portable, clean, simple to read and very easy to maintain! This is the right way to do it.

To get more information, please read the documentation.

Colours and decorations

Now: Colours. This is mainly the reason why I decided to write this article. We see the same libraries again and again, doing only colours in the terminal, and unfortunately not in the right way 😞.

A terminal has a palette of colours. Each colour is indexed by an integer, from 0 to potentially +∞. The size of the palette is described by the max_colors number capability. Usually, a palette contains 1, 2, 8, 256 or 16 million colours.

The xterm-256color palette.

So the first thing to do is to check whether we have more than 1 colour. If not, we must not colorize the given text. Next, if we have fewer than 256 colours, we have to convert the style into a palette containing 8 colours. Same with fewer than 16 million colours: We have to convert into 256 colours.
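Here is a minimal sketch of that first guard, reusing the Tput and Cursor APIs shown elsewhere in this article (the exact colorize directives are assumptions based on the example further down):

$tput = new Hoa\Console\Tput();
$text = 'Important message';

if (1 >= $tput->count('max_colors')) {
    // Monochrome terminal: do not colorize at all.
    echo $text, "\n";
} else {
    // A real palette is available: the style will be down-converted if needed.
    Hoa\Console\Cursor::colorize('foreground(yellow)');
    echo $text, "\n";
    Hoa\Console\Cursor::colorize('foreground(normal)');
}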

Moreover, we can define the style of the foreground or of the background with respectively the set_a_foreground and set_a_background string capabilities. Finally, in addition to colours, we can define other decorations like bold, underline, blink or even inverse the foreground and the background colours.

One thing to remember: With this capability, we only define the style at a given “pixel” and it will apply to the following text. It is not exactly like HTML, where we have a beginning and an end; here we only have a beginning. Let’s try!

Hoa\Console\Cursor::colorize('underlined foreground(yellow) background(#932e2e)');
echo 'foo';
Hoa\Console\Cursor::colorize('!underlined background(normal)');
echo 'bar', "\n";

The API is pretty simple: We start underlining the text, we set the foreground to yellow and we set the background to #932e2e. Then we output something. We continue by cancelling the underline decoration and resetting the background. Finally we output something else. Here is the result:

Fun with Hoa\Console\Cursor::colorize.

What do we observe? My terminal does not support more than 256 colours. Thus, #932e2e is automatically converted into the closest colour in my actual palette! This is the right way to do it.

For fun, you can change the colours in the palette with the Hoa\Console\Cursor::changeColor static method. You can also change the style of the cursor, for example to a block, an underscore (_) or a vertical bar (|).

To get more information, please read the documentation.

The power in your hand: Readline

A more complete usage of Hoa\Console\Cursor, and even Hoa\Console\Window, is the Hoa\Console\Readline class, which is a powerful readline. More than autocompleters, history, key bindings etc., it makes an advanced use of cursors. See it in action:

An autocompletion menu, made with Hoa\Console\Cursor and Hoa\Console\Window.

We use Hoa\Console\Cursor to move the cursor or change the colours and Hoa\Console\Window to get the dimensions of the window, scroll some text in it etc. I encourage you to read the implementation.

To get more information, please read the documentation.

The power in your hand: Sound 🎵

Yes, even sound is defined by terminal capabilities. The famous beep is given by the bell string capability. You would like to make a beep? Easy:

$tput = new Hoa\Console\Tput();
echo $tput->get('bell');

That’s it!

Bonus: Window

As a bonus, a quick demo of Hoa\Console\Window because it’s fun.

The video shows the execution of the following code:

Hoa\Console\Window::setSize(80, 35);
var_dump(Hoa\Console\Window::getPosition());

foreach([[100, 100], [150, 150], [200, 100], [200, 80],
         [200,  60], [200, 100]] as list($x, $y)) {

    sleep(1);  Hoa\Console\Window::moveTo($x, $y);
}

sleep(2);  Hoa\Console\Window::minimize();
sleep(2);  Hoa\Console\Window::restore();
sleep(2);  Hoa\Console\Window::lower();
sleep(2);  Hoa\Console\Window::raise();

We resize the window, we get its position, we move the window on the screen, we minimize and restore it, and finally we put it behind all other windows just before raising it.

To get more information, please read the documentation.

Conclusion

In this article, we saw how to control the terminal: Firstly, by detecting the type of pipes, and secondly, by reading and using the terminal capabilities. We know where these capabilities are stored and we saw a few of them in action.

This approach ensures your code will be portable, easy to maintain and easy to use. The portability is very important because, like browsers and user devices, we have a lot of terminal emulators released in the wild. We have to care about them.

I encourage you to take a look at the Hoa\Console library and to contribute to make it even more awesome 😄.

atoum has two release managers

What is atoum?

Short introduction: atoum is a simple, modern and intuitive unit testing framework for PHP. Originally created by Frédéric Hardy, a good friend, it has grown thanks to many contributors.

atoum’s logo.

No one can say that atoum is not simple or intuitive. The framework offers several awesome features and is more like a meta unit testing framework. Indeed, the “user-land” of atoum, I mean the whole assertion API (“this is an integer and it is equal to…”), is based on a very flexible mechanism, handled or embedded in runners, reporters etc. Thus, the framework is very extensible. You can find more information in the README.md of the project: Why atoum?.

Several important projects and companies use atoum. For instance, Pickle, the PHP extension installer created by Pierre Joye, another friend (the world is very small 😉), uses atoum for its unit tests. Another example is M6Web, the geeks working at M6, the most profitable private national French TV channel, who also use atoum. Yet another example: Mozilla is using atoum to test some of their applications.

Where is the cap’tain?

Since the beginning, Frédéric has been a great leader for the project. He has inspired many people, users and contributors alike. In real life, on stage, on IRC… his personality and charisma have been helpful in all aspects. However, leading such a project is a challenging and nerve-wracking daily job. I know what I am talking about with Hoa. Fortunately for Frédéric, some contributors were there to help.

Where to go cap’tain?

However, having contributors does not create a community. A community is a group of people that share something together. A project needs a community with strong connections. They do not all need to look in the same direction, but they have to share something. In the case of atoum, I would say the project has been a victim of its own success. We have seen the number of users increase very quickly, and the project was not ready for such a massive use. The documentation was not ready, a lot of features were not finalized, there were few contributors, and the absence of a real community did not help. Put all these things together, blend them, and you obtain a bomb 😄. The project leaders were under terrible pressure.

In these conditions, it is not easy to work, especially when users ask for new features. The need for a roadmap and for people making decisions was very strong.

When the community acts

After a couple of months under water, we decided that we needed to create a structure around the project: An organization. Frédéric is not able to do everything by himself. That’s why 2 release managers have been elected: Mikaël Randy and I. Thank you to Julien Bianchi, another friend 😉, for having organized these elections and for being one of the most active contributors to atoum!

Our goal is to define the roadmap of atoum:

  • what will be included in the next version and what will not,
  • what features need work,
  • what bugs or issues need to be solved,
  • etc.

Well, a release manager is a pretty common job.

Why 2? To avoid the bus effect and to delegate. We all have families, friends, jobs and side projects. With 2 release managers, we have twice as much time to organize this project, and it deserves such an amount of time.

The goal is also to organize the community if it is possible. New great features are coming and they will allow more people to contribute and build their “own atoum”. See below.

Features to port!

Everything is not defined at 100% but here is an overview of what is coming.

Baba, from Astérix and Obélix.

First of all, you will find the latest issues and bugs we have to close before the first release.

Second, you will notice the version number… 1.0.0. Yes! atoum will have tags! After several discussions (#261, #300, #342, #349…), even if atoum is rolling-released, it will have tags, in the semver format. More information on the blog of Julien Bianchi: atoum embraces semver.

Finally, a big feature is the Extension API, which allows writing extensions, such as:

  • atoum/visibility-extension, which allows bypassing method visibility in tests; example:

class Foo {

    protected function bar ( $arg ) {

        return $arg;
    }
}

// and…

class Foo extends atoum\test {

    public function testBaz ( ) {

        $this
            ->if($sut = new \Foo())
            ->and($arg = 'bar')
            ->then
                ->variable($this->invoke($sut)->bar($arg))->isEqualTo($arg);
    }
}

Now you will be able to test your protected and private methods!

  • atoum/bdd-extension, which allows writing tests with the behavior-driven development style and vocabulary; example:
class Formatter extends atoum\spec {

    public function should_format_underscore_separated_method_name ( ) {

        $this
            ->given($formatter = new testedClass())
            ->then
                ->invoking->format(__FUNCTION__)->on($formatter)
                    ->shouldReturn('should format underscore separated method name');
    }
}

Even the output looks familiar:

Possible output with the atoum/bdd-extension.

  • atoum/json-schema-extension, which allows validating JSON data, optionally against a JSON schema, directly from the assertion API; example:

class Foo extends atoum\test {

    public function testIsJson ( ) {

        $this
            ->given($string = '{"foo": "bar"}')
            ->then
                ->json($string);
    }

    public function testValidatesSchema ( ) {

        $this
            ->given($string = '["foo", "bar"]')
            ->then
                ->json($string)->validates('{"title": "test", "type": "array"}')
                ->json($string)->validates('/path/to/json.schema');
    }
}

  • atoum/praspel-extension, which brings Praspel into atoum, notably to automatically generate test data; example:

class Foo extends atoum\test {

    public function testFoo ( ) {

        $this->if($regex  = $this->realdom->regex('/[\w\-_]+(\.[\w\-\_]+)*@\w\.(net|org)/'))
             ->and($email = $this->sample($regex))
             ->then
                …
    }
}

Here, we have generated a string based on its regular expression. Reminder, you might have seen this on this blog: Generate strings based on regular expressions.

Fun fact: the atoum/json-schema-extension is tested with atoum obviously and… atoum/praspel-extension!

Conclusion

atoum has a bright future with exciting features! We sincerely hope this new direction will gather existing and new contributors 😄.

❤️ open-source!

Generate strings based on regular expressions

During my PhD thesis, I partly worked on the problem of automatic, accurate test data generation. In order to be complete and self-contained, I addressed all kinds of data types, including strings. This article is the first one of a little series that aims at showing how to generate accurate and relevant strings under several constraints.

What is a regular expression?

We are talking about formal language theory here. In the known world, there are four kinds of languages. More formally, the Chomsky hierarchy, formulated in 1956, classifies grammars (which define languages) into four levels:

  1. unrestricted grammars, matching languages known as Turing languages, with no restriction,
  2. context-sensitive grammars, matching contextual languages,
  3. context-free grammars, matching algebraic languages, based on pushdown automata,
  4. regular grammars, matching regular languages.

Each level includes the next one. The last level is the “weakest”, which must not sound negative here. Regular expressions are used often because of their simplicity and also because they solve most of the problems we encounter daily.

A regular expression is a small language with very few operators and, most of the time, simple semantics. For instance, ab(c|d) means: a word (a datum) starting with ab and followed by c or d. We also have quantification operators (also known as repetition operators), such as ?, * and +. We also have {x,y} to define a repetition between x and y times. Thus, ? is equivalent to {0,1}, * to {0,} and + to {1,}. When y is missing, it means +∞, so unbounded (or more exactly, bounded by the limits of the machine). So, for instance, ab(c|d){2,4}e? means: a word starting with ab, followed 2, 3 or 4 times by c or d (so cc, cd, dc, ccc, ccd, cdc and so on), and potentially followed by e.
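A quick way to convince yourself of these semantics is to ask PCRE itself, here through PHP’s preg_match:

var_dump(preg_match('/^ab(c|d){2,4}e?$/', 'abcdce')); // int(1): ab, then c d c, then e
var_dump(preg_match('/^ab(c|d){2,4}e?$/', 'abc'));    // int(0): only one repetition of (c|d)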

The goal here is not to teach you regular expressions; this is just a tiny reminder. There are plenty of regular expression flavors. You might know POSIX regular expressions or Perl Compatible Regular Expressions (PCRE). Forget the first one, please: its syntax and semantics are too limited. PCRE is the flavor I recommend all the time.

Behind every formal language there is a graph. A regular expression is compiled into a Finite State Machine (FSM). I am not going to draw and explain them, but it is interesting to know that behind a regular expression there is a basic automaton. No magic.
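
As an illustration (a minimal sketch, unrelated to Hoa's internals), the expression ab(c|d) can be seen as a tiny automaton encoded as a transition table:

$transitions = [
    0 => ['a' => 1],
    1 => ['b' => 2],
    2 => ['c' => 3, 'd' => 3],
];
$accepting = [3];

function matches ( array $transitions, array $accepting, $word ) {

    $state = 0;

    // Follow one transition per symbol; reject if no transition exists.
    foreach (str_split($word) as $symbol) {
        if (!isset($transitions[$state][$symbol])) {
            return false;
        }

        $state = $transitions[$state][$symbol];
    }

    return in_array($state, $accepting, true);
}

var_dump(matches($transitions, $accepting, 'abc')); // bool(true)
var_dump(matches($transitions, $accepting, 'abe')); // bool(false)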

Why focus on regular expressions?

This article focuses on regular languages instead of other kinds of languages because we use them very often (even daily). I am going to address context-free languages in another article, be patient, young padawan. The needs and constraints with other kinds of languages are not the same, and more complex algorithms must be involved. So we are starting easy.

Understanding PCRE: lex and parse them

The Hoa\Compiler library provides both LL(1) and LL(k) compiler-compilers. The documentation describes how to use it. We discover that the LL(k) compiler comes with a grammar description language called PP. What does it mean? It means, for instance, that the grammar of PCRE can be written in the PP language and that Hoa\Compiler\Llk will transform this grammar into a compiler. That’s why we call them “compilers of compilers”.

Fortunately, the Hoa\Regex library provides the grammar of the PCRE language in the hoa://Library/Regex/Grammar.pp file. Consequently, we are able to analyze regular expressions written in the PCRE language! Let’s first try in a shell with the hoa compiler:pp tool:

$ echo 'ab(c|d){2,4}e?' | hoa compiler:pp hoa://Library/Regex/Grammar.pp 0 --visitor dump
>  #expression
>  >  #concatenation
>  >  >  token(literal, a)
>  >  >  token(literal, b)
>  >  >  #quantification
>  >  >  >  #alternation
>  >  >  >  >  token(literal, c)
>  >  >  >  >  token(literal, d)
>  >  >  >  token(n_to_m, {2,4})
>  >  >  #quantification
>  >  >  >  token(literal, e)
>  >  >  >  token(zero_or_one, ?)

We read that the whole expression is a single concatenation of two tokens, a and b, followed by a quantification, followed by another quantification. The first quantification is an alternation of (a choice between) two tokens, c and d, repeated 2 to 4 times. The second quantification is the e token, which can appear zero or one time. Pretty simple.

The final output of the Hoa\Compiler\Llk\Parser class is an Abstract Syntax Tree (AST). The documentation of Hoa\Compiler explains all that stuff; you should read it. The LL(k) compiler is split into very distinct layers in order to improve hackability. Again, the documentation teaches us that there are four levels in the compilation process: lexical analyzer, syntactic analyzer, trace and AST. The lexical analyzer (also known as a lexer) transforms the textual data being analyzed into a sequence of tokens (formally known as lexemes). It checks whether the data is composed of valid pieces. Then, the syntactic analyzer (also known as a parser) checks that the order of the tokens in this sequence is correct (formally, we say that it derives the sequence; see the Matching words section to learn more).

Still in the shell, we can get the result of the lexical analyzer by using the --token-sequence option; thus:

$ echo 'ab(c|d){2,4}e?' | hoa compiler:pp hoa://Library/Regex/Grammar.pp 0 --token-sequence
  #  …  token name   token value  offset
-----------------------------------------
  0  …  literal      a                 0
  1  …  literal      b                 1
  2  …  capturing_   (                 2
  3  …  literal      c                 3
  4  …  alternation  |                 4
  5  …  literal      d                 5
  6  …  _capturing   )                 6
  7  …  n_to_m       {2,4}             7
  8  …  literal      e                12
  9  …  zero_or_one  ?                13
 10  …  EOF                           15

This is the sequence of tokens produced by the lexical analyzer. The tree is not built yet, because this is only the first step of the compilation process. However, it is always interesting to understand these different steps and to see how it works.

Now we are able to analyze any regular expression in the PCRE format! The result of this analysis is a tree. You know what is fun with trees? Visiting them.

Visiting the AST

Unsurprisingly, each node of the AST can be visited thanks to the Hoa\Visitor library. Here is an example with the “dump” visitor:

use Hoa\Compiler;
use Hoa\File;

// 1. Load grammar.
$compiler = Compiler\Llk\Llk::load(
    new File\Read('hoa://Library/Regex/Grammar.pp')
);

// 2. Parse the data.
$ast      = $compiler->parse('ab(c|d){2,4}e?');

// 3. Dump the AST.
$dump     = new Compiler\Visitor\Dump();
echo $dump->visit($ast);

This program will print the same AST dump we have previously seen in the shell.

How do we write our own visitor? A visitor is a class with a single visit method. Let’s try a visitor that pretty-prints a regular expression, i.e. transforms:

ab(c|d){2,4}e?

into:

a
b
(
    c
    |
    d
){2,4}
e?

Why a pretty printer? First, it shows how to visit a tree. Second, it shows the structure of the visitor: we filter by node ID (#expression, #quantification, token etc.) and apply the respective computations. A pretty printer is often a good way to become familiar with the structure of an AST.

Here is the class. It catches only useful constructions for the given example:

use Hoa\Visitor;

class PrettyPrinter implements Visitor\Visit {

    public function visit ( Visitor\Element $element,
                            &$handle = null,
                            $eldnah  = null ) {

        static $_indent = 0;

        $out    = null;
        $nodeId = $element->getId();

        switch($nodeId) {

            // Reset indentation and…
            case '#expression':
                $_indent = 0;

            // … visit all the children.
            case '#quantification':
                foreach($element->getChildren() as $child)
                    $out .= $child->accept($this, $handle, $eldnah);
              break;

            // One new line between each child of the concatenation.
            case '#concatenation':
                foreach($element->getChildren() as $child)
                    $out .= $child->accept($this, $handle, $eldnah) . "\n";
              break;

            // Add parentheses and increase indentation.
            case '#alternation':
                $oout = [];

                $pIndent = str_repeat('    ', $_indent);
                ++$_indent;
                $cIndent = str_repeat('    ', $_indent);

                foreach($element->getChildren() as $child)
                    $oout[] = $cIndent . $child->accept($this, $handle, $eldnah);

                --$_indent;
                $out .= $pIndent . '(' . "\n" .
                        implode("\n" . $cIndent . '|' . "\n", $oout) . "\n" .
                        $pIndent . ')';
              break;

            // Print token value verbatim.
            case 'token':
                $tokenId    = $element->getValueToken();
                $tokenValue = $element->getValueValue();

                switch($tokenId) {

                    case 'literal':
                    case 'n_to_m':
                    case 'zero_or_one':
                        $out .= $tokenValue;
                       break;

                    default:
                        throw new RuntimeException(
                            'Token ID ' . $tokenId . ' is not well-handled.'
                        );
                }
              break;

            default:
                throw new RuntimeException(
                    'Node ID ' . $nodeId . ' is not well-handled.'
                );
        }

        return $out;
    }
}

And finally, we apply the pretty printer to the AST, as previously seen:

$compiler    = Compiler\Llk\Llk::load(
    new File\Read('hoa://Library/Regex/Grammar.pp')
);
$ast         = $compiler->parse('ab(c|d){2,4}e?');
$prettyprint = new PrettyPrinter();
echo $prettyprint->visit($ast);

Et voilà !

Now, put all that stuff together!

Isotropic generation

We can use Hoa\Regex and Hoa\Compiler to get the AST of any regular expression written in the PCRE format. We can use Hoa\Visitor to traverse the AST and apply computations according to the type of each node. Our goal is to generate strings based on regular expressions. What kind of generation are we going to use? There are plenty of them: uniform random, smallest, coverage-based…

The simplest is isotropic generation, also known as random generation. But “random” alone says nothing: what is the distribution, do we have any uniformity? Isotropic means each choice will be resolved randomly and uniformly. Uniformity still has to be defined: does it cover the whole set of nodes or just the immediate children of a node? Isotropic means we consider only the immediate children. For instance, if a node #alternation has c immediate children, the probability P(C) of choosing a given child C is:

P(C) = 1/c

Yes, simple as that!

We can use the Hoa\Math library that provides the Hoa\Math\Sampler\Random class to sample uniform random integers and floats. Ready?
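
As a quick illustration of this uniformity (a sketch, not part of the final visitor), we can sample an index among c = 3 children many times and check that each child is picked roughly one third of the time:

use Hoa\Math;

$sampler = new Math\Sampler\Random();
$counts  = [0 => 0, 1 => 0, 2 => 0];

// Pick one of 3 children, 30 000 times.
for ($i = 0; $i < 30000; ++$i) {
    ++$counts[$sampler->getInteger(0, 2)];
}

print_r($counts); // Each bucket should be close to 10 000.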

Structure of the visitor

The structure of the visitor is the following:

use Hoa\Visitor;
use Hoa\Math;

class IsotropicSampler implements Visitor\Visit {

    protected $_sampler = null;

    public function __construct ( Math\Sampler $sampler ) {

        $this->_sampler = $sampler;

        return;
    }

    public function visit ( Visitor\Element $element,
                            &$handle = null,
                            $eldnah  = null ) {

        switch($element->getId()) {

            // …
        }
    }
}

We set a sampler and we start visiting and filtering nodes by their node ID. The following code will generate a string based on the regular expression contained in the $expression variable:

$expression  = '…';
$ast         = $compiler->parse($expression);
$generator   = new IsotropicSampler(new Math\Sampler\Random());
echo $generator->visit($ast);

We are going to change the value of $expression step by step until having ab(c|d){2,4}e?.

Case of #expression

A node of type #expression has only one child. Thus, we simply return the computation of this node:

case '#expression':
    return $element->getChild(0)->accept($this, $handle, $eldnah);
  break;

Case of token

We consider only one type of token for now: literal. A literal can be an escaped character, a single character, or . (which means anything). We consider only a single character for this example (spoiler: the whole visitor already exists). Thus:

case 'token':
    return $element->getValueValue();
  break;

Here, with $expression = 'a'; we get the string a.

Case of #concatenation

A concatenation is just the computation of all children joined in a single piece of string. Thus:

case '#concatenation':
    $out = null;

    foreach($element->getChildren() as $child)
        $out .= $child->accept($this, $handle, $eldnah);

    return $out;
  break;

At this step, with $expression = 'ab'; we get the string ab. Totally crazy.

Case of #alternation

An alternation is a choice between several children. All we have to do is select a child based on the probability given above. The number of children of the current node is given by the getChildrenNumber method. We are also going to use the integer sampler. Thus:

case '#alternation':
    $childIndex = $this->_sampler->getInteger(
        0,
        $element->getChildrenNumber() - 1
    );

    return $element->getChild($childIndex)
                   ->accept($this, $handle, $eldnah);
  break;

Now, with $expression = 'ab(c|d)'; we get the strings abc or abd at random. Try several times to see by yourself.

Case of #quantification

A quantification is an alternation of concatenations. Indeed, e{2,4} is strictly equivalent to ee|eee|eeee. We have only two quantifications in our example: ? and {x,y}. We are going to find the values of x and y and then choose a number of repetitions at random between these bounds. Let’s go:

case '#quantification':
    $out = null;
    $x   = 0;
    $y   = 0;

    // Filter the type of quantification.
    switch($element->getChild(1)->getValueToken()) {

        // ?
        case 'zero_or_one':
            $y = 1;
          break;

        // {x,y}
        case 'n_to_m':
            $xy = explode(
                ',',
                trim($element->getChild(1)->getValueValue(), '{}')
            );
            $x  = (int) trim($xy[0]);
            $y  = (int) trim($xy[1]);
          break;
    }

    // Choose the number of repetitions.
    $max = $this->_sampler->getInteger($x, $y);

    // Concatenate.
    for($i = 0; $i < $max; ++$i)
        $out .= $element->getChild(0)->accept($this, $handle, $eldnah);

    return $out;
  break;

Finally, with $expression = 'ab(c|d){2,4}e?'; we can have the following strings: abdcce, abdc, abddcd, abcde etc. Nice isn’t it? Want more?

for($i = 0; $i < 42; ++$i)
    echo $generator->visit($ast), "\n";

/**
 * Could output:
 *     abdce
 *     abdcc
 *     abcdde
 *     abcdcd
 *     abcde
 *     abcc
 *     abddcde
 *     abddcce
 *     abcde
 *     abcc
 *     abdcce
 *     abcde
 *     abdce
 *     abdd
 *     abcdce
 *     abccd
 *     abdcdd
 *     abcdcce
 *     abcce
 *     abddc
 */

Performance

It is difficult to give numbers because they depend on a lot of parameters: your machine configuration, the PHP VM, whether other programs are running, etc. Still, I have generated 1 million (10^6) strings in less than 25 seconds on my machine (an old MacBook Pro), which is pretty reasonable.

[Figure: time (in milliseconds) to generate a certain number of strings (log-scaled).]
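
If you want to reproduce a rough measurement on your own machine, here is a sketch that reuses the $ast and $generator variables defined above; the exact figures will obviously differ:

$start = microtime(true);

// Generate 10^6 strings from the same AST.
for ($i = 0; $i < 1000000; ++$i) {
    $generator->visit($ast);
}

printf("%.2f seconds\n", microtime(true) - $start);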

Conclusion and surprise

So, yes, now we know how to generate strings based on regular expressions! Supporting the whole PCRE format is difficult, though. That’s why the Hoa\Regex library provides the Hoa\Regex\Visitor\Isotropic class, which is a more advanced visitor. It supports classes, negated classes, ranges, all quantifications, all kinds of literals (characters, escaped characters, character types such as \w, \d, \h…), etc. Consequently, all you have to do is:

use Hoa\Regex;

// …
$generator = new Regex\Visitor\Isotropic(new Math\Sampler\Random());
echo $generator->visit($ast);

This algorithm is used in Praspel, a specification language I have designed during my PhD thesis. More specifically, this algorithm is used inside realistic domains. I am not going to explain it today but it allows me to introduce the “surprise”.

Generate strings based on regular expressions in atoum

atoum is an awesome unit test framework. You can use the Atoum\PraspelExtension extension to use Praspel, and therefore realistic domains, inside atoum. You can use realistic domains to validate and to generate data; they are designed for that. Obviously, we can use the Regex realistic domain. The extension provides several features, including sample, sampleMany and predicate, to respectively generate one datum, generate many data, and validate a datum against a realistic domain. To declare a regular expression, we write:

$regex = $this->realdom->regex('/ab(c|d){2,4}e?/');

And to generate a datum, all we have to do is:

$datum = $this->sample($regex);

For instance, imagine you are writing a test called test_mail and you need an email address:

public function test_mail ( ) {

    $this
        ->given(
            $regex   = $this->realdom->regex('/[\w\-_]+(\.[\w\-\_]+)*@\w\.(net|org)/'),
            $address = $this->sample($regex),
            $mailer  = new \Mock\Mailer()
        )
        ->when($mailer->sendTo($address))
        ->then
            ->…
}

It is easy to read, fast to execute, and it helps to focus on the logic of the test instead of on the test data (also known as fixtures). Note that most of the time the regular expressions are already in the code (maybe as constants), which makes the tests even easier to write and to maintain.
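
For instance, here is a sketch of what reusing such a constant could look like; the \App\Validator class, its EMAIL_PATTERN constant and its isEmail method are hypothetical and only stand for code that would already exist in your application:

class Validator extends atoum\test {

    public function test_generated_addresses_are_accepted ( ) {

        $this
            ->given(
                // \App\Validator::EMAIL_PATTERN is the hypothetical constant
                // already used by the production code.
                $regex   = $this->realdom->regex(\App\Validator::EMAIL_PATTERN),
                $address = $this->sample($regex)
            )
            ->when($result = \App\Validator::isEmail($address))
            ->then
                ->boolean($result)->isTrue();
    }
}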

I hope you enjoyed this first part of the series :-)! This work has been published at the International Conference on Software Testing, Verification and Validation: Grammar-Based Testing using Realistic Domains in PHP.