# A first equation

Just to test the Simple MathJax plugin. And as it’s a first, it’d better be a good one.
$\imath\hbar\frac{\partial}{\partial t} \Phi (x, t) = \hat{H} \Phi (x, t)$

Seems to work nicely so far, so some more mathematical posts are surely in order…

Equations can now be entered in site comments using LaTeX notation placed within the appropriate delimiters — one form for inline equations, and another for ‘displayed’ equations (as above), centred on their own individual line.

So this: $e^{i\pi} + 1 = 0$ produces this: $e^{i\pi} + 1 = 0$. And this:

$\varphi = \frac{1 + \sqrt{5}}{2}$

… produces this:
$\varphi = \frac{1 + \sqrt{5}}{2}$

# Responsive web design (in several senses)

In a nice follow-up, my post on random sampling with D was highlighted on Reddit, where — content-wise, at least — it seems to have been very much appreciated.

Unfortunately it also uncovered a serious problem with the site’s readability on mobile phones. So, cue some reading up on responsive web design and some intensive tweaking of the site’s CSS. This should hopefully improve performance and usability of the site across all devices and screen sizes.

• Progressive adjustments to the layout as the screen width shrinks, designed to ensure that page elements never clash with one another.
• Removal of the embedded web fonts. Much as I love the Linux Libertine and Linux Biolinum font families, having them embedded into the site was costing lots of bandwidth and meant the site loaded very slowly indeed on some browsers (e.g. Google Chrome would need to wait for the font to download before displaying any text). Linux Libertine and Linux Biolinum are still requested by the CSS, so if you have them on your machine you’ll still see the best face of the design; but I’ve made careful secondary font choices that are likely to cover all platforms and will still keep the site looking good.
• The use of SVG files for the site’s stylistic graphics. These are smaller than the PNG files used previously and should work on all modern browsers. They also allow me to size all elements of the page in relative terms rather than being tied to a particular resolution. (Actually, I did try doing this ages ago in an earlier iteration of this site, but abandoned it because browser support was not widespread enough. Fortunately things have changed…)

There remain a couple of niggles. The site’s drop-down menus seem unsuitable for use on touchscreens; I’ll have to do further research into an appropriate solution here. [Update 2012-07-27: should be fixed now.] There’s also a possibility that the top-of-screen search box will wrap or otherwise clash with other page elements on the smallest screens. Nevertheless, this website should now be substantially more usable on small-screen devices.

Feedback on the effect of these changes is welcome, as is advice on how to iron out any niggles :-)

# Sampling D

In other happy news (though not quite on the scale of the Higgs boson), my first contribution has been accepted to D’s standard library, Phobos. My patches implement a much faster random sampling algorithm, and also fix a couple of bugs that were present in the pre-existing random sampling functions.

Random sampling is a fun problem, which is addressed by Donald Knuth in chapter 3 (vol. 2) of The Art of Computer Programming. Here’s the essence of it: suppose you have a collection of N items, which without loss of generality we might label 1, 2, …, N or alternatively 0, 1, …, N – 1. You want to take a random subset of these items of size n < N, with every item having the same probability of being picked as any other.

Probably the simplest way to do this is to pick items from the original set uniformly at random, keeping each pick if it has not already been selected and reselecting otherwise, continuing until you have selected n in total. This has two problems, however: first, we have to store the already-selected items so that we can compare them with new selections; and second, with each new sample point we have to make a growing number of comparisons, and hence run a growing risk of having to reselect the chosen item.
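As a concrete illustration (in Python rather than D, and with names of my own choosing — this is a sketch of the idea, not the Phobos code), the rejection approach looks like this; the growing set `chosen` is exactly the storage cost described above:

```python
import random

def sample_with_rejection(N, n):
    """Pick n distinct indices from range(N) by repeated uniform
    draws, rejecting any index that has already been chosen."""
    chosen = set()
    while len(chosen) < n:
        i = random.randrange(N)  # uniform pick; may be a repeat
        chosen.add(i)            # a duplicate leaves the set unchanged
    return sorted(chosen)
```

As n approaches N, more and more draws hit already-chosen items and are wasted, which is precisely the scaling problem discussed next.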

This doesn’t matter too much if we’re selecting only 3 points from 10, say, but suppose we want to select 1000 points out of 100,000, or 100,000 out of 1 million? It’s also problematic if you’re trying to sample records from a slow-access storage device like a tape, where it’s costly to jump back and forth between different items. We need a method which scales more elegantly in terms of both storage space and algorithmic complexity.

What if, instead, we sequentially consider each item and randomly decide whether or not it is in the sample? We can’t now use n/N as the probability to select — we’d get a sample that was only on average of size n — so we need to find an alternative rule which allows us to do this sequential walk and pick exactly n points without biasing the selection process.

### Algorithm S

This is a problem which was solved in 1962 in two independently-written papers by Fan et al. and Jones, and their solution is referred to by Knuth as Algorithm S. It rests on a simple principle: the probability of picking a given item is the number of sample points left to select divided by the total number of items still remaining to consider. So, if you have so far selected m sample points and you’re on the (t + 1)st record, the chance of selecting it will be (n – m)/(N – t). Once you’ve selected exactly n points, you exit the sampling process. As a simple example, suppose we want to select 2 points out of 5: the first record is selected with probability 2/5; if it was picked, the second is selected with probability 1/4, and otherwise with probability 2/4; and so on until both points have been chosen.
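Algorithm S is short enough to sketch in a few lines. Here’s a Python version (again, an illustration rather than the library implementation); the condition `(N - t) * random.random() < n - m` is just the selection probability (n – m)/(N – t) rearranged to avoid a division:

```python
import random

def algorithm_s(N, n):
    """Sequentially select n of the records 0, 1, ..., N-1, choosing
    record t with probability (n - m)/(N - t) when m are already chosen."""
    sample = []
    m = 0
    for t in range(N):
        if (N - t) * random.random() < n - m:
            sample.append(t)
            m += 1
            if m == n:
                break  # sample complete; later records never examined
    return sample
```

Note that when n – m equals N – t the selection probability is 1, so the algorithm can never run out of records before completing the sample.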

This algorithm has a lot of nice features. It’s entirely sequential, so you could run it on a tape of records without trouble (very important back in 1962). It’s very simple to implement. On the other hand, although the worst-case scenario of considering every item is usually avoided, it’s still got an algorithmic complexity of O(N) and on average requires O(N) random variates to be generated. Again, not a big deal if you’re sampling 10 from 100, but what if you’re trying to sample 1000 from 10 million?

The problem is we have to consider each item one by one, 1, 2, 3, …, and generate a new random variate (computationally expensive!) for each item we consider. What would be much nicer is to instead jump straight to the selected record, skipping over the others. In other words, what we’d like to do is generate a ‘skip size’ S of records to ignore (which might be 0), select the record we land on, and then skip again from the following record, and so on until the sample is complete.

For this to be really worthwhile, we’d like to generate only one random variate per skip (meaning the total number scales with the sample size rather than the total number of items) and for S to be generated from that random variate in O(1) complexity. At the same time, we need to ensure that the way we generate S preserves the probabilistic character that we are looking for, i.e. that each record has an equal chance of being in the chosen sample.
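To make the division of labour concrete, here’s a Python sketch (the function names are my own invention, not Phobos’s) of a generic skip-based sampler, together with a deliberately slow baseline skip generator derived directly from Algorithm S’s selection probabilities — any faster generator with the same distribution can be dropped in without changing the driver:

```python
import random

def skip_sampler(N, n, gen_skip):
    """Select n of the records 0, ..., N-1 by repeatedly generating a
    skip size S, jumping over S records and selecting the next one.
    gen_skip(points_left, records_left) -> S; the statistical quality
    of the sample rests entirely on its distribution."""
    sample = []
    t = 0                       # index of the next unconsidered record
    while n > 0:
        S = gen_skip(n, N - t)  # how many records to pass over
        t += S                  # skip them...
        sample.append(t)        # ...and select the record we land on
        t += 1
        n -= 1
    return sample

def algorithm_s_skip(points_left, records_left):
    """Baseline skip generator: apply Algorithm S's per-record test
    until a record is selected.  Correct, but O(records) per skip."""
    S = 0
    while (records_left - S) * random.random() >= points_left:
        S += 1
    return S
```

This separation — a driver that consumes skips, and a generator that produces them — is exactly the structure that Algorithms A and D slot into below.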

### Algorithms A and D

A solution to this problem was provided in 1984 by Jeffrey Scott Vitter in a paper entitled Faster methods for random sampling, which introduced several new algorithms with superior scaling properties. Improved versions of the two most effective algorithms were introduced in a later article, An efficient algorithm for sequential random sampling (free access to both these papers is available via Prof. Vitter’s online library). An in-depth presentation of all the details of these two algorithms probably needs a whole post in itself, but we can give a rough overview as follows.

The first of these algorithms, Algorithm A, derives from some simple observations on properties of the distribution function F(s) of skip sizes (for now I won’t give the mathematical details here as I’m currently missing a good solution for mathematical notation, but Vitter’s 1984 paper covers this in depth). Vitter used these properties to derive a simple inequality between N, n, s, and the value of a randomly-generated number V in the open interval (0, 1). The skip size S is then equal to the minimum value of s that satisfies this inequality for a given randomly-generated value of V. In other words, we need to generate only one random variate per sample point, or n in total, compared with O(N) for Algorithm S.

The downside to the simplicity of this approach is that the solution to the inequality is calculated inefficiently: the algorithm still has an overall complexity of O(N). This is still an improvement on Algorithm S, and one that can even be optimal in cases where n is a sizeable fraction of N (selecting 20 out of 100, say).
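As a rough illustration (a Python sketch simplified from Vitter’s presentation, not the D implementation), a single skip of Algorithm A can be computed as follows. The quantity walked down by the loop is the probability that the skip exceeds S — a ratio of falling factorials — so finding the first S at which it drops to V or below inverts the skip-size distribution using just one uniform variate:

```python
import random

def algorithm_a_skip(points_left, records_left):
    """One skip of (a sketch of) Vitter's Algorithm A: draw a single
    uniform variate V and find the smallest S for which
      (records_left - points_left)(records_left - points_left - 1)...
      / [records_left(records_left - 1)...]       (S + 1 factors each)
    falls to V or below."""
    if points_left >= records_left:
        return 0  # every remaining record must be selected
    V = random.random()
    S = 0
    top = records_left - points_left
    quot = top / records_left        # P(skip > 0)
    while quot > V:
        S += 1
        top -= 1
        quot *= top / (records_left - S)  # now P(skip > S)
    return S
```

The single division-and-multiplication per candidate S is cheap, but the loop still runs O(N/n) times per skip on average — hence the O(N) overall cost noted above.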

Algorithm D takes advantage of this by using Algorithm A’s approach where the ratio (n – m)/(N – t) is larger than a certain value, but implements an extended approach that solves the computational complexity issue for the case where n is smaller. In its most basic form it relies on generating two random variates per sample point, the first, U, uniformly distributed in (0, 1), and the second, X, from a distribution closely approximating the skip size distribution F(s). Again, Vitter derives an inequality between these values that the skip size must satisfy; but in addition he observes that there is a simpler-to-calculate inequality, which is satisfied with high probability, that guarantees compliance. In the case of either of these inequalities being satisfied, the skip size S is set equal to the ‘floor’ of X, i.e. the largest integer not greater than X.

So, the skip size S is calculated roughly as follows:

1. Check if the ratio (n – m)/(N – t) is sufficiently large; if so, use Algorithm A to generate the skip size.
2. Generate the random variables U and X.
3. Check the simple inequality; if satisfied (likely!), set S equal to floor(X), and return this value.
4. Check the more complicated inequality; if satisfied, set S equal to floor(X), and return this value. If not satisfied (very rare!), return to Step 2 and repeat until a satisfactory value is generated.

This basic structure can be improved on in several ways. In practice, once the ratio (n – m)/(N – t) has exceeded the threshold once, we can switch over to Algorithm A for the entire remainder of the sampling; the supposed extra cost of Algorithm A is generally offset by a reduction in the number of times we have to calculate and compare the value of the ratio. This also helps to guard against worst-case behaviour of Algorithm D when (n – m) ≈ (N – t).

Second, Vitter shows that it’s possible in many cases to deterministically calculate the next value of X from the previous one, without violating the independence of the sequence of random values. This allows a sizeable reduction in the number of random variates required, so that in practice the total number is closer to n than 2n.

Finally, a trivial improvement is to consider separately the case where only 1 sample point remains to be selected — where of course all one has to do is generate a uniformly-distributed integer to calculate the skip across the remaining items.
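In Python terms this final special case is a one-liner (again a sketch, not the Phobos code): with one point left to choose, every remaining record is equally likely, so the skip is simply a uniform integer.

```python
import random

def final_skip(records_left):
    """Skip size when exactly one sample point remains: each of the
    records_left remaining records is equally likely to be chosen."""
    return random.randrange(records_left)
```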

### Getting it into D

The D framework for random sampling, written by Andrei Alexandrescu, is a rather elegant creation (worth its own blog post) that uses D’s concept of ranges to create a lazily-evaluated structure that can sample from a broad range of different inputs — an array, a sequence of numbers generated automatically without being stored, an input file, essentially any input that can be interpreted through D’s concept of an InputRange.

I’d originally implemented Algorithms A and D as a little extension library based on the GNU Scientific Library, and it was fairly trivial to rework the code for D (licence lawyers note that the pieces reworked were entirely my own work and not derivative of GSL). To my knowledge, this makes D the only language — certainly the only systems-programming language — to have Vitter’s algorithm implemented in its standard library. I’m not even aware of any mainstream scientific library that implements it, though I’d be delighted to learn otherwise.

For now, though, it seems a bit of a unique feature — and I’m definitely proud to have brought Algorithm D to D.

# Higgs boson

There are few things more lovely than the expressions on the face of scientists who have been working many, many years towards a goal, and have just seen it realized. Better by far than sport — it’s a collaborative effort where nobody loses.

Huge congratulations to the ATLAS and CMS teams at CERN and to the many international collaborations that brought about this result, and to all the people who for so long worked on making these projects possible.

# The Glorious 60-Year Rain

I’m not exactly of the royalist persuasion, so what I’m doing down near the Tower of London waiting for the Queen’s Jubilee boat parade can only be explained as some kind of aberration or alternatively just the desire to be able to say ‘I woz there’ to people who will probably be annoyed with me if I miss the opportunity. Or perhaps a forelock-tugging vestige of personality remains deep down; deary me, how we Brits are trained to bob and curtsey before a bit of pageantry and a title or two.

The night before saw glorious rain and today is no different: the crowd gathered in and around Tower Hill Terrace and Petty Wales huddle together under a grey, overcast sky with umbrellas close to hand. It’s crowded but not painfully so; many people seem to be moving around trying to work out where the best place is to see from; the riverbank here is closed off to all except a select few. There’s plenty of flags being waved, many of them decorated with the logos of OK!, Hello magazine, or the Daily Mirror, which seems somehow at the same time both awful and awfully appropriate. A few people are dressed in costume; the one that sticks in mind is the young woman both dressed and made up in red, white and blue, whose white-painted face makes her look either like a sad clown or an 18th century rake.

Everyone has turned up with much time to go, and is eagerly standing on tiptoe to peer at the cold, brown river just in case anything happens. Nothing will for quite some time, but people are peering anyway. Even I’m doing this, which is daft; there’s a bunch of artillery guns down on the riverside which are obviously there for a salute, and those will surely go off before any boats go past. A wicked streak of imagination rises up as I picture these guns being mis-fired: wouldn’t it be the most truly British way of doing things, in the best traditions of Dad’s Army, to finish off the Jubilee parade by accidentally sending the Queen’s boat to the bottom of the Thames?

As the rain finally starts to fall, umbrellas surface all across the crowd, followed swiftly by cries of ‘We can’t see! Get yer brollies down!’ The latter turns into a refrain: ‘Get yer brollies down! GET YER BROLLIES DOWN!!’ Slowly people get the message and the forest of umbrellas thins, with cheers greeting each one lowered. A ginger-haired young man in front of me protests good-naturedly, ‘There’s nothing to see!’ But of course, even if there is nothing to see, people want to be able to see it. It all feels much like the Sermon on the Mount scene of Life of Brian, with Jesus at the top of the hill speaking beautiful words while all the way at the back of the crowd people are shouting ‘Speak up!!’

The artillery guns start to fire off, BOOM BOOM BOOM. It goes on for a while before I think to start counting; it sounds like they’re doing a shot for every year of the reign. The rain starts up a bit more heavily, and the umbrellas reappear, together with the return of the refrain: ‘Get yer brollies down! We can’t see bugger all! Bloody selfish!’

When the boats finally start to come past they are at first barely visible; there is an angle through the crowd where you can see through to HMS Belfast and just catch a glimpse of them, and a little while later they can be seen over the heads of the crowds in the open water leading up to Tower Bridge. The second or third boat is bedecked with all the flags of the United Kingdom and at the front stands someone, dressed in what looks like a Beefeater’s uniform, waving to all the crowds; you can see people wondering, sometimes aloud, ‘Was that the Queen?’

More boats, and nothing confirmedly royal. A little girl on her father’s shoulders asks, ‘Can we go home now?’ When she realizes that the Queen hasn’t yet gone past, she starts asking after every boat; and then when she’s told that she can see better than her parents, every boat contains the Queen. More flag-bedecked boats go by; perhaps the flags of the Commonwealth nations?

Suddenly people get excited — there is a vibrant flash of colour visible in front of HMS Belfast, a flag that is unmistakeably the royal standard, and on the boat, a tiny white blob. A Scottish fellow in the crowd behind me is ecstatic: ‘Oh, it’s her, it’s her! Ohhh, it’s so lovely!’ And he dances off into the crowd to get a closer look, though whether he’ll succeed is doubtful; the crowd in front is crushing up as close as they can get to the riverbank, which is still some way distant, and only the people on the towers can possibly really see anything clearly. The boat becomes visible again, heading on towards Tower Bridge, bedecked in its bright royal insignia. There again is the little white blob, though whether it’s really the Queen you couldn’t tell for the life of you. Personally I think that if I’d been on the throne 60 years I wouldn’t want to be here on this rainy, windy day; I’d get a lookalike to do it and sit at home with a cup of cocoa, watching the whole thing on TV.

Rather aptly, the weather decides that it too has had enough of waiting around and chooses this moment to begin the rain in earnest. If people weren’t already starting to wander off, this decides it, with just about everyone turning round and heading for the tube station. We came; we waited; and finally we saw, as a white blob in the far distance, the Queen of England. And it most definitely rained on her parade.