Sniffen Packets

With a name like Sniffen, it's got to smell good.

My First Prolog

I’ve been meaning to learn Prolog for more than ten years. I saw Jon Millen use it brilliantly back at MITRE, with meaningful simulations of cryptographic protocols in just a few dozen lines—but while I could read each line, the insight that allowed their selection and composition escaped me. I’ve used Datalog for plenty; it’s great for exactly the problems for which extensibility and federation make SQL such a pain. Prolog adds all sorts of operational concerns around space usage and termination. Two recent events nudged me to finally learn Prolog: first, a friend posted a Mastermind-style puzzle. Lots of folks solved it, but the conversation got into how many of the clues were necessary. I used the top hit from Googling for “Mastermind Prolog” to play with a solution, but it felt awkward and stiff—lots of boilerplate, way too many words compared to a CLP(FD) solver in Clojure or similar. A week later, Hacker News pointed to a recent ebook on modern Prolog: The Power of Prolog. I knew a little about the old way of doing arithmetic and logic problems in Prolog, with N is X+Y and such; even skimming this book told me that new ways are much better.

I’ve read the first half of that book, and while I still don’t understand DCGs, I think I can improve on that old Mastermind program.

First, here’s the problem as posed by Jeff: Mastermind Puzzle

To start, we can write:

jcb(Answer) :-
    layout(Answer),
    mastermind([6,8,2], Answer, 1, 0),  % 682; 1 right & in place
    mastermind([6,1,4], Answer, 0, 1),  % 614; 1 right but wrong place
    mastermind([2,0,6], Answer, 0, 2),  % 206; 2 digits right but wrong place
    mastermind([7,3,8], Answer, 0, 0),  % 738; all wrong
    mastermind([3,8,0], Answer, 0, 1).  % 380; one right but wrong place

This is a succinct and straightforward translation of the problem: given a guess and some unknown Answer, the mastermind gives us some number of black and some number of white pegs.

We can write the mastermind program something like this:

mastermind(Guess,Answer,Black,White) :-
    count_blacks(Guess, Answer, Black),
    count_whites(Guess, Answer, N),
    White is N - Black.

layout(X) :- X=[_,_,_].

% check if all elements of a list fulfill certain criteria
all([],_).
all([H|T],Function) :- call(Function,H),all(T,Function).

Now this isn’t so pretty. Having to list out color(red). color(blue). wouldn’t feel so terrible, but having to list out digits instead of saying digit(X) :- integer(X), X >= 0, X =< 9. seems ridiculous. Having to write my own all/2 also seems ridiculous. And the version I got from the Web went on in this style, even to having lots of cuts—something tells me that’s not right! So let’s get to rewriting.

First, we can use an excellent library for Constraint Logic Programming over Finite Domains. And since we know we’ll eventually want to treat the puzzle constraints as data, let’s make that conversion now:

?- use_module(library(clpfd)).

% Jeff’s specific problem


jcb_rules([rule([6,8,2],1,0),
           rule([6,1,4],0,1),
           rule([2,0,6],0,2),
           rule([7,3,8],0,0),
           rule([3,8,0],0,1)]).

jcb(Answer,RuleNumbers) :- layout(Answer), digits(Answer),
                           maplist(jcb_helper(Answer),RuleNumbers).

jcb_helper(Answer,RuleNumber) :- jcb_rules(Rules),
                                 nth0(RuleNumber,Rules,rule(Guess,Black,White)),
                                 mastermind(Guess,Black,White,Answer).

jcb(Answer) :- jcb(Answer,[0,1,2,3,4]).

We can still address the original problem with jcb(A)., and indeed that’s the query I ran over and over while debugging as I transformed the program.

The core mastermind program has only a couple changes: Answer moves to the last argument, for easier use with call and such. The last line and the digits predicate change to use CLP(FD) constraints.

% How to play Mastermind

mastermind(Guess,Black,White,Answer) :-
    count_blacks(Guess, Answer, Black),
    count_whites(Guess, Answer, N),
    N #= White + Black.


digits(X) :- X ins 0..9.

Already I like this better: it’s shorter and it is more useful, because the program runs in multiple directions!

Now let’s look into how count_blacks and count_whites work. The first is a manual iteration over a guess and an answer. In Haskell I’d write this as something like countBlacks = length . filter id . zipWith (==), I suppose—though that would only compute one way. This can compute the number of black pegs from a guess and an answer, or the constraints on an answer from a guess & a number of black pegs, or similarly for constraints on a guess given an answer and black pegs.

count_blacks([], [], 0).
count_blacks([H1|T1], [H2|T2], Cnt2) :- H1 #= H2,
					count_blacks(T1, T2, Cnt1),
					Cnt2 #= Cnt1+1.
count_blacks([H1|T1], [H2|T2], Cnt) :- H1 #\= H2,
				       count_blacks(T1, T2, Cnt).

It does the equality and the addition by constraints, which I’d hoped meant the solver could propagate from the puzzle input (number of pegs) to constraints on what the pegs are. In practice it seems to backtrack on those—I haven’t seen an intermediate state offering A = 3 \/ 5 \/ 7.
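For readers who don’t think in Prolog, the one-way version of this computation is just positional equality. Here’s a quick Python sketch (a hypothetical helper of my own, not part of the original program—and unlike the Prolog, it only runs in one direction):

```python
def count_blacks(guess, answer):
    """Count positions where guess and answer agree exactly."""
    return sum(g == a for g, a in zip(guess, answer))

# From the puzzle: guess 682 against the candidate answer 042
# matches only in the last position.
print(count_blacks([6, 8, 2], [0, 4, 2]))  # 1
```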

count_whites has to handle all the reordering and counting. There’s a library function to do that with constraints, global_cardinality. All the stuff with pairs and folds is just to get data in and out of the list-of-pairs format used by global_cardinality. That function also requires that the shape of the cardinality list be specified, so numlist is there to make it 10 elements long, one per digit.

count_whites(G,A,N) :- numlist(0,9,Ds),
		       pairs_keys(Gcard,Ds), pairs_keys(Acard,Ds),
		       pairs_values(Gcard,Gvals), pairs_values(Acard,Avals),
		       global_cardinality(G,Gcard), global_cardinality(A,Acard),
		       foldl(mins_,Gvals,Avals,0,N).

mins_(Gval,Aval,V0,V) :- V #= V0 + min(Gval,Aval).
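The cardinality trick is easier to see outside of constraints: count how many times each digit occurs in the guess and in the answer, sum the per-digit minima to get the total number of matching digits regardless of position, and subtract the blacks to get the whites. A Python sketch of the same idea (the function names are mine, not from the Prolog):

```python
from collections import Counter

def count_matches(guess, answer):
    """Total digits in common, ignoring position: sum of per-digit minima."""
    g, a = Counter(guess), Counter(answer)
    return sum(min(g[d], a[d]) for d in range(10))

def count_whites(guess, answer):
    blacks = sum(x == y for x, y in zip(guess, answer))
    return count_matches(guess, answer) - blacks

# Guess 206 against answer 042 shares the digits 0 and 2,
# neither in the right place: two white pegs.
print(count_whites([2, 0, 6], [0, 4, 2]))  # 2
```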

What’s above is enough to solve the problem in the image! But my friends’ conversation quickly turned to whether some rules were superfluous. Because library(clpfd) has good reflection facilities, we can quickly program something to try subsets of rules, showing us only those that fully solve the problem. This isn’t a constraint program; it’s ordinary Prolog-style backtracking search. For five rules, it tries \(2^5=32\) possibilities. It’s slow enough that I notice a momentary pause while it runs, even with only five rules!

First, it’s weird that there’s no powerset library function. Maybe I’m missing it?

% What’s the shortest set of constraints that actually solves it?

powerset([], []).
powerset([_|T], P) :- powerset(T,P).
powerset([H|T], [H|P]) :- powerset(T,P).
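In languages with an itertools-style library this enumeration is nearly a one-liner. For comparison, a Python version of the same subset generation:

```python
from itertools import chain, combinations

def powerset(xs):
    """All subsets of xs, from the empty set up to xs itself."""
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Five rules give 2**5 = 32 candidate rule sets to try.
subsets = list(powerset([0, 1, 2, 3, 4]))
print(len(subsets))  # 32
```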

This uses a weird Prolog predicate, findall, to collect all the answers that would be found from backtracking the search above, with all possible rule sets. One of Prolog’s superpowers is that it handles lots of things in a “timelike” way, by backtracking at an interactive prompt. When you want to program over those outputs, you either let the backtracking naturally thread through, or you use findall to collect them into a list.

no_hanging_goals filters for only those that solve the problem—“hanging goals” are constraints still attached to variables that haven’t been narrowed down to a single solution. It’s a bit of a hack with copy_term, but the manual page documents that you can copy from X to X if you just want to look at the constraint annotations without really copying the term.

which_rules(Answers) :- powerset([0,1,2,3,4],RuleNumbers),
			findall(RuleNumbers-A,
				(jcb(A,RuleNumbers), no_hanging_goals(A)),
				Answers).

no_hanging_goals(X) :- copy_term(X,X,[]).

Last, we use findall again to collect all the cases of rules that work with which_rules, sort by length, and extract the set with the shortest rules.

shortest_rules(Shortest) :- findall(L-R,(which_rules([R-_]),
					 length(R,L),
					 L > 0),
				    Pairs),
			    keysort(Pairs,[Min-_|_]),
			    findall(R,member(Min-R,Pairs),Shortest).

Indeed, it confirms that the first three rules are the shortest set:

bts@Atelier ~/s/mastermind ❯❯❯ swipl
Welcome to SWI-Prolog (threaded, 64 bits, version 8.0.3)
SWI-Prolog comes with ABSOLUTELY NO WARRANTY. This is free software.
Please run ?- license. for legal details.

For online help and background, visit
For built-in help, use ?- help(Topic). or ?- apropos(Word).

?- jcb(A).
A = [0, 4, 2] ;

?- shortest_rules(Rs).
Rs = [[0, 1, 2]].

You can try it yourself! Grab the source from Github, install SWI Prolog, and let me know what you find.


Zampolit revisited

Some time in 2011, I built a little toy program called Zampolit. Its job is to support stories about who’s contributing to a shared project, and how the team is making progress towards a shared goal. I learned a bunch of my collaborative writing habits from the MIT Assassins’ Guild, a group of LARPers. A typical 60-person 10-day game is about half a million words, produced by a team of about five people over the course of about a year. Because most of these teams are producing their first such work, they dramatically mis-estimate their future rate of production, and their chances of producing differently in the future than they have in the recent past. But because they’re MIT LARPers, they produce their work using an object-oriented variant of TeX1 and coordinate using git.

The people who make the schedule need to know whether a game will actually be ready. They could ask the team, but the team’s probably wrong—and has identity-based reasons to support their own confusion. The schedulers therefore appoint a zampolit, named for the Soviet political officers of the later twentieth century. The zampolit is responsible for learning as much about the game as one of its authors, and then for telling the schedulers honestly how it’s going. We do have stories of zampolits “going native” and joining the group delusion of the authors that the work will be done on time, of course. So there are a handful of reasons for tools to support their work:

  • To help the zampolit be honest with themselves;
  • To help clearly communicate to the team, so they can understand an honest assessment of progress;
  • To show who’s doing the work, and who will have to change how much to make a different contribution;
  • To inform the schedulers about not only whether the team will likely make its deadlines, but how plausible that is.

What could be better for this than a detailed graph of all the existing contributions? The old Zampolit was an attempt to provide this—but if you look at the code, you’ll see that it’s really weird. It’s not just that it’s in Haskell. It’s using Haskell as a variant shell script using the HSH library. Is it enough to say that the author of HSH maintains the largest remaining Gopher hole on the Internet? HSH doesn’t build cleanly with any modern version of Haskell. The old Zampolit is also slow: it iterates through the git history of a project, checking out every revision—so it has to interact with the file system a lot.

I’ve reconsidered how to approach a program like this over the last ten years, so today I rewrote Zampolit to run in a fraction of the time—by never touching the file system—and in less than a hundred lines of mixed Shell, Awk, and GNUplot. I haven’t yet figured out a great way to distribute this, but for now you can just unpack a tar file into your git repository and then commit & track it along with your work; maybe some day I’ll work out a way to ship this better.

For now, let’s look at the program. The top level is a shell program. It figures out where we’re working, then produces a list of every commit. For each commit, it emits the author email, timestamp, commit hash (not used thereafter, but helpful for debugging), and size of the word diff.

MYDIR=`git rev-parse --show-toplevel`/zampolit

git rev-list --pretty="%ae %at %H" master \
    | grep -v "^commit" \
    | while read a t c; do
    echo -n "$a $t $c ";
    git show -p --word-diff=porcelain $c \
        | grep -e '^[-+][^-+]' \
        | wc -w; \
        done \
          >| $MYDIR/

gnuplot -e "mydir='$MYDIR'" $MYDIR/zampolit.gnuplot && rm $MYDIR/ && evince zampolit.pdf

Then it invokes GNUplot. GNUplot reads the data file twice, each time using an Awk script to interpret it. This version smooths the data with smooth cumulative, but there’s no reason you couldn’t do away with that. The noenhanced bit is necessary to keep GNUplot from mis-interpreting @ in email addresses as some sort of negative space character—GNUplot is weird software too!

set datafile separator ","
stats '<awk -f '.mydir.'/zampolit.awk '.mydir.'/' using 1 nooutput

set xdata time
set timefmt "%s"
set format x "%m/%d/%Y"
set terminal pdfcairo size 10in,7.5in
set output "zampolit.pdf"
set xlabel "date"
set ylabel "words"
set title "word counts by author"
set datafile missing "?"
set key autotitle columnhead noenhanced
plot for [i=2:STATS_columns] '<awk -f '.mydir.'/zampolit.awk '.mydir.'/' using 1:i smooth cumulative with linespoints

This Awk program is mostly a gift from Tom Fenech at Stack Overflow. Thanks!

# repeat comma n times; the extra parameter makes s a local
function r(n,  s) {
    for (j = 0; j < n; ++j)
        s = s ","
    return s
}

BEGIN { header = "time" }

# add each element to array, indexed on name & timestamp
# drop $3, the hash value
{ a[$1,$2] = $4 }

# register any new names seen in column one
!seen[$1] {
    seen[$1] = ++c
    header = header "," $1
}

END {
    print header
    for (i in a) {
        # the index $1,$2 from above is separated by
        # the built-in variable SUBSEP
        split(i, b, SUBSEP)
        # now b[1] is the name, which is used to determine
        # how many commas; b[2] is the x value
        # a[i] is the y value
        printf "%s%s%s\n", b[2], r(seen[b[1]]), a[i]
    }
}
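The shape of that transformation—pivoting author/timestamp/count rows into a sparse CSV with one column per author—may be clearer in Python. This is a sketch of the same idea with made-up sample data, not a drop-in replacement for the Awk:

```python
import csv
import io

def pivot(lines):
    """Turn 'author timestamp hash words' rows into a CSV with one
    column per author; cells an author didn't produce stay empty."""
    rows, authors = [], []
    for line in lines:
        author, ts, _commit_hash, words = line.split()
        if author not in authors:
            authors.append(author)
        rows.append((ts, author, words))
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["time"] + authors)
    for ts, author, words in rows:
        cells = [""] * len(authors)
        cells[authors.index(author)] = words
        writer.writerow([ts] + cells)
    return out.getvalue()

print(pivot(["alice 100 abc 40", "bob 110 def 7", "alice 120 ghi 12"]))
```

Each output row carries one author’s count in that author’s column, which is exactly the format GNUplot’s `plot for [i=2:STATS_columns]` iterates over.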

Anyway, it looks like all that’s needed now is to drop those files into project/zampolit/, run, and you should have a quick assessment of who’s working and how. Please let me know if it’s helpful to you, or if some feature would make this better.

  1. object systems seem invented for adventure games, after all↩︎

Party games with video chat in the time of quarantine

We tried family Jackbox last night over Zoom. Lessons:

  1. Set the “safe for kids” mode unless you want to see what Grandma draws.

  2. Set the “extended timeouts”; most Jackbox games have some toggles labelled as “recommended only for streaming games”—this is the case they mean. Latency for the web apps will be high.

  3. Turn down the volume, at least for music & sfx. Zoom’s sharing of the sound from the game is much much louder than the voices.

Every household needs one device to run Zoom & see the “main screen,” optionally another for video chat close up. Every person needs a device to run the client web app. The host needs a real computer to run Jackbox + Zoom, sharing the screen. We used the Steam Link app to get the main screen on an Apple TV. That had its own problems: audio feedback with the iPad in the room (also on the Zoom chat) was minor, but the use of the Apple Remote as a trackpad was nearly impossible.

For next time, I guess I’ll try adding a Steam Remote to the AppleTV—maybe its trackpad will be more usable. But maybe it’s a protocol issue with getting fine grained trackpad support through an AppleTV, and I’ll have to move a hardware Steam Link (sadly discontinued) down there.

tech, games

The best K-cup I've ever had

I work in an office that provides free coffee. Some floors have Keurig machines, some have Flavia, some have fancier grind-and-brew machines. They range from acceptable (the grind-and-brew) to… less than acceptable (most Keurig) to illness-inducing (Flavia). I keep an aeropress in the office kitchen, and a decent burr grinder, and bring in my own beans—that can pretty reliably hit my standard for decent coffee. The down sides are the time—no matter how nice the ritual feels, sometimes I don’t have 15 minutes to make a cup of coffee—plus the noise and the fussiness of keeping the setup clean.

A year ago a friend told me about his new work with a revolutionary coffee business aiming to deliver excellent coffee for office setups. Cometeer is shipping K-cups that make coffee “as good as I can make at home”. Given what I make at home, that’s quite a claim! My first shipment showed up from them today; they’re apparently hitting a beta phase.

What you get

Cometeer boxes of pods

For $60, I got a box of 32 aluminum K-cups: four boxes, each holding eight pods. Unlike typical K-cups, these are pure aluminum shells—no plastic, so totally recyclable. Inside each is a frozen coffee extract, not the normal fine-ground coffee & filter of a K-cup. The whole thing comes Fedex Ground with a block of dry ice to keep it solid. That made for a great afternoon of homeschool science class—nice timing, Cometeer!

If you have a K-cup machine, presumably there’s some way to thaw this just a bit and then have the machine pierce and brew it; for my use I just peel open the pod, drop the frozen puck inside into a mug, and add eight ounces of 200 °F water.

How it tastes

Cometeer offers the pods in light, medium, or dark roasts, or a mix of the three. I ordered light roast; I’m going to drink it black, and most of what I brew and enjoy at home is East African beans, as lightly roasted as I can find. So far I’ve tried the Yirgacheffe. It’s far and away the best K-cup I’ve ever had. It’s very good coffee. If I’d paid $4 for this cup in Kendall Square, I’d be perfectly happy with it. The flavor is bright and fresh. If brewed normally, I’d say it was roasted within the last week or two, and ground today.

The texture of the coffee is thin. There’s clearly no sediment, no fine grounds that made it through the filter. On a spectrum from instant coffee through French Press to Turkish, this is right there with the instant. Even my coffees made from cold brew extract have more body than this—perhaps more oil makes it through into those?

That’s hardly fatal: this is a very good cup of coffee, better than I can get in my indulgent tech workplace. The flavor is great. It’s just thin. If you add milk to your coffee, I don’t think you’ll even notice.

I’m interested to see how the flavor evolves over the next month. The pods will stay in my freezer—it’ll take me much more than a month to get through 32 of them, given I’m mixing in other coffee sources.

Dark patterns

Cometeer only lets you sign up for a subscription right now: 32 pods every 1, 2, or 4 weeks. That’s way more than I need. I had to email them to get a link at which I could manage or cancel my subscription.

The right way to sell these is to sell them, and to offer a subscription for those who want it—even at a discount. Any business that lets you start a subscription over the web but requires interaction to cancel? That’s not someone I trust to do more business with.

Future uses

I don’t need 32 of these a month. Perhaps when offices reopen, something like that will appeal. I’d love to be able to bring two of these for a backpacking trip. I’d love to be able to keep a box of decaf in my chest freezer. I don’t drink decaf often, but it’s a pain to turn over a grinder to make decaf, and this would be a nice way to have some available.

This isn’t price-competitive with ordering single-origin Counter Choice or Blue Bottle coffee beans for delivery: those are $30.15 for 680 g = 44 cups of coffee, the way I brew it. That’s less than 70 ¢/cup for some of the best coffee you can get on Earth. Cometeer’s coming in at more than 180 ¢/cup. Yes, I had to buy and maintain a grinder and a kettle and an aeropress and filters… but given I have those sunk costs, this probably can’t be a regular home-use object for me.

Confusing ARC delays in Rust

Last month, I wrote a little about using a toy problem to explore Python’s async/await concurrency model. I’ve been meaning to learn Rust for ages, and this seemed a nice size to experiment with. I ported over that program to Rust—and the whole thing is up at Github for your enjoyment.

But I’ve run into an interesting problem: the version using the standard library’s ARC/Mutex concurrency is incredibly slow. A rewrite using the crossbeam library’s atomic data types is much faster—even though both should be built on the same primitives. Not only is the ARC/Mutex version slow, but it gets much slower over time—as if something’s leaking badly. I welcome insight into what I could be missing. After all, this is my very first Rust program; perhaps I’m using ARC in a very improper way.

Let’s look at some examples of how the standard ARC/Mutex code differs from crossbeam.


Initialization is pretty similar between them. The ARC/Mutex version imports some standard libraries, and sets up the struct that will be shared between the computer and inspector threads:

// ARC/Mutex version

use std::f64::consts::PI;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::{Duration, SystemTime};

const TARGET: u32 = 10;
const GUESS: u32 = 100;

struct Leibniz {
    x: f64,
    d: f64,
    ticks: u32,
    tocks: u32,
}
And the crossbeam version does similarly—but can’t have a central struct:

// crossbeam version

extern crate crossbeam;

use crossbeam::{atomic::AtomicCell, thread};
use std::f64::consts::PI;
use std::thread as sthread;
use std::time::{Duration, SystemTime};

const TARGET: u32 = 10;
const GUESS: u32 = 2500000;


The ARC/Mutex computer is as simple as you could ask for:

// ARC/Mutex version

fn computer(state: Arc<Mutex<Leibniz>>) {
    loop {
        let mut s = state.lock().unwrap();
        for _ in 1..s.ticks {
            s.x += 1.0 / s.d;
            s.d += 2.0;
            s.x -= 1.0 / s.d;
            s.d += 2.0;
        }
        s.tocks += 1;
    }
}

while the crossbeam computer has to mess around with loads and stores more:

// crossbeam version

fn computer(
    xr: &AtomicCell<f64>,
    dr: &AtomicCell<f64>,
    ticks: &AtomicCell<u32>,
    tocks: &AtomicCell<u32>,
) {
    loop {
        let mut x = xr.load();
        let mut d = dr.load();
        let ticks = ticks.load();
        for _ in 1..ticks {
            x += 1.0 / d;
            d += 2.0;
            x -= 1.0 / d;
            d += 2.0;
        }
        xr.store(x);
        dr.store(d);
        tocks.fetch_add(1);
    }
}


The same is true for the inspector. The ARC/Mutex version is clean to write:

// ARC/Mutex version

fn inspector(state: Arc<Mutex<Leibniz>>) {
    let mut old_d = 1.0;
    let now = SystemTime::now();
    loop {
        thread::sleep(Duration::from_secs(1));
        let mut s = state.lock().unwrap();
        if s.tocks <= TARGET {
            s.ticks /= 2;
        } else if s.tocks > TARGET {
            s.ticks = s.ticks + s.ticks / 10;
        }
        println!("{:?} {} {} {} {}", now.elapsed().unwrap(), s.ticks, s.tocks, s.d - old_d, PI - 4.0 * s.x);
        old_d = s.d;
        s.tocks = 0;
    }
}

while the crossbeam version has a third of its lines on loads or stores:

// crossbeam version

fn inspector(
    xr: &AtomicCell<f64>,
    dr: &AtomicCell<f64>,
    ticksr: &AtomicCell<u32>,
    tocksr: &AtomicCell<u32>,
) {
    let mut old_d = 1.0;
    let now = SystemTime::now();
    loop {
        sthread::sleep(Duration::from_secs(1));
        let x = xr.load();
        let d = dr.load();
        let tocks = tocksr.load();
        let ticks = ticksr.load();
        if tocks <= TARGET {
            ticksr.store(ticks / 2);
        } else if tocks > TARGET {
            ticksr.store(ticks + ticks / 10);
        }
        println!("{:?} {} {} {} {}", now.elapsed().unwrap(), ticks, tocks, d - old_d, PI - 4.0 * x);
        old_d = d;
        tocksr.store(0);
    }
}


The last bit of setup is different: ARC/Mutex requires us to clone the state ARC, and move one copy into a new thread:

// ARC/Mutex version

fn main() {
    println!("ARC std version");
    let state = Arc::new(Mutex::new(Leibniz {
        x: 0.0,
        d: 1.0,
        ticks: GUESS,
        tocks: 0,
    }));
    let state_i = state.clone();
    thread::spawn(move || computer(state));
    inspector(state_i);
}

while crossbeam has us use scoped threads, borrowing each of the atomic values read-only (though of course they can be stored):

// crossbeam version

fn main() {
    println!("Atomic crossbeam version");
    let x = AtomicCell::new(0.0);
    let d = AtomicCell::new(1.0);
    let ticks = AtomicCell::new(GUESS);
    let tocks = AtomicCell::new(0);

    thread::scope(|s| {
        s.spawn(|_| {
            computer(&x, &d, &ticks, &tocks);
        });
        inspector(&x, &d, &ticks, &tocks);
    })
    .unwrap();
}

Having to thread separate AtomicCell values through everywhere is a pain—and it’s not much relieved by putting them in a struct, because you still need extra loads and stores everywhere you access them. I’m consistently worried that I’ll get the ordering between atomic values wrong and cause either a deadlock or a logic error—even though these are almost one-way channels (tocks is incremented by computer and reset to 0 by inspector).

I’ve adjusted everything I can find in the ARC/Mutex version, including the initial and target constants. I am pretty sure I’m not supposed to be taking a reference to the ARC, but a clone. I’d love to find a set of profiling tools that would let me see what the ARC/Mutex version is spending its time on. I’d love to have advice from a competent Rustacean on how this code could or should look. Thanks!