Sniffen Packets

With a name like Sniffen, it's got to smell good.

A demonstration of asyncio in Python 3.6

This is an experiment in cooperating coroutines that converge on timing behavior. This is my first program using coroutines in Python; be kind.

The idea is to have a process doing some hard computational work, but about which we want regular progress updates. So we write the computation as the usual Leibniz series for approximating pi,

\[ 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \cdots = \frac{\pi}{4}, \]

and then write a separate coroutine to watch it and print out some running statistics. Because this is a tiny demo, they’ll communicate through global¹ variables. This “inspector” coroutine can wake up once per second to print out current progress. But since asyncio uses cooperative multitasking, we have a hard question: how often should the “computer” process pause?
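
Before tackling that, here’s the series as a plain synchronous loop, just to fix the arithmetic. This is a sketch; the coroutine version below uses the same update:

x = 0.0
d = 1
for _ in range(1000000):
	x += 1/d
	d += 2
	x -= 1/d
	d += 2
print(4*x)  # agrees with pi to roughly six decimal places after a million pairs of terms

The whole question is how to keep that inner loop tight while still letting other coroutines run.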

We could have the computer pause every time through the loop, but that costs a lot of performance: it means you can’t have a tight numeric loop. It’s much better to plan a few hundred thousand iterations, do those, and then check to see if there’s other work. But if we plan a whole second’s worth of work, we might come in just under the time for the inspector to run, then plan another second of work, and the inspector might end up running only every two seconds or worse.

Instead, we should plan to pause the computer about ten times every second. To do that, we build a little controller: the computer plans to run through ticks iterations of the tight computation loop. Every time it does so, it pauses for other work and increments tocks. When a whole second has passed, the inspector can compare tocks to target and plan how many ticks to run next time.

I’m told that this gets easier in Python 3.7, but so far this does seem to work. The Pythonista environment is a little weird—one long-running backend interpreter—so closing the event loop can get you in trouble.
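
For reference, on 3.7 and later the loop management at the bottom of the listing below should collapse to a single call:

asyncio.run(main())

which creates a fresh event loop, runs main() to completion, and closes the loop afterward.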

This prints one line per second, more or less. The last value on each line is an approximation of pi. It converges pretty quickly! The three values before that show the way the coroutines cooperate to converge on timing, which is the real point of what I’m exploring here. They are:

  • ticks: how many times did the inner loop of the computation process execute per yield?

  • tocks: how many times did the inner loop of the computation process yield per second?

  • d: how big have the parameters of the Leibniz process gotten? You can see performance collapse when Python switches math backends. I was worried it would get near the limits of 32-bit integers, but we’re nowhere close.

On my iPad, the system sawtooths from 16 to 10 tocks per line printed, and the lines continue to come at about 1 Hz. If I set the target to 1, of course, the lines get printed at < 0.5 Hz.
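
That sawtooth makes rough sense: halving ticks should roughly double tocks (overhead apparently keeps the observed jump to 16 rather than 20), and regrowing ticks at 10% per second takes

\[ \log_{1.1} 2 \approx 7.3 \]

seconds to undo a halving, so each tooth should span several printed lines.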

I’m not quite sure what’s going on in a few parts of this:

  • What would change if I made computer an async def instead of a generator-based coroutine, and used await asyncio.sleep(0) or similar instead of the bare yield? I tried it and saw no performance difference. But what’s the change in semantics between native coroutines and generator-based ones? (There’s a sketch of this variant after the list.)

  • What’s a reasonable way to kill off tasks after an exception has interrupted your event loop? Everything I’ve come up with leaves a task that’s been killed by signal (KeyboardInterrupt) with its exception unread. I’ve tried canceling them, then scheduling a 0.2 s pause. I’ve tried set_done(). I’ve tried closing the whole event loop and making a new one. All of those produce an exception. I’ve even tried canceling them and then calling run_until_complete on each of them, but that runs forever. (The shutdown sketch after the list is my current best guess.)
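
Here’s the first variant as a sketch; the name computer_native is mine, and the body is otherwise identical to computer() in the listing below. As far as I can tell the semantic difference is small here: a bare yield in a generator-based coroutine suspends the task for one turn of the event loop, and await asyncio.sleep(0) does the same thing explicitly.

async def computer_native():
	global d, x, ticks, tocks
	clock = 0
	while True:
		x += 1/d
		d += 2
		x -= 1/d
		d += 2
		if clock > ticks:
			tocks += 1
			clock = 0
			# the native-coroutine spelling of "let someone else run"
			await asyncio.sleep(0)
		else:
			clock += 1

Swapping computer_native() into main() in place of computer() behaved the same for me.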
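
And for the second question, here’s the standard 3.6-era shutdown dance, as a replacement for the last four lines of the listing. It’s a sketch I haven’t verified under Pythonista’s long-running interpreter; the key part is awaiting the cancelled tasks once more with return_exceptions=True, which retrieves their exceptions and should quiet the “task exception was never retrieved” warnings:

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
	loop.run_until_complete(main())
except KeyboardInterrupt:
	pending = asyncio.Task.all_tasks(loop)
	for task in pending:
		task.cancel()
	# run the loop once more so each task can process its CancelledError
	loop.run_until_complete(asyncio.gather(*pending, return_exceptions=True))
finally:
	loop.close()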

Anyway, the coroutine paradigm is beautiful and I look forward to using it more. That last bit about killing tasks seems unique to the Pythonista environment; in most places the exception will either end the program, or you’ll intend to resume the event loop and may even have handled the exception inside it.

import asyncio

limit = 10 ** -6  # not used yet; presumably the convergence target for the TODO below

# starting conditions for Leibniz's approximation of pi
x = 0
d = 1

# a starting guess at how many runs of the computer() inner loop
# will work out to `target` yields per second
ticks = 100000

# how many yields actually happened?
tocks = 0

# how many yields should computer() do per run of inspector()?
# that is, per second?
target = 10


async def inspector():
	global ticks,tocks
	while True:
		await asyncio.sleep(1)
		if tocks < target:
			# too few yields this second: chunks are too big, so shrink fast
			ticks = max(1, ticks // 2)
		elif tocks == target:
			pass
		else:
			# too many yields: chunks are too small, so grow gently
			ticks = int(ticks * 1.1) + 1
		print(ticks,tocks,d,4*x)
		tocks=0

## TODO: set a target number of digits, and when that's stable, exit cleanly.

@asyncio.coroutine
def computer():
	global d,x,ticks,tocks
	clock=0
	while True:
		x += 1/d
		d += 2
		x -= 1/d
		d += 2
		if clock > ticks:
			tocks += 1
			clock=0
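			# a bare yield suspends this generator-based coroutine
			# and gives the event loop one turn to run other tasks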
			yield
		else:
			clock+=1

async def cleanup():
	await asyncio.sleep(0.2)

# This is an excessive amount of work on cleanup.  It's a mix of an attempt to be
# careful, cancelling exactly those tasks that need to go---this didn't
# work---and a simple process of making a new event loop and closing off
# the old one.

# I'd love to understand more about where these "task exception was never
# retrieved" errors come from, and how to run the task long enough to process
# the exception.

async def main():
	try:
		c=computer()
		i=inspector()
		await asyncio.gather(c,i)
	finally:
		#c.cancel()
		#i.cancel()
		await cleanup()

asyncio.set_event_loop(asyncio.new_event_loop())
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.close()

Incidentally, this post was written on an iPad: the program was written and tested in Pythonista, the post was edited with Textastic and managed through the git client Working Copy, and the server and static site generator were manipulated with Prompt.


  1. If this offends you, you may refer to them as process-local variables.↩︎

tech