```python
# known incorrect lightweight lock / critical section
import sys

ci = sys.getcheckinterval()
sys.setcheckinterval(0)
try:
    pass  # do stuff
finally:
    sys.setcheckinterval(ci)
```
A cursory reading of the interactive Python help below would suggest that inside the try block, no other Python code should be able to execute, thus offering a lightweight way of making race conditions impossible.
```
Help on built-in function setcheckinterval in module sys:

setcheckinterval(...)
    setcheckinterval(n)

    Tell the Python interpreter to check for asynchronous events every
    n instructions.  This also affects how often thread switches occur.
```

According to the basic docs, calling sys.setcheckinterval(0) might prevent the GIL from being released, and offer us a lightweight critical section. It doesn't. Running the following code...
```python
import sys
import threading

counter = 0

def foo():
    global counter
    ci = sys.getcheckinterval()
    sys.setcheckinterval(0)
    try:
        for i in xrange(10000):
            counter += 1
    finally:
        sys.setcheckinterval(ci)

threads = [threading.Thread(target=foo) for i in xrange(10)]
for i in threads:
    i.setDaemon(1)
    i.start()

while threads:
    threads.pop().join()

assert counter == 100000, counter
```

...I get AssertionErrors raised with values typically in the range of 30k-60k. Why? In the case of ints and other immutable objects, counter += 1 is really a shorthand form for counter = counter + 1. When threads switch, there is enough time for multiple threads to read the same counter value (40-70% of the time here). But why are there thread swaps in the first place? The full sys.setcheckinterval documentation tells the story:
```
sys.setcheckinterval(interval)

    Set the interpreter's "check interval". This integer value determines
    how often the interpreter checks for periodic things such as thread
    switches and signal handlers. The default is 100, meaning the check is
    performed every 100 Python virtual instructions. Setting it to a
    larger value may increase performance for programs using threads.
    Setting it to a value <= 0 checks every virtual instruction,
    maximizing responsiveness as well as overhead.
```

Okay. That doesn't work because our assumptions about sys.setcheckinterval() on boundary conditions were wrong. But maybe we can set the check interval high enough that the threads never swap?
```python
import os
import sys
import threading

counter = 0
each = 10000
data = os.urandom(64).encode('zlib')
oci = sys.getcheckinterval()

def foo():
    global counter
    ci = sys.getcheckinterval()
    # using sys.maxint fails on some 64 bit versions
    sys.setcheckinterval(2**31-1)
    try:
        for i in xrange(each):
            # the [0] keeps counter an int; the decompression is the point
            counter += (1, data.decode('zlib'))[0]
    finally:
        sys.setcheckinterval(ci)

threads = [threading.Thread(target=foo) for i in xrange(10)]
for i in threads:
    i.setDaemon(1)
    i.start()

while threads:
    threads.pop().join()

assert counter == 10 * each and oci == sys.getcheckinterval(), \
    (counter, 10*each, sys.getcheckinterval())
```
When running this, I get assertion errors showing the counter typically in the range 10050 to 10060, and showing that the check interval is still 2**31-1. That means that our new counter += (1, data.decode('zlib'))[0] line, which can be expanded to counter = counter + (1, data.decode('zlib'))[0], reads the same counter value as another thread almost 90% of the time. There are more thread swaps with the huge check interval than with a check interval that says it will check and swap at every opcode!
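Why does a switch mid-increment lose updates at all? The augmented assignment compiles to a separate load, add, and store, and a thread switch between any two of those steps makes one thread overwrite another's work. A quick sketch using the standard dis module (shown on Python 3, where the opcode names differ slightly from Python 2, but the shape is the same):

```python
import dis
import io

counter = 0

def incr():
    global counter
    counter += 1  # not atomic: load, add, and store are separate bytecodes

# Capture the disassembly to show the separate steps.
buf = io.StringIO()
dis.dis(incr, file=buf)
print(buf.getvalue())
```

The output shows a LOAD_GLOBAL of counter, an add, and a STORE_GLOBAL back into counter as distinct instructions; nothing guarantees they execute as a unit.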
Obviously, the magic line is counter += (1, data.decode('zlib'))[0]. The built-in zlib library has an interesting feature: whenever it compresses or decompresses data, it releases the GIL. This is useful for threaded programming, as it allows other threads to execute without potentially blocking for a very long time, even if the check interval has not passed (which we were trying to prevent with our sys.setcheckinterval() calls). If you are using zlib (directly or via gzip, etc.), this can allow you to use multiple processors for compression/decompression, but it can also result in race conditions, as I've shown.
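The Python 2 code above won't run on a modern interpreter (xrange and str.encode('zlib') are gone), but the same race reproduces on Python 3. The following is a rough translation, not the original benchmark, so your exact counts will vary:

```python
import os
import threading
import zlib

counter = 0
each = 10000
data = zlib.compress(os.urandom(64))

def foo():
    global counter
    for _ in range(each):
        # zlib.decompress releases the GIL, inviting a switch mid-increment
        counter += (1, zlib.decompress(data))[0]

threads = [threading.Thread(target=foo) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # typically less than 10 * each
```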
In the standard library, there are a handful of C extension modules that similarly release the GIL to let other threads run, including (but not limited to) some of my favorites: bz2, select, socket, and time. Not all of the operations in those modules will force a thread switch the way zlib compression does, but you also can't really rely on any module not to swap threads on you.
In Python 3.x, sys.getcheckinterval()/sys.setcheckinterval() have been deprecated in favor of sys.getswitchinterval()/sys.setswitchinterval(), the latter of which takes the desired number of seconds between thread switches. Don't let this fool you: it has exactly the same issues as the Python 2.3+ sys.getcheckinterval()/sys.setcheckinterval() calls just discussed.
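For reference, a minimal sketch of the Python 3 replacement API (the ten-second value is illustrative, not a recommendation):

```python
import sys

original = sys.getswitchinterval()  # 0.005 seconds by default
# Ask for roughly ten seconds between forced thread switches...
sys.setswitchinterval(10.0)
try:
    pass  # ...critical work would go here, but it is still NOT atomic:
          # C extensions such as zlib can release the GIL at any time.
finally:
    sys.setswitchinterval(original)  # restore the previous behavior
```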
What are some lessons we can learn here?
- Always check before using anyone's quick hacks.
- If you want to make your system more or less responsive, use sys.setcheckinterval().
- If you want to execute code atomically, use a real lock or semaphore available in the thread or threading modules.
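For the counter example, the real fix is only a few lines (shown here in Python 3 for something runnable today; the Python 2 version differs only in spelling):

```python
import threading

counter = 0
lock = threading.Lock()

def foo():
    global counter
    for _ in range(10000):
        with lock:  # the lock makes the read-modify-write atomic
            counter += 1

threads = [threading.Thread(target=foo) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 100000, counter  # passes every time
```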
If you want to see more posts like this, you can buy my book, Redis in Action from Manning Publications today!