Because of its convenience, we've licensed GPC and are using it via the Polygon library in Python (GPC is free for non-commercial use, but since this is for work, we had to license it; Polygon is LGPL). In particular, the Python library has a very simple method for generating tiles of the form we need: Polygon.Util.tile(). Sadly, it is not terribly efficient. What do I mean?
Say you wanted to pull one tile from a larger polygon region. Using the Polygon library, you would just intersect your desired region with the larger polygon. That's exactly what you want, and for GPC, it's about as optimal as you can get (there are other libraries that offer computationally more efficient polygon intersections, but I've not found any that are as easy to use as GPC). But what if you wanted many tiles? The algorithm used by Polygon.Util.tile() generates all of the tiles that you want, and performs a separate intersection against the full polygon for each and every one of them. I know what you are thinking: if the individual operation is optimal, why wouldn't applying the same operation over the larger space be optimal? Repeated work.
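To make that concrete, here's roughly what pulling a single tile looks like (a minimal sketch: the square region and tile coordinates are made up for illustration, but the `&` intersection operator is the Polygon library's own):

```python
from Polygon import Polygon

# A large region to tile (a simple square stands in for a real polygon).
region = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])

# One tile is just one intersection; `&` is GPC's polygon clipping.
tile = region & Polygon([(5, 1), (6, 1), (6, 2), (5, 2)])

# Polygon.Util.tile() effectively repeats this for every tile, so each
# intersection is performed against the *full* original polygon.
```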
Say we wanted to pull tiles 6a, 6b, and 6c out of the polygon below:
If we were to perform the intersections directly, we would have to trim to the right of column 5 three times, trim to the left of column 7 three times, and then partition column 6 six times. But what if we first performed a single "trim to the right of column 5" followed by a single "trim to the left of column 7"? We would add 4 vertices to the polygon (which we would have needed to add anyway), but by trimming away the left side first, we remove 8 vertices that we never operate on again, and trimming the right side removes another vertex. Over the sequence of operations, we reduce the number of vertices we have to compute on, and we aren't re-performing the same line intersections over and over.
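Here is that idea in code: cut the shared column strip out once, then cut the individual tiles from the much smaller strip. This is only a sketch; the rect() helper and the unit-sized tile coordinates are assumptions of mine, and only the `&` intersection comes from the Polygon library:

```python
from Polygon import Polygon

def rect(xmin, ymin, xmax, ymax):
    """Axis-aligned rectangle as a Polygon (helper for this sketch)."""
    return Polygon([(xmin, ymin), (xmax, ymin), (xmax, ymax), (xmin, ymax)])

def tiles_naive(poly, col, rows):
    # Every tile pays for every vertex of the full polygon.
    return [poly & rect(col, r, col + 1, r + 1) for r in rows]

def tiles_trimmed(poly, col, rows):
    # Two big trims happen once: everything left and right of the column
    # is discarded a single time, instead of once per tile.
    strip = poly & rect(col, min(rows), col + 1, max(rows) + 1)
    return [strip & rect(col, r, col + 1, r + 1) for r in rows]
```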
In this simple example, we don't gain much by partitioning the space, because the shape is so simple. But by using a very simple algorithm that counts the number of points in each row and column to determine a good row/column at which to cut the space, we've seen huge returns on our programming investment. In our case, relatively small polygons of 20-50k points have had their processing time drop from 30-45 seconds to 2-3 seconds. Our moderately sized polygons of 150-300k points have gone from 15-25 minutes of computation down to 15-25 seconds. The killer polygon for us had just over a million points, and it had been running for over 6 hours before I rewrote the tiling using a simple grid-based BSP scheme. We killed the process, re-ran everything with the updated BSP version, and the entire run completed in under 30 minutes.
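The gist below has the real implementation; the following is just a sketch of the shape of the algorithm, with a vertex-median heuristic standing in for the row/column point counting described above (the function names, the rect() helper, and the cut-selection details are mine, not the gist's):

```python
from Polygon import Polygon

def rect(xmin, ymin, xmax, ymax):
    return Polygon([(xmin, ymin), (xmax, ymin), (xmax, ymax), (xmin, ymax)])

def bsp_tile(poly, xs, ys):
    """Recursively split poly along tile boundaries (xs and ys are the
    sorted boundary coordinates), cutting near the bulk of the vertices
    so each half sheds vertices as early as possible. Returns one Polygon
    per tile (not necessarily in row-major order)."""
    if len(xs) == 2 and len(ys) == 2:          # a single tile remains
        return [poly & rect(xs[0], ys[0], xs[1], ys[1])]

    points = [pt for i in range(len(poly)) for pt in poly[i]]
    if not points:                             # nothing here: all tiles empty
        return [Polygon()] * ((len(xs) - 1) * (len(ys) - 1))

    # Pick the interior boundary closest to the vertex median on either
    # axis, so roughly half the vertices fall on each side of the cut.
    med_x = sorted(px for px, _ in points)[len(points) // 2]
    med_y = sorted(py for _, py in points)[len(points) // 2]
    cuts = [('x', c) for c in xs[1:-1]] + [('y', c) for c in ys[1:-1]]
    axis, cut = min(cuts, key=lambda ac: abs(ac[1] - (med_x if ac[0] == 'x' else med_y)))

    if axis == 'x':
        i = xs.index(cut)
        left = poly & rect(xs[0], ys[0], cut, ys[-1])
        right = poly & rect(cut, ys[0], xs[-1], ys[-1])
        return bsp_tile(left, xs[:i + 1], ys) + bsp_tile(right, xs[i:], ys)
    else:
        j = ys.index(cut)
        bottom = poly & rect(xs[0], ys[0], xs[-1], cut)
        top = poly & rect(xs[0], cut, xs[-1], ys[-1])
        return bsp_tile(bottom, xs, ys[:j + 1]) + bsp_tile(top, xs, ys[j:])
```

Each cut costs one extra pair of intersections, but every vertex dropped at a cut is never touched again by any of the intersections beneath it, which is where the hours-to-minutes win comes from.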
I have emailed the author of the Polygon library, so hopefully everyone will get a faster tile() for free. If you can't wait, feel free to grab the code from the gist here: http://gist.github.com/560298
ETA: added a link to the Binary Space Partitioning page at wikipedia.