2017-04-29

Small update on datastructure benchmarks




I hadn't written a skiplist yet. So here's the same graph but with a randomized skiplist added in... Notice that it's pretty horrible anyway.

2017-04-20

Benchmark of all major dictionary structures

I've been writing basically every major datastructure, one at a time.
I wrote up heaps a little while ago: http://www.blog.computersarehard.net/2017/02/a-better-heap.html
I've now finished writing and benchmarking all the common dictionary datastructures.
Note that at every point in this graph the same amount of work is being done. At each point we put "test_size" random elements into the datastructure, and then remove them. We do this 134217728/test_size times, and time the *total*. Thus we're always putting in and taking out 134217728 elements.

As a result, this graph shows how the size of a datastructure impacts its performance. Note that the graph is logarithmic on the X axis, so it's not completely dominated by the larger tests.
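
To make the methodology concrete, here's a minimal sketch of that test loop. "Dict", insert(), remove(), and the RNG seed are placeholder names for illustration, not the actual interface from my repo:

  #include <chrono>
  #include <cstdint>
  #include <random>
  #include <vector>

  // A minimal sketch of the benchmark loop described above. "Dict" stands in
  // for whichever dictionary structure is under test.
  template <typename Dict>
  double time_test(uint64_t test_size) {
    const uint64_t total = 134217728;           // elements inserted+removed overall
    const uint64_t rounds = total / test_size;  // so every point does the same total work
    std::mt19937_64 rng(42);
    auto start = std::chrono::steady_clock::now();
    for (uint64_t r = 0; r < rounds; r++) {
      Dict d;
      std::vector<uint64_t> keys(test_size);
      for (auto &k : keys) { k = rng(); d.insert(k); }
      for (auto k : keys) { d.remove(k); }
    }
    std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;
    return elapsed.count();                     // total time, not per-op time
  }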


First, let's talk about what each of these algorithms *is*. As a note, all of these algorithms resize automatically, both up and down.

  • AVL: This is a relatively standard AVL tree with external allocation. I was unable to find an implementation that was correct and balanced, so I rederived some details myself. 
    • O(log(n)) all operations
  • RedBlack: This is a highly optimized red-black tree implementation (much more time was spent on it than on AVL; because AVL outperformed it in my earlier testing, I assumed I must have made a mistake).
    • O(log(n)) all operations
  • BTree: This is just a standard btree implementation, tuned with a fairly efficient arity
    • O(log(n)) all operations
  • Hashtable: This is a standard open chaining hashtable, using dynamically sized (doubling) arrays for the chaining
    • O(N) worst case, O(1) amortized
  • OCHashTable: This is a standard open chaining hashtable, using an externally allocated linked-list for the chaining
    • O(N) worst case, O(1) amortized
  • AVLHashTable: This is a standard open chaining hashtable, using an externally allocated AVL tree for open chaining
    • O(N) worst case, O(1) amortized
  • BoundedHashTable: This is a giant pile of tricks. It uses an array-like datastructure that can be zeroed in constant time (see the sketch after this list) and an AVL tree for open chaining. Rehashing on resize is distributed across operations, so it rehashes a couple of elements for every op, using a linked list to find each element.
    • O(log(N)) all operations
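
As promised above, here's one standard way to get an array you can "zero" in constant time: tag every slot with the generation it was written in, and clearing just bumps a global generation counter. This is a sketch of the general idea, not necessarily the exact mechanism my BoundedHashTable uses:

  #include <cstdint>
  #include <vector>

  // An array that can be cleared in O(1): a slot only counts as "set" if its
  // generation matches the current one, so clear() just increments the counter.
  template <typename T>
  class ClearableArray {
    struct Slot { T value; uint64_t gen = 0; };
    std::vector<Slot> slots_;
    uint64_t gen_ = 1;  // 0 means "never written"
   public:
    explicit ClearableArray(size_t n) : slots_(n) {}
    void set(size_t i, const T &v) { slots_[i].value = v; slots_[i].gen = gen_; }
    bool is_set(size_t i) const { return slots_[i].gen == gen_; }
    const T &get(size_t i) const { return slots_[i].value; }  // only meaningful if is_set(i)
    void clear() { gen_++; }  // O(1): every old entry is now stale
  };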

Algorithms left out
  • BtreeHashTable: This performed so poorly I pulled it out of the testing. The reason is obvious: while a btree is very fast, it's not good for small datasets, and the chains in a hashtable are always small.

Surprising results:

You may notice that NONE of these algorithms are even *close* to linear in practice. If every operation is amortized to constant time, as in our hashtable algorithms, the line should be completely *flat*. No more work should be done just because the datastructure contains more data. This is even more true for the bounded-hashtable, where no operation is *ever* linear; the only reason it's log and not constant even on a per-operation basis is the AVL tree used for chaining.
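
As a quick reminder of why a flat line is the theoretical expectation, the usual doubling/rehashing argument goes like this:

  cost of N inserts <= N + (1 + 2 + 4 + ... + C) < N + 2C < N + 4N = 5N,   where C is the final capacity and C < 2N

So the total work is O(N) and the amortized cost per insert is O(1), independent of how large the structure gets.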

I spent a while trying to find non-linearities in my testing methodology but came up with nothing. Remember, the X-axis is logarithmic. Isn't that odd? If that's throwing you off, here's what it looks like graphed linearly (my data is logarithmic in nature, so the graph is pretty ugly). Whatever that is... it's not linear.



So, what's going on? My best guess is that this is the computer's fault: caching layers, memory management, etc. For example, mmap() probably takes longer and longer to get memory for malloc. I've yet to get detailed enough information to confirm this theory though.

Conclusion
Well... aside from the nonlinearity described above, OCHashTable is the clear overall winner for average runtime at any scale, no big surprise there. BTree is the clear winner among bounded algorithms at large sizes. AVL and RedBlack are about equivalent at small sizes... but given that in my previous testing AVL came out a little faster, its lookups should theoretically be a little faster, the implementation tested here is less optimized than the red-black one, and it's an order of magnitude simpler to code, AVL clearly beats RedBlack (as is now generally known).

This is pretty much what we would expect. I had high hopes for BoundedHashTable, as *theoretically* it is very fast... but the constant factors seem to blow it out of the water, and it still shows up as very much non-linear. This algorithm can't just realloc its arrays (the new space would have to be zeroed, which is linear), so it is constantly allocating new, differently sized arrays. I suspect this, along with the constant factors due to the algorithmic complexity, is the cause of the poor performance.

As always, full source is available here: https://github.com/multilinear/datastructures_C--11

2017-02-14

Sort tests

Comparing sorting algorithms isn't terribly original, but I didn't think comparing AVL trees to RB trees was either, so I thought I'd do it anyway and see what the real results were.

Quick refresher. Most computer scientists know big O notation, but we tend to forget big Omega and big Theta. Big O is an upper bound, big Omega is the analogous lower bound, and big Theta is used when the two are the same. Got that? I often hear people state a "best case" as "big O of", but I want to promulgate correct usage.
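
In symbols, the standard definitions are roughly (give or take the exact quantifier details):

  f(n) is O(g(n))      iff there exist c > 0 and n_0 such that f(n) <= c*g(n) for all n >= n_0
  f(n) is \Omega(g(n)) iff there exist c > 0 and n_0 such that f(n) >= c*g(n) for all n >= n_0
  f(n) is \Theta(g(n)) iff f(n) is both O(g(n)) and \Omega(g(n))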

In testing, quicksort was of course the fastest, mergesort next, and heapsort last. Though I wrote selection and bubble sort, as they have their own uses, I didn't even consider \Omega(N^2) algorithms for speed testing. Just as a reminder, before we look at the results here are the bounds and properties for each:

  • Quick Sort
    • O(N^2)
    • \Omega(Nlog(N))
    • In place, not stable
  • Merge Sort
    • \Theta(Nlog(N))
    • Not in place, stable
  • Heap Sort 
    • \Theta(Nlog(N))
    • In place, not stable



But, let's talk some real numbers. I did two tests: one with a 10000000 element sort run 100 times, and one with a 100 element sort run 10000000 times. I'll call the first "large" sorts, and the second "small". I did a little more testing to confirm that the results are relatively stable within those intuitive categories.

  • Quick Sort:
    • Fastest in both cases 
  • Merge Sort
    • Large test: 16% slower
    • Small test: 9% slower
  • Heap Sort
    • Large test: 122% slower
    • Small test: 19% slower

Okay, so these were basically the results we all expected right? There are a couple of interesting details though.

First, because it just jumps out at you, what the heck is going on with heapsort? It certainly does more operations than the other two, but that wouldn't account for the difference between small and large. My guess is that as the heap spreads out, basically every lookup in the array is a cache miss; this is what bheap was attempting to improve over a normal heap algorithm, but its constant factors came out even worse.

Now, let's talk about the two algorithms whose speed doesn't immediately knock them out of the running. There are two commonly cited reasons for using Quick Sort over Merge Sort. The first is that it's in place... I did some further testing, and on my machine (a modern linux distro), with a clean heap, doing the allocation for merge sort only adds another 1% overhead in both the small and large cases. Admittedly, since we alloc and free the same size over and over again we're using malloc like a slab allocator, but then that's also the point... allocation speed can be worked around. The second reason is that quicksort has slightly better constant factors. Here I've shown that "slightly" means ~9-16%. If moves were expensive this might go up a little, but if moves are that expensive you probably shouldn't be directly sorting the data anyway.
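
To illustrate the "allocation can be worked around" point, here's a hedged sketch of a merge sort that takes a caller-provided scratch buffer, so the allocation happens once no matter how many times you sort. This is just the shape of the idea, not the implementation I benchmarked:

  #include <cstddef>
  #include <vector>

  // Merge sort that merges into a caller-provided scratch buffer, so repeated
  // sorts can reuse one allocation. Stable: ties take the left element first.
  void merge_sort(int *a, size_t n, int *scratch) {
    if (n < 2) return;
    size_t mid = n / 2;
    merge_sort(a, mid, scratch);
    merge_sort(a + mid, n - mid, scratch);
    size_t i = 0, j = mid, k = 0;
    while (i < mid && j < n) scratch[k++] = (a[j] < a[i]) ? a[j++] : a[i++];
    while (i < mid) scratch[k++] = a[i++];
    while (j < n)   scratch[k++] = a[j++];
    for (size_t t = 0; t < n; t++) a[t] = scratch[t];  // copy the merged run back
  }

  // Usage: allocate the scratch space once, then sort as many times as you like.
  //   std::vector<int> scratch(data.size());
  //   merge_sort(data.data(), data.size(), scratch.data());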

Now consider that if you use quicksort, your sort will sometimes take N^2 time. That's the sort of thing that causes stutters every few seconds or minutes in a videogame, a network stack, etc. 10%-15% is below what's often considered a "user noticeable" speed difference (that line is usually drawn around 20%), but users will almost certainly notice the stutter when it takes 100% longer that one time.
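
For a concrete picture of that worst case, here's a deliberately naive quicksort with a last-element pivot (not necessarily how any particular library, or my own code, picks pivots). Feed it an already-sorted array and every partition peels off one element, giving ~N^2/2 comparisons:

  #include <cstddef>
  #include <utility>

  // Naive quicksort (Lomuto partition, last element as pivot), purely to
  // illustrate the N^2 case. On sorted input every partition is maximally
  // lopsided, so the work is quadratic and the recursion is N deep.
  void naive_quicksort(int *a, size_t n) {
    if (n < 2) return;
    int pivot = a[n - 1];                       // naive pivot choice
    size_t store = 0;
    for (size_t i = 0; i + 1 < n; i++)
      if (a[i] < pivot) std::swap(a[store++], a[i]);
    std::swap(a[store], a[n - 1]);              // pivot into its final position
    naive_quicksort(a, store);                  // elements less than the pivot
    naive_quicksort(a + store + 1, n - store - 1);  // elements >= the pivot
  }

Real implementations choose better pivots, but for any deterministic scheme some adversarial input still exists.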

Conclusion:
Following the philosophy I keep pushing, Merge Sort is probably a better default sort algorithm than Quick Sort. With modern mallocs like tcmalloc, allocation time becomes less relevant even with a "dirty" heap. In highly optimized applications dynamic allocation itself is often avoided (since it can cause occasional delays as well); in such cases worst-case is almost always the most critical factor, and it's worth the effort to set the RAM aside, so being "in-place" isn't that critical.

Eventually I'd really like to microbenchmark some of the algorithms I've been testing so as to actually measure the near-worst-case operation. For now all I have is practical experience and theoretical bounds with which to demonstrate it to others.

Further work:
I'm currently playing with hashtables as well, continuing the tree comparison testing. Of course the hashtable is much faster than my best tree, but I want to pursue some solutions to the hashtable worst-case problems and see how those fare as well.

2016-10-15

BHeap algorithm

I started a personal crusade some time ago against selecting algorithms based on average time while ignoring worst-case runtimes. I've posted about this here before:

Several years ago I was toying with heaps. The normal model for a heap is that the average runtime is O(log(n)) per operation, and the worst case is the same. While theoretically true, in practice you rarely know the size of the data you are working with beforehand, and if you are ever wrong you have to allocate more space. Since heaps are by default stored flat in an array, this means *copying* the entire heap into the new, larger array. That operation is in fact O(n). Thus, in practice most uses of heaps are actually worst-case O(n).
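
Concretely, the push path of an array-backed heap looks something like this (a sketch, not my exact implementation); the hidden O(n) lives in the resize, not the sift-up:

  #include <cstddef>
  #include <utility>
  #include <vector>

  // Sketch of an array-backed binary min-heap push. The sift-up is O(log n),
  // but when the vector is full, push_back reallocates and copies every
  // element, which is O(n) for that one unlucky operation.
  struct ArrayHeap {
    std::vector<int> a;  // grows geometrically, copying the whole heap on resize
    void push(int v) {
      a.push_back(v);                            // may reallocate: the O(n) case
      size_t i = a.size() - 1;
      while (i > 0 && a[(i - 1) / 2] > a[i]) {   // sift up toward the root
        std::swap(a[(i - 1) / 2], a[i]);
        i = (i - 1) / 2;
      }
    }
  };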

Well... that's kind of horrible. So, I tried implementing one as a literal in-memory tree structure instead of an array. I called this a "bounded" heap (since it has a stricter bound). This gets a true worst-case O(log(n)) (assuming allocation time for new nodes is bounded). Unfortunately the performance is abysmal. We're talking 5 or 6 times worse average case, making it a pretty hard sell to convince anyone to use such an algorithm.

So, I got an idea: what if we use the ideas of a Btree in a heap? That is, allocate chunks of memory and store pieces of the heap in them. At the time I wrote up an outline for an algorithm I call a Bheap (creatively), but I never got around to implementing it.

I finally got it working and benchmarked it recently. Here's an outline for the Bheap algorithm. If you want full details you can just read my implementation, linked at the bottom:

Bheap algorithm:

Let's define a "Node". Each node contains an array. The first half of the array is a heap of data, exactly as normal. The second half is still laid out like a heap, but it's an indirection to other "Nodes", that is, sub-heaps. So, in principle it's just one big heap, but broken up into chunks.

But, there's one catch. If we did this naively we would fill the first node with a bunch of elements, then we'd allocate its first child node and put one element there, then we'd allocate the second and put one element there. That's a total waste of memory (wasting approximately 1/arity of the memory it uses). So, we modify things to fill the already-allocated node before creating a new one... Heaps don't depend on much more than the heap ordering, so nothing is significantly changed.

There's only one more catch. Once we fill the last node we can allocate at a given level, we need to start filling the next level. As an optimization, instead of walking down from the root we simply start making our next child below the current tail. This is an idea I took from my first bounded heap algorithm. To make this work we fill nodes going left, then going right. There are some intricacies to making that work in practice; see the code for details.
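
Here's roughly what a node looks like in that scheme. This is a simplified sketch of the layout (the arity and field names are arbitrary, and I've split the data and pointer halves into two arrays for clarity); see the repo for the real thing:

  #include <cstddef>

  // Sketch of a Bheap node: one allocation holds a small heap of values plus
  // child-node pointers laid out heap-style, so the whole structure behaves
  // like one big heap broken into chunks.
  constexpr size_t ARITY = 32;            // arbitrary here; the real code tunes this

  struct BheapNode {
    int        values[ARITY];             // a normal little heap of data
    size_t     used = 0;                  // how many value slots are filled
    BheapNode *children[ARITY] = {nullptr};  // "indirect" heap entries: sub-heaps
    BheapNode *parent = nullptr;          // needed to sift values across node boundaries
  };
  // Within a node, the children of values[i] sit at values[2*i+1] and
  // values[2*i+2], exactly like a flat heap; roughly speaking, once an index
  // would walk off the end of values[], it continues into a child node instead.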

Predicted Performance:

This algorithm has 2 neat advantages over a normal heap
1) It does exactly what I planned, allocating a small chunk of data at a time and never having to do a linear-time operation... yet it's quite memory efficient. It uses ~2x the memory of a normal heap, due to the pointer indirection, and wastes at worst 1 node's worth of space... not bad.
2) It should get better locality than a normal heap. One downside of a normal heap is that a node's children are located at 2i+1 and 2i+2. That means that after the first few elements, all operations are non-local as far as caching goes. This algorithm keeps resetting that every time we go to a new node, so it should perform better cache-wise.

Actual Performance:

Here come the charts and graphs... but long story short: on my benchmark, with the best possible node-size the Bheap is ~30% slower than a normal heap, and ~4 times faster than the bounded heap. This is actually pretty cool. We've brought the *worst* case down to O(log(n)) with only a 30% loss in practical average-case runtime.

I graphed the HeapTime for a whole lot of points just to give an idea of what the variance looks like (lazy man's statistics, I know), but the above chart gives a pretty good clue of where the overriding factors are. In particular it looks like past ~20 elements or so there are no more gains for larger node sizes, and constant factors related to the extra Bheap logic become dominant.

I've left out the BoundedHeap data because it's just not interesting: it varies from 13 to 15 seconds, and that's all there is to know.
So, final roundup looks like:
BoundedHeap: ~14.0s
Bheap: 3.2s
Heap: 2.4s

Analyzing this a bit more, it seems that the caching gains aren't worth all the extra conditionals necessary for the extra logic. That's not too surprising; we still miss cache pretty frequently, just not quite as often.

Also fascinating: even at a node-size of 2 this algorithm outperforms the bounded heap by a factor of ~3.2... That seems wrong. After digging into the code, though, I realized that the bounded-heap implementation moves around NODES, while the Bheap implementation moves around DATA. These tests were done using an "int" as the data, on a 64 bit x86 arch under linux... i.e. 32 bit ints and 64 bit pointers. Unsurprisingly, when you think about it, swapping two 4 byte ints is faster than swapping the two 64 bit child pointers and one 64 bit parent pointer around. That's an interesting lesson in my mind.

If anyone is aware of an algorithm like my Bheap that predates my implementation, I'd love to know about it. Given that the average runtime is still worse than a normal heap's, I don't expect anyone to adopt this algorithm anytime soon... but being only 30% worse, it probably would be worth it for a lot of applications. Heaps are usually used for maintaining a "small" list of items.

Note that this work is completely obviated by careful memory management. The dumb solution is to leave space after your array and allocate a page there if needed, but this is quite annoying to implement and gets in the way. Approach #2, though, is more practical: a modern malloc will allocate entire pages for large memory requests, so realloc and the kernel can intelligently re-use those pages, even if they must move the allocation. This means that in practice the Bheap algorithm is likely never worth using in the presence of a paging system, large virtual memory space, and good libraries.

Final notes:
- My Heap implementation uses a standard doubling algorithm. So, if you store pointers in your heap (as is common) the worst-case memory overhead is basically identical to the Bheap.
- I know skiplists are sometimes used in place of heaps in speed-critical applications. I've yet to implement one, or analyze its theoretical or real-world runtimes. I understand most operations are log(n) bounded, but I'm unclear on the advantages over a heap.
- One other approach to get similar theoretical bounds is to use a normal heap implementation on top of an array-list implementation. Comparing such an approach seems like a good direction for continuing research.
- My implementations and test harness can be found in my git repo here:

2015-03-13

Monitoring: update

Prometheus appears to actually exist now, and is coming along pretty well.
https://github.com/prometheus/prometheus

Glancing through it, it appears to hit the bulk of my points. It's still lacking a permanent DB for storage, so it's probably not deployable for most folks yet, but in general it looks like they are doing things basically right.

So, there's no answer out there that I think solves the problem *yet*, but at least someone is getting close. Check out the query language, particularly operators:
http://prometheus.io/docs/querying/basics/

A friend of mine also works at splunk, and they showed me some of splunk's features. As a backing store it appears to fit much of what I was describing as well. It has reasonable group-by semantics, for example. Fundamentally splunk is basically a timeseries DB; it just happens to most often be used, like ES, for processing logs.

So, with any luck my old monitoring posts will soon become defunct. Here's hoping!

2015-01-05

Original AVL paper

This came up the other day, it wasn't on my blog and I wanted to find a link to it. So here it is:

An algorithm for the organization of Information
https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CCAQFjAA&url=http%3A%2F%2Fmonet.skku.ac.kr%2Fcourse_materials%2Fundergraduate%2Fal%2Flecture%2F2006%2Favl.pdf&ei=aK-qVKbcL8udNt_Jg4gE&usg=AFQjCNHp14CdgTfIBGOoBdk4OX17ryVCjw&bvm=bv.82001339,d.eXY&cad=rja

This is an amazing read because you suddenly realize just how *differently* we thought about algorithms and programming back then. This paper doesn't even touch on the API; it's all about the layout of the tree in memory. When's the last time you read an algorithm description that didn't mention the API?

Anyway, it's a neat read. Give it a try!

2014-08-22

Monitoring, post3, tools

Alright, it's time to write this. I had intended to write this much, much sooner, but then the "Group By" realization hit me, and I got stuck for a while trying to prove out some features in the system I'm actually using.

Our goal is a system that gathers whitebox and blackbox data from applications, and whitebox data from machines, and lets us easily query, view, graph, and alert on that data with the feature set from my previous post: .

There are also some soft requirements in practice for really using a DB:
  • Can run at some scale
  • We either need someone to host it, or we need to run it ourselves
  • Tools tend to only go so far, and it's nice not to have to throw it all away if you hit the edge. This makes free software (FSF definition) nice
  • It's nice to use other pre-existing tools with it without much modification

Nagios et al.

Nagios is the industry-standard solution. Nagios is what many, many systems are trying to emulate, and as a result its flaws are endemic to the world of monitoring systems.

First, let me say that I have not used Nagios, but I have used some of its competitors.
 
The main problem is that, fundamentally, these systems mostly are not meant to do the type of things I discussed in my previous post. They're meant mostly to alert on directly monitored values (e.g. is memory usage too high?). They then give you some small, finite set of "aggregations" so you can see, for example, how much memory *all* your machines are using together. This works fine until it doesn't. Values like that tend to be scale dependent and take constant tweaking.

A popular meme right now is "automated thresholding": that's when your system figures out what "normal" is, and alerts when a value is some statistically significant distance outside that "normal". This is an attempt to solve the problem of scale independence. But it has 2 problems. The first is that sometimes your "normal" drifts, and that drift is indicative of, e.g., your users slowly outgrowing how far your system can scale. After a year of slow drift, "normal" could be 5% of queries being dropped... that's not okay.

Note that I said 5% of queries being dropped. Ratios are a much simpler solution to this problem. In practice this does not entirely solve scale independence. If you don't already know why, Google for the "law of large numbers". That said, while it's not perfect, it gets you 90% of the way there, and I would argue that the fallibility and complexity of AI-type approaches are not worth the tiny bit of scale independence they buy you.

On top of this, it's designed to be "easy to use", which means everything is based on clicky-button interfaces: no backup or versioning for your configs, no idempotent setup, etc. It's a nightmare from the perspective of a reliability engineer as soon as you view that system as something you have to manage and not just use, even if you're paying someone else a ton of money to run it for you.
Basically, don't believe a monitoring tool will ever solve a problem for you, or allow you to think less about your system. I'm just going to leave it there, and let you generalize out to all of the other systems in this same family.

Collection -> DB -> alerting solutions

Okay, so if you get frustrated with the tools in the above model and start poking around, what you quickly find is that folks in the know are using systems with 3 separate components: data collection, a timeseries database, and an alerting system that queries the DB.

This has some nice advantages. You can use lots of different collection engines for collecting different types of data (e.g. statsd for application-level data, various services for whitebox probing, etc.), yet all the data ends up in the same DB, making building UIs and integrating data across those collection engines a breeze.

There is one notable downside, which is that your alerting is fundamentally polling based. It has to do expensive queries against the DB every N seconds so it can alert you if something goes bad. If you're doing polling you can usually assume you're doing something wrong. The *right* answer would let alerting trigger at the collection level. BUT we still want to incorporate old data in some cases, so it also needs to go back to the DB to get data, or keep a local store, or something.

There is one system out there that strives to do this a better way called Prometheus. Unfortunately, it's not ready for production yet, but keep an eye on it.

Collection

This part of the equation is boring, honestly. We need something to gather stats from machines and applications and feed those stats into the database. There are a hundred ways to do this that are fine. As long as it doesn't block the application, we can get application- and machine-level data, and the data gets to its destination, it'll work.

DBs

Before we dive too deep we need to pare down the field. So, looking at my earlier requirements list it's pretty clear that we need a rich query interface with some understanding of timeseries. Based on that I'm tossing out options like Postgres, MySQL, etc.

There are also time-series plugins, modes, etc. for some databases, but all of the databases like this that I found store time-series in the wrong way. To do the calculations I've discussed earlier we need to be able to query for a timeseries, and then compute on a subset of that timeseries. The "plugin" approach tends to store a *set* of timeseries as something you can query for, which really doesn't help us much.

Here's a nice list of open source timeseries databases.
https://en.wikipedia.org/wiki/Time_series_database 

Here are the ones I've looked at.
Now, I'm certain that I'm going to get something wrong in this post; there are simply too many details. My goal is to let others share some of the realizations and research I've had and save a little time. So, please bear with me, and if you find mistakes drop a comment and I'll try to fix them.

Druid:
Looks like a promising system, but everyone says it's a total bear to actually run. It's based on zookeeper, and it uses a mysql instance for its metadata. Those are already some interesting requirements, but not terrible. Then you start looking at its pieces: it has a controller node, a broker node, a historical node. A minimal system is just very complex... too much for me.

Graphite:
Seems to be the established "common-sense" answer among the options. It's open source. It has a relatively rich query language. As of version 0.10.0 it has "map" and "reduce" which give a generalized Group By semantic like I talked about in previous posts. It's not terrible to run yourself, though it does have some scaling limitations. There are hosted options where they've already worked out the scaling issues for the most part - sadly hostedgraphite doesn't support 0.10.0 yet, but they are working on it (I've been talking with them :D).

The biggest win of graphite though is that it's just got tons of community support. Everything integrates with it, all the collection tools, and many of the front-end alerting tools out there.

InfluxDB:
InfluxDB is the shiny new guy in town. I was really excited to read through their website. It's open source, the developers are also doing a hosted option. The language is very rich, and unlike graphite features like "group by" were built in from the beginning, so it's supported properly. Unfortunately, it's rather new, so still a bit too new for my blood. If you want the new hotness though, give it a try.

KairosDB:
From what I understand this is basically a rewrite of OpenTSDB. Like OpenTSDB it runs on hbase (translation: it's impossible to administer unless you're already a hadoop guru). Unlike OpenTSDB it can also run on Cassandra. Cassandra is the open-source bigtable written in Java, except that unlike bigtable it's also a storage stack. The problem is, it's not that reliable; they still haven't hammered out a lot of bugs. Is it usable? Yes. Do you want to deal with it if you don't have to? No. It also has nice query semantics supporting most of the features you might want.
This might be a decent option if you can find a good hosting provider. Non-hosted though, it's probably a no-go for a small organization. For a big organization it might be just the ticket.

OpenTSDB:
See Kairos. OpenTSDB is a true standard; it's all open source, all that great stuff. But, like kairos, it's nasty to actually run. Unlike kairos it only runs on hbase, that is, the hadoop stack. If you already run a hadoop stack that's all well and good, no big deal. If you don't, it's a heck of an undertaking just for your monitoring database. That said, I understand that it scales quite well. Again it has support for rich queries and all the shinies.
Informix:
From IBM, and it's closed source. It's been around a pretty long time, so it's really optimized for a different set of use-cases, from what I gather. From my perspective I'm putting a lot of time and energy into a system, and closed source scares me because I can't jump ship if I want to. Systems like graphite have standard interfaces that are supported by lots of systems. Informix is the exact inverse of this: it's 100% proprietary, and no one wants to go within 100 yards of something owned by IBM (for good reason).

Of these, I ended up picking graphite. I state that here to explain the next section.

Alerting

This part is harder. After scouring the field for systems that alert based on data in graphite, I found that they are all seriously flawed.

The biggest flaw is that they all use configuration backed by databases. This means that my data about what to alert on, which is fundamentally configuration, is instead live data in production. If I lose that database in production I'm going to be stuck rebuilding all of my monitoring from scratch. If I make a mistake and bump the wrong button, or someone changes something and we decide it was wrong, we have no rollback, no tracking; any features related to versioning have to be built in. Also, I can't do code reviews on changes, branch them, and do everything else source control lets me do.

For me, this shot down every single system I could find. My conclusion? Build one myself.