DynamoDB Hum
“DynamoDB is a fast, fully managed NoSQL database service that makes it simple and cost-effective to store and retrieve any amount of data, and serve any level of request traffic. All data items are stored on Solid State Drives (SSDs), and are replicated across 3 Availability Zones for high availability and durability.”
DynamoDB is interesting because of its ability to scale. Let us talk about the bits of DynamoDB relevant at scale.
Keys
Imagine our DynamoDB table is a giant hash. We call each key of that hash the hash key – clever, eh? When we query or get an item from DynamoDB we must know (or be able to deduce–more on this later) its exact hash key.
It is possible to configure DynamoDB such that there is one item–an item is a hash–per hash key. That is boring. We will not talk about that.
Instead imagine that our giant hash is full of arrays that are full of items. These arrays are indexed by each item’s range key and can be accessed using the Query API.
Consider a table containing Articles. We can use the UserId value of the article as the hash key and the UpdatedAt value as the range key. We can now use the Query API to efficiently retrieve a user’s articles for the last thirty days. DynamoDB goes to the hash key location, scans through the range keys (we can tell it which direction to look) and there we go.
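To make that concrete, here is a minimal sketch using the aws-sdk-dynamodb gem (the table name, attribute names, region and thirty-day window are all illustrative, not prescriptions):

    require 'aws-sdk-dynamodb'
    require 'time'

    client = Aws::DynamoDB::Client.new(region: 'us-east-1')

    # A user's articles from the last thirty days, newest first.
    resp = client.query(
      table_name: 'Articles',
      key_condition_expression: 'UserId = :uid AND UpdatedAt >= :since',
      expression_attribute_values: {
        ':uid'   => 'user-123',
        ':since' => (Time.now - 30 * 24 * 60 * 60).utc.iso8601
      },
      scan_index_forward: false # walk the range keys newest-to-oldest
    )

    resp.items.each { |article| puts article['UpdatedAt'] }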
Indexes
Every index is, in effect, another table that each write must also be applied to, so one table with two indexes needs roughly triple the write capacity of the table alone. Solutions that involve external indexes–separate tables we maintain ourselves–require managing write capacity across multiple tables and accepting possible data inconsistency–boo–but in the next few weeks AWS will offer Global Secondary Indexes, which will hopefully render such schemes obsolete. Local Secondary Indexes–additional range key indexes–are available now.
What is an “additional range key index”?! Sorry! Remember our giant hash filled with arrays filled with hashes, those arrays ordered on a range key value selected from the hashes they contain? LSIs allow us to specify additional range keys. To extend our example above–UserId as the hash key, UpdatedAt as the range key–we add an LSI that gives us a second range key on Name. Now we can also query the Articles table for a user’s articles that have a certain name, or whose names start with ‘z’, or whatever.
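Here is a rough sketch of that table definition plus an LSI query, again assuming the aws-sdk-dynamodb gem; the index name NameIndex and the attribute names are placeholders of my own:

    require 'aws-sdk-dynamodb'

    client = Aws::DynamoDB::Client.new(region: 'us-east-1')

    client.create_table(
      table_name: 'Articles',
      attribute_definitions: [
        { attribute_name: 'UserId',    attribute_type: 'S' },
        { attribute_name: 'UpdatedAt', attribute_type: 'S' },
        { attribute_name: 'Name',      attribute_type: 'S' }
      ],
      key_schema: [
        { attribute_name: 'UserId',    key_type: 'HASH'  },
        { attribute_name: 'UpdatedAt', key_type: 'RANGE' }
      ],
      local_secondary_indexes: [
        {
          index_name: 'NameIndex', # same hash key, Name as a second range key
          key_schema: [
            { attribute_name: 'UserId', key_type: 'HASH'  },
            { attribute_name: 'Name',   key_type: 'RANGE' }
          ],
          projection: { projection_type: 'ALL' }
        }
      ],
      provisioned_throughput: { read_capacity_units: 10, write_capacity_units: 10 }
    )

    # A user's articles whose Name starts with 'z', via the LSI.
    client.query(
      table_name: 'Articles',
      index_name: 'NameIndex',
      key_condition_expression: 'UserId = :uid AND begins_with(#n, :prefix)',
      expression_attribute_names: { '#n' => 'Name' }, # Name is a reserved word in expressions
      expression_attribute_values: { ':uid' => 'user-123', ':prefix' => 'z' }
    )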
Global Secondary Indexes will allow us to add additional hash keys to index our tables. If we add Name as a GSI instead of an LSI then we can query all articles by name rather than just querying articles by name for a given user.
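Once GSIs arrive, a query against one might look something like this–GlobalNameIndex is a hypothetical index whose hash key is Name:

    require 'aws-sdk-dynamodb'

    client = Aws::DynamoDB::Client.new(region: 'us-east-1')

    # All articles with an exact Name, across every user, via a GSI keyed on Name.
    client.query(
      table_name: 'Articles',
      index_name: 'GlobalNameIndex',
      key_condition_expression: '#n = :name',
      expression_attribute_names: { '#n' => 'Name' },
      expression_attribute_values: { ':name' => 'Hello World' }
    )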
Partitioning
The write capacity of a table is divided evenly among a number of partitions. We do not know the number of partitions, but it changes as capacity is added or removed. To make use of our total provisioned write capacity we must spread writes evenly across partitions, because exceeding the capacity of any one partition gets requests throttled even while the table as a whole is under its provisioned limit–a hotspot. One approach is to choose random hash keys, but this requires that we always know the exact (random) hash key of the item we want. To get around this a hash key index (external or GSI) is added. But! That index has the same hash key constraint–indexes are just other DynamoDB tables, so the index hash key will suffer the same partitioning hotspot issues. Thus we have not really resolved the issue by choosing a random primary hash key but rather moved it elsewhere and doubled our required write capacity for the effort.
What to do now? There are two primary options. One is to use something useful, like a UserId, as the primary hash key and add something to help randomize it. For example we could prepend a random digit 0-9 to each hash key. If we are storing a key for a specific item somewhere, we can store it with the random digit and use that to get the item directly. If, on the other hand, we need to query for an item and we are not sure which of the ten possible hash keys it uses, we have to query each of them (or do better, if we can somehow narrow it down) until we find it.
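A minimal sketch of that idea, assuming the aws-sdk-dynamodb gem and ten shards; the helper names and table layout are illustrative only:

    require 'aws-sdk-dynamodb'
    require 'time'

    SHARDS = 10

    # Write under a prefixed hash key, e.g. "7.user-123", so one user's items
    # are spread across up to ten hash keys (and, hopefully, ten partitions).
    def put_article(client, user_id, article)
      sharded_key = "#{rand(SHARDS)}.#{user_id}"
      client.put_item(table_name: 'Articles', item: article.merge('UserId' => sharded_key))
      sharded_key # stash this alongside any reference to the item for direct gets later
    end

    # When we do not know which prefix an item landed under, try each shard in turn.
    def find_article(client, user_id, updated_at)
      SHARDS.times do |digit|
        resp = client.get_item(
          table_name: 'Articles',
          key: { 'UserId' => "#{digit}.#{user_id}", 'UpdatedAt' => updated_at }
        )
        return resp.item if resp.item
      end
      nil
    end

    client = Aws::DynamoDB::Client.new(region: 'us-east-1')
    put_article(client, 'user-123', { 'UpdatedAt' => Time.now.utc.iso8601, 'Title' => 'Hello' })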
The other option is to spread writes for a given hash key out over time. Continuing with our example above, let us imagine our app can import a user’s articles from WordPress in bulk. When that job runs for a prolific blogger we are suddenly beating the snot out of that UserId hash key. Throw those writes in a queue, work on them over time, and that snot-beaten spot is not so hot.
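Here is a toy sketch of the idea–in practice we would use a real queue (SQS, a background-job library, etc.), and the 20-writes-per-second budget is a number I made up:

    require 'aws-sdk-dynamodb'
    require 'time'

    WRITES_PER_SECOND = 20 # made-up budget for a single hash key

    # Toy worker: drain a backlog of imported articles slowly enough that one
    # prolific blogger's hash key never sees more than the budget per second.
    def drain(client, queued_articles)
      queued_articles.each_slice(WRITES_PER_SECOND) do |batch|
        batch.each { |article| client.put_item(table_name: 'Articles', item: article) }
        sleep 1 # spread the burst out instead of hammering one hash key
      end
    end

    client = Aws::DynamoDB::Client.new(region: 'us-east-1')
    # Each queued article is a complete item hash, keys included.
    imported_articles = (1..100).map do |i|
      { 'UserId' => 'user-123', 'UpdatedAt' => (Time.now - i).utc.iso8601, 'Title' => "Imported #{i}" }
    end
    drain(client, imported_articles)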
Which approach to take (perhaps both!) and the implementation details will naturally depend on our data and usage patterns, but now we know enough to reason about the problem.
Patterns
DynamoDB’s constraints might be off-putting for a lesser data store, but DynamoDB’s speed, scalability, simplicity and cost make its constraints worth thinking about. There are interesting and useful patterns to be discovered.
For example, let us go back to our Articles table example from above. We will use the UserId for the hash key again, but this time we are going to make a composite range key by prepending a SequenceId to a randomly-generated UUID. [Note that foreign keys–from Dynamo or other data stores–are an interesting choice to use here instead of random UUIDs, as long as they are unique within the primary hash key.] We take advantage of DynamoDB’s flexible schema and store the metadata item for a sequence–representing a Category–in position 0 and store the data items for a sequence–representing articles–in positions 1-n. We write the metadata item last to ensure consistency (potentially at the expense of some orphan items) and persist the total number of items written in it. We could also persist some sort of sequence index here, tags related to the sequence, etc. Once the metadata item is written we consider the sequence immutable.
Now we can query a user’s categories by searching for all items whose range key starts with 0. From there we select a category whose articles we want; we know from the metadata item the sequence’s UUID and how many items are in it, so we batch_get them. Thus if our metadata indicates our sequence contains three items we do a batch_get for hash_key[1.uuid, 2.uuid, 3.uuid] and DynamoDB efficiently retrieves the sequence for us. Yay. We could, of course, get them one at a time or in paginated batches instead. This covers a lot of use cases and does so without using an index. Naturally an appropriate solution to the partitioning problem above must be implemented as well.
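Here is a rough sketch of that pattern with the aws-sdk-dynamodb gem; the SeqKey attribute name, the SecureRandom UUIDs and the Category/Count attributes are my own illustrative choices, not prescriptions:

    require 'aws-sdk-dynamodb'
    require 'securerandom'

    client = Aws::DynamoDB::Client.new(region: 'us-east-1')

    # --- Writing a sequence (a Category and its articles) ---------------------
    user_id  = 'user-123'
    seq_uuid = SecureRandom.uuid
    articles = [{ 'Title' => 'First' }, { 'Title' => 'Second' }, { 'Title' => 'Third' }]

    # Data items live at positions 1..n of the sequence.
    articles.each_with_index do |article, i|
      client.put_item(
        table_name: 'Articles',
        item: article.merge('UserId' => user_id, 'SeqKey' => "#{i + 1}.#{seq_uuid}")
      )
    end

    # The metadata item goes in last, at position 0, carrying the item count.
    client.put_item(
      table_name: 'Articles',
      item: {
        'UserId'   => user_id,
        'SeqKey'   => "0.#{seq_uuid}",
        'Category' => 'Recipes',
        'Count'    => articles.size
      }
    )

    # --- Reading it back -------------------------------------------------------
    # 1. List the user's categories: every range key that starts with "0.".
    metas = client.query(
      table_name: 'Articles',
      key_condition_expression: 'UserId = :uid AND begins_with(SeqKey, :zero)',
      expression_attribute_values: { ':uid' => user_id, ':zero' => '0.' }
    ).items

    # 2. Pick a category, then batch_get its items 1..Count in one round trip.
    meta = metas.first
    uuid = meta['SeqKey'].split('.', 2).last
    keys = (1..meta['Count'].to_i).map do |i|
      { 'UserId' => user_id, 'SeqKey' => "#{i}.#{uuid}" }
    end

    resp = client.batch_get_item(request_items: { 'Articles' => { keys: keys } })
    sequence_items = resp.responses['Articles']

(batch_get_item tops out at 100 items / 16 MB per call, so longer sequences need to be fetched in chunks.)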
Deleting
Deleting items from DynamoDB at scale is expensive. Avoid it if possible. Prefer to rotate tables instead–a new table for each month, for example–eventually expiring or moving old tables to cold storage.
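A tiny sketch of the rotation idea–the monthly naming scheme is just one possibility:

    require 'aws-sdk-dynamodb'
    require 'time'

    # Write to this month's table; retire old months by dropping (or archiving)
    # whole tables instead of deleting items one by one.
    def current_articles_table
      "Articles-#{Time.now.utc.strftime('%Y-%m')}"
    end

    client = Aws::DynamoDB::Client.new(region: 'us-east-1')
    client.put_item(
      table_name: current_articles_table,
      item: { 'UserId' => 'user-123', 'UpdatedAt' => Time.now.utc.iso8601, 'Title' => 'Hi' }
    )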
Backups & Reporting
If we are running at scale we are probably going to be very unhappy trying to restore a DynamoDB table of any significant size from some other medium. Rotating tables helps here by keeping things small, but the only real way to quickly recover from, say, an accidentally dropped table is a hot spare table. Likewise, if we want to analyze the data in a table–for reporting or other data-mining activity–the recommended method is to run the analysis against a hot spare so as not to impact the performance of the production table.
Scan API
The Scan API iterates over each item in a DynamoDB table. Just no.
Conclusion
Whew! There is plenty more to talk about but this should be enough to get us moving in the right direction. I will write a follow up article if there is interest. Have fun! Feel free to hit me up @ggfuzz.
UPDATE
I wrote a Ruby gem for this.