Duct-Taped Search and Sync with ElasticSearch, MariaDB and beanstalkd

I recently started working on a new project (based on a LEMP [Linux, nginx, MySQL, PHP] stack), which has “Search” as one key feature. Search as in “search over user-generated input for text and/or location, with an arbitrary set of filters applied”. And here is how I chose to build it.

I prefer the LEMP stack because I’m quite quick and comfortable in this environment. This project wasn’t meant to challenge my comfort, teach me new programming skills, or dip my toes into the waters of other languages: that will happen soon enough. This one was meant to be quick.

So I started with a database in which the content lives. This is the most straightforward and easiest option: everything is covered, everything has been taken care of.

However, there is an issue: search in MySQL or MariaDB can be quite cumbersome, especially over text. Until recently, you could only apply fulltext search to MyISAM tables, which are slower than the InnoDB engine, and my experience with it has always been rather mixed. Still, this can be overcome. More importantly, I expect a lot of people to create content at the same time, while “search”, with all the nice look-ahead and predictive features, will put a lot of read load on the database. So it would be nice to take some load off the database at some point, and usually you can’t reduce the “write” load.

The next thing I want to do is geo search: I have a certain point in the world and I want to find every entry within a radius of n km around it. This is another challenge. There are scripts for it, and MySQL / MariaDB have gotten better at this (Postgres offers a lot more flexibility from what I read), but I would still have to put the math into the database. I never was good at math – I regret that at times – so I had to find a solution that makes my life easier.

Luckily for me, other people sat down and built something that is meant to do exactly this one thing: search. Filtered, geospatial, flexible fulltext search.

One of these things is ElasticSearch. Like Solr, it is built on the Lucene engine, and it is a document-based, indexed storage. This means you basically get the flexibility of a NoSQL database with a lot of functions on top. And it’s built for speed. And if one server is not enough, you can scale ElasticSearch horizontally and use as many nodes as you require.
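
To give an idea of what that buys me, here is roughly what such a query could look like with the official elasticsearch-php client. This is only a sketch, not the project’s actual schema: the index name “entries”, the field names and the bool/filter syntax of newer ElasticSearch versions are all assumptions.

```php
<?php
require 'vendor/autoload.php';

// Connect to a local ElasticSearch node (elasticsearch-php, v7-style namespace).
$client = Elasticsearch\ClientBuilder::create()
    ->setHosts(['localhost:9200'])
    ->build();

// Fulltext search combined with an arbitrary filter and a geo radius.
$results = $client->search([
    'index' => 'entries',                                   // hypothetical index name
    'body'  => [
        'query' => [
            'bool' => [
                'must' => [
                    ['match' => ['text' => 'coffee']],      // fulltext part
                ],
                'filter' => [
                    ['term' => ['category' => 'food']],     // arbitrary filter
                    ['geo_distance' => [                    // "n km around this point"
                        'distance' => '5km',
                        'location' => ['lat' => 52.52, 'lon' => 13.405],
                    ]],
                ],
            ],
        ],
    ],
]);
```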

Unfortunately, there is a trade-off: things that are optimized for read speed are usually slow to write to. This is also the case with ElasticSearch. So my way to work around this is to store the content in my database and then sync it to ElasticSearch. That sounds good in theory, but what about real life? I have only limited resources (especially time) and … well, I’m a bit lazy.

One approach is real-time synchronisation. There are realtime replication solutions out there. They are not hard to use (probably), and they’re proven. But I wanted a more independent solution, and I only needed uni-directional replication. So I decided to use a message queue to keep my ElasticSearch index in sync.

This has a couple of advantages: my database is only loosely coupled to my index. This means that when I lose the database due to heavy load or server failure, I will still be able to search. On the other hand, I can have more write operations on my database without having to worry about read performance. And when I decide that ElasticSearch is not the thing I want to use anymore, or when I want to change my storage from a SQL database to something more moustache-and-skinny-jeans-y, I just swap out the storage and the rest of the application will keep working as before.

For this task I decided to go with beanstalkd. The reason I chose beanstalkd as my messaging system was somewhat arbitrary: I have worked with it before in another project, and there is a PHP SDK for it (as there is for ElasticSearch, by the way). But no matter what you choose, messaging systems generally work the same way: there is a message queue you put messages into, something that receives those messages, and you can pass data around.

With beanstalkd you have the concept of “tubes”. You can create as many tubes as you want; you just use different names. Into these tubes you can put messages, optionally with data attached to them. I decided to have a tube for every operation that requires a sync: create, update, delete.
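
Putting a message into a tube is essentially a one-liner with Pheanstalk, the PHP SDK I mentioned. A minimal sketch (the constructor shown here is the older Pheanstalk style; newer versions use Pheanstalk::create()):

```php
<?php
require 'vendor/autoload.php';

use Pheanstalk\Pheanstalk;

// Connect to the local beanstalkd daemon (default port 11300).
$queue = new Pheanstalk('127.0.0.1');

// Drop a JSON payload into the "create" tube.
$queue->useTube('create')->put(json_encode([
    'id'   => 42,
    'text' => 'freshly created entry',
]));
```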

Now I need to teach my ORM some logic. I am using (again, because I’m lazy) php-activerecord, but Doctrine or Propel offer the same functionality, so I’ll explain the basic concept behind it: for every operation on your data there are “hooks” that are triggered accordingly. Let’s take creating data as an example: after_create triggers a function which builds a message with the current data and adds it to the “create” tube. When updating data, I use the after_save hook to add a message to the “update” tube containing the changes and an ID that both the ElasticSearch index and my database share. The same goes for deleting data, although this is a bit more complicated. As a general pattern, I don’t immediately purge data from the database but instead mark it as “deleted”. This makes deleting data actually an update operation that has to be treated as a “delete”. But trust me, it’s simple to work around this, and my delete messages all end up in the right tube.
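
Sketched with php-activerecord’s callback syntax, the wiring looks roughly like this. The Entry model, the deleted flag and the queue() helper are made up for illustration, and I use after_update here instead of after_save so the example doesn’t fire twice on a fresh insert:

```php
<?php
class Entry extends ActiveRecord\Model
{
    // php-activerecord hooks: run these methods after the ORM operation
    static $after_create = ['queue_create'];
    static $after_update = ['queue_update'];

    public function queue_create()
    {
        $this->queue('create', $this->to_json());
    }

    public function queue_update()
    {
        // soft delete: a row flagged as deleted goes to the "delete" tube;
        // the real message would also carry the changed fields
        $tube = $this->deleted ? 'delete' : 'update';
        $this->queue($tube, json_encode(['id' => $this->id]));
    }

    private function queue($tube, $payload)
    {
        $queue = new Pheanstalk\Pheanstalk('127.0.0.1');
        $queue->useTube($tube)->put($payload);
    }
}
```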

So now that I have these messages nicely sorted into their tubes, what do I do with them?

I set up a watcher for each of these tubes. The watcher’s purpose is to monitor its tube for new messages. To be honest, you could monitor all the tubes at once and make your life easier, but this little bit of extra effort doesn’t hurt and gives you another advantage: the tubes might have different loads, and you are ready to scale each watcher individually. To take the project I’m working on as an example, I expect the “create” tube to see far more operations than the “update” or “delete” tubes. In addition, I let the watcher itself process the message, so the ability to scale matters more here than it would with a cleaner architecture, where the watchers would start a separate process for each message to do the heavy lifting. My messages don’t carry much data, so I’m safe on this, but if you want to parse hundreds of megabytes of data, a more decoupled approach would be the better way to go.
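
A watcher itself is not much more than this sketch (again assuming Pheanstalk; the process() function is the hypothetical sync logic, outlined further below):

```php
<?php
require 'vendor/autoload.php';

// Which tube to watch is passed on the command line: create, update or delete.
$tube  = $argv[1];
$queue = new Pheanstalk\Pheanstalk('127.0.0.1');
$queue->watch($tube)->ignore('default');

while (true) {
    $job  = $queue->reserve();          // blocks until a message arrives
    $data = json_decode($job->getData(), true);

    $queue->delete($job);               // remove it from the tube right away
    process($data, $tube);              // hypothetical sync logic, see below
}
```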

When a watcher receives a message, the message is immediately removed from the tube and the data is processed. This can take as much time as it needs, because it happens in the background. It is sane, though, to use a timeout after which the processing of a particular message is aborted and the watcher moves on to the next message, to avoid congestion of the tubes. I don’t have this yet; my messages are small and the logic is very simple.
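
Processing a message then boils down to talking to ElasticSearch. A sketch of that process() function with the elasticsearch-php client; the index name and document layout are assumptions again:

```php
<?php
function process(array $data, $tube)
{
    $client = Elasticsearch\ClientBuilder::create()->build();
    $params = ['index' => 'entries', 'id' => $data['id']];

    switch ($tube) {
        case 'create':
        case 'update':
            $params['body'] = $data;    // index() creates or overwrites the document
            $client->index($params);
            break;
        case 'delete':
            $client->delete($params);   // drop the document from the index
            break;
    }
}
```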

The watchers are started and monitored automatically by supervisord. This requires super-user access to the server you are running on. If you don’t have this kind of access, you can choose any other way to fire up these watchers and monitor them – they are basically while(true) loops.
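
A minimal supervisord entry for one of the watchers might look like this (paths and names are placeholders):

```ini
[program:watcher_create]
command=php /var/www/project/watchers/watcher.php create
autostart=true
autorestart=true
stdout_logfile=/var/log/watcher_create.log
```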

And that’s it!

So what did I achieve? I have a searchable index of my data with all the features I needed. I have reduced database load.

Instead of one component (my database), I now have four (my database, ElasticSearch, beanstalkd and supervisord) plus some duct-tape code. This is not the best solution for a production environment, especially in terms of maintenance, but for what I want to achieve it is good enough. Plus it took me only a day to cobble it all together and make it run, which – given the constraints described at the beginning – makes it the optimal solution for me.

In the next post I will share some code for this kind of architecture; I just need it to look more professional and less like the duct tape that it is.

If you find flaws, errors, or you just know better, let me know. I’m @gurkendoktor on Twitter, and I’d love to hear your feedback.

(Update 19.12.: [Alex](https://twitter.com/alexander_kiel) pointed out some unclear passages; I hope these are more comprehensible now.)