Speed up MySQL in Rails

I was looking at how ActiveRecord instantiates objects from the database and populates the @attributes attribute. There is a method called all_hashes which generates the hashes used for @attributes, so I looked at that. Generating a whole hash for each database row is somewhat expensive in both time and memory. Perhaps we can do better?

I note that Stefan Kaes did some work a year and a half ago on implementing all_hashes in C, but it still relies on hashes being generated – I don’t think anyone has attempted what I do here. So without further ado, I present slim_attributes, the non-hash implementation of all_hashes.

Here are some quick, unscientific benchmarks to give you an idea. Notice that the speedup relative to plain ActiveRecord depends on how many attributes are actually accessed on the model objects (because slim_attributes lazily instantiates them into strings). Two models were used; one had 44 attributes and the other 104.

[Benchmark chart: speed relative to plain ActiveRecord, by the number of attributes accessed, for the 44- and 104-attribute models]

View the plugin here – and install it with:

script/plugin install http://pennysmalls.com/rails_plugins/slim_attributes

then follow the instructions in the README below to compile it (yes, it should be made into a gem that compiles itself). [UPDATE: there is now a better rubygem, see here and here.]

==========

SlimAttributes

This is a small patch to the ActiveRecord MySQL adapter that stops Rails from using the existing all_hashes / each_hash mechanism – which is what is called when you do a find. It is faster, and uses less memory.

Measuring with just ActiveRecord code – fetching data from the database – we see anything from very little up to a 50% (or more) speed increase, but it really depends on your system and environment, and on what you do with the results from the database. Measure your own system and send me the results!

Installation

You’re going to need the MySQL headers for this to work.

cd vendor/plugins/slim_attributes
ruby extconf.rb --with-mysql-config
make
sudo make install

Description

The reason for overriding all_hashes is threefold:

* making a hash of each and every row returned from the database is slow
* Ruby makes frozen copies of each column name string (for the keys), which results in a great many strings that are not really needed
* we observe that it’s not often that all the fields of the rows fetched from the database are actually used

So this is an alternative implementation of all_hashes that returns a ‘fake hash’ containing a hash of the column names (the same hash of names is shared by every row), plus the row data in a buffer memcpy’d directly from the MySQL API. The field contents are then instantiated into Ruby strings on demand – Ruby strings are only made if you need them.

Note that if you always look at all the columns when you fetch data from the database, this won’t necessarily be faster than the unpatched MySQL adapter. But it won’t be much slower either, and we expect that most of the time not all the columns of a result set are accessed.

Note also that the ‘fake hash’ quacks like a hash in many ways, but not all. So @attributes in an ActiveRecord object may not behave as you expect, and in particular it won’t work if you try to add a key that is not a column name in the result set.

@attributes["not a column name"] = "something"
=> RuntimeError: Key was not a column name from the result set
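To illustrate the idea (the real implementation is a C extension working directly against the MySQL client API – the Ruby below is only a rough sketch, and the class and variable names are invented):

# Rough sketch of the 'fake hash': one shared column-name index per result set,
# raw field data per row, and Ruby strings built only when a column is read.
# The real slim_attributes does this in C; all names here are hypothetical.
class FakeRow
  def initialize(column_index, raw_fields)
    @column_index = column_index   # e.g. { "id" => 0, "name" => 1 }, shared by all rows
    @raw_fields   = raw_fields     # field data copied straight from the result set
    @materialized = {}             # Ruby strings, built lazily
  end

  def [](key)
    idx = @column_index[key]
    raise "Key was not a column name from the result set" unless idx
    @materialized[key] ||= (@raw_fields[idx] && @raw_fields[idx].dup)
  end

  def []=(key, value)
    raise "Key was not a column name from the result set" unless @column_index[key]
    @materialized[key] = value
  end

  def keys
    @column_index.keys
  end
end

Reading product.name would then build just that one string; columns you never touch are never turned into Ruby objects.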

Hash has many methods that are not supported by the fake hash, but I found that the ones I have implemented have been sufficient for use in our Rails app. It should be fairly easy to implement most of the missing methods if needed, but I did not wish this patch to be larger than necessary.

==========

No warranty – this plugin should be considered experimental and likely needs some more work if you want it to be foolproof. That said, we are using it in our production environment with good results.

==========

Finally, it’s interesting to note that Dan Chak wrote some code (hash_extension) that returns plain hashes from the database rather than ActiveRecord objects, for when you just want the data without any fancy associations and so on. It’s much faster, which shows that creating the ActiveRecord objects is fairly slow. I’ll take a look at combining this with slim_attributes – returning fake hashes should be faster still. (Combining his 50% improvement with my 50% should yield instant results 🙂)

Update

I have now tested hash_extension with and against slim_attributes. My test fetched all records from two separate ActiveRecord models 100 times.

Plain ActiveRecord 38.3s
Using find_as_hashes 35.1s
Using slim_attributes 13.0s
Using both 10.4s

Clearly slim_attributes makes the biggest difference, but it should be noted that this is really the ideal case – where Model.find(:all) is done without actually accessing any of the attributes.
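For reference, a benchmark along those lines might look roughly like this. It is a minimal sketch only; the model names and the timing harness are assumptions, not the actual test code:

require 'benchmark'

# Fetch every record from two models 100 times and report the wall-clock time.
# Account and Product are placeholder model names.
seconds = Benchmark.realtime do
  100.times do
    Account.find(:all)   # Rails 2.x-era finder, as used throughout the post
    Product.find(:all)
  end
end
puts "Plain ActiveRecord: #{seconds}s"

Run it once against an unpatched app and once with slim_attributes (and/or hash_extension) installed to get figures like those in the table above.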


14 thoughts on “Speed up MySQL in Rails”

  1. Compiled fine on OS X for me [with port and port’s mysql installed]:

     sudo gem install slim-attributes -- --with-mysql-config=/opt/local/bin/mysql_config5

     As a note: it would be interesting to see if the "all hashes in C" work could somehow be integrated, too :) Dunno how that would work, but hey.

     -R

  2. Hello,

     A previous commenter here pointed me to your gem in the comments for hash_extension on my blog. Since your graph above depends on how many attributes of the result you access, I was wondering what your benchmark test looks like. It looks as though the test may be counting DB time, conversion to AR/Slim/Hash, plus post-processing. Would you mind posting the code? I’m surprised hash_extension gave you only an 8% improvement, but I suppose it could relate to the complexity of the objects involved, too.

     Anyway, I would love to see a Postgres implementation of your gem so that I can try it out!

     Thanks,
     Dan

  3. How were you able to ‘combine’ hash_extension + this?

     Also, is there a benefit of using hash_extension in that ‘later’ accesses are faster? [Like calling instance.name would be faster using hashes.] Thoughts?

  4. Dan: I will email you directly the code I am using to benchmark against hash_extension. The meat of it is like this (there are 100 items in the DB):

     2000.times do |n|
       product = Product.find(:all)[n % 100]
       x = product.name
       y = product.comment
       z = product.created_at
     end

     Just running it now gives these figures:

     Without slim-attributes: 11.133781s
     Without slim-attributes, with hash_extension: 9.673203s
     With slim-attributes: 6.658067s
     With slim-attributes, with hash_extension: 5.422445s

     Without slim-attributes – hash_extension improvement: 13.12%
     With slim-attributes – hash_extension improvement: 18.56%
     Without hash_extension – slim-attributes improvement: 40.20%
     With hash_extension – slim-attributes improvement: 43.94%
     With hash_extension & slim-attributes – improvement over plain AR: 51.30%

     The table used has over 40 columns – this is realistic for our live application. If you use a smaller table then hash_extension begins to make a bigger difference than slim-attributes. I will look at writing a PostgreSQL implementation; it shouldn’t be too hard to port.

     Roger: hash_extension naturally uses slim-attributes when slim-attributes is installed, because it uses connection.select_all, which is what eventually calls all_hashes.

     The benefit of hash_extension is avoiding the ActiveRecord object creation. The benefit of slim-attributes is avoiding the creation of a hash, and that the attributes are lazily created as Ruby objects on demand. You get the ultimate performance by combining the two – assuming you don’t need ActiveRecord objects.

  5. Very interesting about the results in conjunction with, and compared to, hash_extension. I guess what matters is whether the number of columns received is large, and also whether any of those columns are, for example, extremely large [and unused], both of which would seem to benefit more from slim_attributes.

     My latest thought with this style of speedup is I wonder if you can’t do something like object caching… Background: if I remember correctly,

     def action_name
       @@my_setting ||= Setting.find(:some_thing)
     end

     is actually quite fast for every run [except the first 😛]. Furthermore, at least where I work, we have a few small-ish tables that we constantly query for ‘similar’ data. Say one table with 100 rows and another with 3000 are basically queried over and over, but typically with differing queries and different elements returned. Also note that, as mentioned with hash_extension, a lot of the time you just want to view an AR object, not edit it.

     So I wonder if there’s an object cache around somewhere that does something like the following (roughly sketched below, after this comment):

     a = Program.find(:some_conditions, :cache_the_individual_objects => true)
     b = Program.find(:some_conditions_which_will_make_it_share_some_objects_with_a, :cache_the_individual_objects => true)

     When it runs b, it gets back the results, then for each entry in the results, instead of immediately creating a new AR object, it first checks whether that exact object already exists somewhere [pre-cached]. If it does, it just uses it [avoiding creating an AR object – just re-using the existing one].

     Now throw this all into an LRU cache so as not to cache too much and, for some queries from a common small set, I could see this resulting in a speedup. I suppose there could even be further optimizations, like querying ‘only the id’ from the DB [then figuring out which ones aren’t cached, and creating objects for them], and there could be other optimizations like a C-based ‘heap-based’ cache, freezing the AR instance attributes, etc.

     Thoughts?

     -R
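A rough sketch of the identity-map idea described in that comment might look like the following. Everything here is hypothetical (the :cache_the_individual_objects option does not exist in ActiveRecord), and this naive version still instantiates each row before the cache lookup, so it only illustrates the object re-use part of the idea:

# Hypothetical sketch of an identity-map style cache for AR objects.
# Not a real plugin: ObjectCacheFind and find_with_object_cache are invented
# names, and cache expiry / LRU trimming is left out entirely.
module ObjectCacheFind
  # One cache per model class that extends this module.
  def object_cache
    @object_cache ||= {}
  end

  # Run a normal find, then swap each freshly built record for a previously
  # cached object with the same id, if one exists.
  def find_with_object_cache(*args)
    find(*args).map { |record| object_cache[record.id] ||= record }
  end
end

class Program < ActiveRecord::Base
  extend ObjectCacheFind
end

a = Program.find_with_object_cache(:all, :conditions => "category = 'news'")
b = Program.find_with_object_cache(:all, :conditions => "featured = 1")
# Rows appearing in both result sets now share a single cached Ruby object.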

  6. Well, your number 1 problem is deciding when to expire your cached objects (particularly if you run multiple instances – e.g. Mongrels or Thins). IF expiry can be handled in a simple way for your case, and IF storing all your objects in-process does not use too much RAM (don’t forget the AR associations may be stored there too), then this could be a good approach.

     In our app we do, for instance, cache in memory the results of queries against our Countries table. For various reasons the available countries we wish to show do in fact change, but not that often. We use a 5-minute timeout for this cache, which means that if we change something it will show up reasonably quickly. I’ll post the code if you are interested.
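That code was never posted in the end (see comment 8), but a minimal sketch of a time-expiring in-process cache along those lines might look like this. The TimedCache name and fetch API are assumptions, not the code actually used:

# Minimal sketch of an in-process cache with a five-minute expiry.
# Names are hypothetical; this is not the production code described above.
module TimedCache
  EXPIRY = 5 * 60  # seconds

  # Return the cached value for key, re-running the block if the entry
  # is missing or older than EXPIRY seconds.
  def self.fetch(key)
    @store ||= {}
    value, stored_at = @store[key]
    if stored_at.nil? || Time.now - stored_at > EXPIRY
      value = yield
      @store[key] = [value, Time.now]
    end
    value
  end
end

# Usage: cache the Countries query for up to five minutes per process.
countries = TimedCache.fetch(:available_countries) do
  Country.find(:all, :conditions => { :available => true })
end

Each Mongrel process keeps its own copy, so a change can take up to the timeout to appear everywhere – which is the trade-off described above.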

  7. Sure, go for it.

     Yeah, appropriate expiry would be a concern, especially for changing tables. I didn’t worry about it too much because our tables only change like once a day, so a restart would work :)

     Other options would be: tracking somehow the time of the last write to each table [memcache? MySQL?] to know when to invalidate local caches – which would be annoying because of the setup involved. Possibly use an updated_at column, or compute a hash or checksum of each row to make sure it matches our cached version [in C, or use MySQL’s md5()].

     In terms of fitting it into RAM: it appears that with our app, Rails by default uses 50MB, and with all of the “common” objects in memory it uses 74MB. So a little bit of a hit, but probably not too bad. The Mongrels tend to grow to about 100MB total over time anyway, so it’s not out of scope. Also, we could use an LRU cache to limit the memory used. With regard to a time expiry, that’s also a possibility [1].

     An interesting thought is that if you only had one Mongrel and only updated your DB through AR, you could just LRU cache everything you ever use and never worry about having to refresh it. Which would be scary, but imagine the speedup :)

     Just some ideas. For all I know, somebody has written this plugin and I just don’t know about it :)

     Take care.

     -R

     [1] http://www.nongnu.org/pupa/ruby-cache-README.html

  8. OK, I had a look at our caching code – I won’t post it because it’s rather too complex for what you need, as we also use the same code to cache external resources.

     I agree there may be some performance value in checking updated_at in C before anything is set up as Ruby objects, so that an in-process cache can be used, but I fear it won’t just drop in like slim-attributes does now.

     Anyway, we find that only part of the performance issue is solved by doing things at the AR level. Much time is also spent processing and rendering. So we also try to avoid this by using cached fragments whose expiry is tied to particular DB models or DB rows (whatever you specify). If an object changes, or any object in a model changes, then we automatically invalidate a bunch of cached items as appropriate (done via after_save hooks). It’s all automatic, and we also know whether we need to run controller code based on whether the view cache is valid or not.

     For this we employ a home-grown system that is strikingly similar to this, although they were developed independently:

     http://blog.evanweaver.com/files/doc/fauna/interlock/files/README.html
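A stripped-down sketch of that kind of model-tied fragment expiry might look like this, using a plain after_save callback and Rails.cache rather than the home-grown system described above, and with made-up fragment key names:

# Sketch of expiring cached view fragments when a model changes, roughly in
# the spirit described above (the real system is more elaborate). The key
# names here are invented for illustration.
class Country < ActiveRecord::Base
  after_save    :expire_related_fragments
  after_destroy :expire_related_fragments

  private

  def expire_related_fragments
    # A fragment tied to the whole model (written in a view with
    # cache("countries_menu") { ... })...
    Rails.cache.delete("views/countries_menu")
    # ...and a fragment tied to this particular row.
    Rails.cache.delete("views/country_panel/#{id}")
  end
end

A hand-rolled version like this quickly needs bookkeeping of which fragments depend on which models, which is the kind of thing the Interlock plugin linked above handles.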

  9. Re: Note that the mysql patch by Stefan Kaes has been basically incorporated into the Mysql lib as of version 2.8pre1. MysqlPlus is a fork of 2.8pre4, so it also incorporates it.

     Unfortunately, the current gem is an earlier version and thus still creates more garbage than necessary. But anyway, if people do want it they can upgrade.

     I’m hoping that the Rails team will help with the slow rendering time. One can hope.

     Also, does this plugin work with 2.1?

     Thanks!

     -=R

  10. Thanks for the info. I advise you to use the gem – it works perfectly with 2.1:

      http://rubyforge.org/frs/?group_id=5954

  11. Some comments on the gem – just throwing these out:

      Instead of using malloc you may want to use ruby_xmalloc [so Ruby can keep track of the memory growth]. Also, for a small speedup you may want to cache the IDs for your “@variables” and then call rb_ivar_set.

      Looks good though. I might roll it into an upcoming version of mysqlplus :)

      -=R

  12. Wow, that was fast :) That makes me wary of ruby_xmalloc. Maybe it wasn’t such a great idea :)

      A further thought: imagine that a DB row will only ever have one column accessed [quite common for web-driven stuff – you want the content of the blog entry, or the name of the school, and that’s all you want]. That could maybe be optimized. For example: have a flag per row which says “the cache has ever been hit”, then check that each time – if it hasn’t, you can avoid the cache check [or does it already do this?]. Maybe you could cache the index of the most recently requested column.

      Anyway, just thinking out loud. Thanks for your help. slim_attributes is WAY faster than even a C-based all_hashes.

      -=R
