Thursday, July 11, 2013

Better Coding Techniques Lower RubyMine's Runtime Memory Requirements

For some coding tasks, I use RubyMine.

RubyMine has great features, but is known for being sluggish and hogging RAM.

Sometimes, tweaking RubyMine's memory configuration file will improve its performance.

RubyMine Memory Configuration

Here are the settings that typically work well for me:

-Xms128m
-Xmx1024m
-XX:MaxPermSize=500m
-XX:ReservedCodeCacheSize=128m
-XX:+UseCodeCacheFlushing
-XX:+UseCompressedOops


If you have installed previous versions of RubyMine, be sure that you're configuring the correct idea.vmoptions file.

Search your file system with something like this:


sudo find / -maxdepth 15 -name "idea.vmoptions" -type f 2>/dev/null


If RubyMine starts to run out of memory, it will warn you and present a dialog that modifies the correct .vmoptions file for you.

And as with just about everything, what you should set yours to depends on your machine and your projects...

Impact of Coding Techniques

Your coding techniques will also affect your memory requirements.

For example, if your code is littered with Model.each do ... constructs, then RubyMine will appear to be a memory hog.

However, if you simply replace your Model.each do's with Model.find_each do's, then you'll likely see that RubyMine (or whatever Rails runtime you're using) no longer gobbles up so much of your computer's memory.

The reason is that with the Model.each do ... construct, ActiveRecord runs a single query for all the records and loads the entire result set into memory; whereas User.find_each(:batch_size => number_of_records_in_batch) do ... loads only number_of_records_in_batch records at a time.
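
To make the batching concrete, here is a minimal sketch of what find_each does conceptually. This is not the actual ActiveRecord source, and it assumes a User model with an integer primary key:


# Conceptual sketch of find_each: walk the table in primary-key order,
# fetching one LIMIT-ed batch at a time so only batch_size records are
# ever held in memory.
def find_each_sketch(batch_size = 1000)
  batch = User.where("id >= ?", 0).order("id ASC").limit(batch_size).to_a
  while batch.any?
    batch.each { |user| yield user }
    # Key the next query on the last id seen, then drop the old batch.
    batch = User.where("id > ?", batch.last.id).order("id ASC").limit(batch_size).to_a
  end
end

find_each_sketch(100) { |user| user.email }


Each iteration issues a LIMIT-ed query keyed on the last id of the previous batch, which matches the queries you'll see in the verification below.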

Verification

Don't take my word for it; rather, open up the Rails console and try it out yourself:

Using .find_each


>> User.find_each(:batch_size => 100) do |user| end
  User Load (9.3ms)  SELECT "users".* FROM "users" WHERE ("users"."id" >= 0) ORDER BY "users"."id" ASC LIMIT 100
  User Load (5.3ms)  SELECT "users".* FROM "users" WHERE ("users"."id" > 100) ORDER BY "users"."id" ASC LIMIT 100
  User Load (6.2ms)  SELECT "users".* FROM "users" WHERE ("users"."id" > 200) ORDER BY "users"."id" ASC LIMIT 100
  User Load (4.6ms)  SELECT "users".* FROM "users" WHERE ("users"."id" > 300) ORDER BY "users"."id" ASC LIMIT 100
  User Load (4.0ms)  SELECT "users".* FROM "users" WHERE ("users"."id" > 400) ORDER BY "users"."id" ASC LIMIT 100
  User Load (4.1ms)  SELECT "users".* FROM "users" WHERE ("users"."id" > 500) ORDER BY "users"."id" ASC LIMIT 100
  . . .

Using .each


>> User.all.each do |user| end
  User Load (416.9ms)  SELECT "users".* FROM "users"
  EXPLAIN (0.6ms)  EXPLAIN SELECT "users".* FROM "users"
EXPLAIN for: SELECT "users".* FROM "users"
                               QUERY PLAN
------------------------------------------------------------------------
 Seq Scan on users  (cost=0.00..1253.72 rows=7372 width=2726)
(1 row)


Your first clue that using .each could be a performance bottleneck is that ActiveRecord felt it was necessary to run an EXPLAIN on the query, because it took longer than the configured auto-explain threshold.
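
If you want to quantify the memory difference yourself, here is a rough sketch you can paste into the same Rails console. It assumes a Linux or OS X host where ps is available, and the numbers are only indicative since Ruby rarely hands freed heap back to the operating system:


# Rough approximation of this process's resident memory, in kilobytes.
def rss_kb
  `ps -o rss= -p #{Process.pid}`.to_i
end

# Run find_each first; once .each balloons the heap, later runs won't
# show much growth.
before = rss_kb
User.find_each(:batch_size => 100) { |user| }   # loads 100 rows at a time
puts ".find_each grew the process by ~#{rss_kb - before} KB"

before = rss_kb
User.all.each { |user| }                        # loads every row at once
puts ".each grew the process by ~#{rss_kb - before} KB"


The absolute numbers will vary, but the gap between the two runs is usually obvious.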

So, take it easy on RubyMine and write better code.
