Archive for the ‘jruby’ Category

Prepping for @MadisonRuby with RVM JRuby and Rails

@MadisonRuby is finally here. I’m excited to attend the conference in my home state’s capital. It will be a unique opportunity to meet up with many of the contributors and characters known in the Ruby community, and to finally put voices and faces to the names. There will be plenty to learn and do at the conference, and lots of kick-ass people to meet while we’re there.

Before we get to the conference, I figured it would be wise to get my local environment prepped and ready for some mid-conference hacking sessions, so I’m not scrambling to install the stuff I need once I’m there.

Searching through a ton of different blogs for the right steps can be a pain, especially under a time crunch at a conference, when you just want to Get Things Done.

If you don’t have a JRuby environment installed yet, hopefully you will find my bash script reusable for your setup. The goal is to install RVM, JRuby and Rails in one simple step. In addition, I have added the installation of the essential Ruby gems you will need in order to develop Rails apps on JRuby, like the JDBC ActiveRecord SQLite3 adapter, plus deployment and debugging gems.

This script is intended to be used from your user/home directory. If you encounter any issues using the script, please let me know as soon as possible and we can work towards fixing it before the conference this week. Enjoy! I welcome any kind of feedback. I’ll see you on Thursday, Friday and Saturday in Mad Town!

# Install RVM

bash < <(curl -s https://rvm.beginrescueend.com/install/rvm);

# Place RVM in .bash_profile so that it's recognized as a command
echo '[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm"  # This loads RVM into a shell session.' >> ~/.bash_profile

source ~/.bash_profile;

# Install JRuby
rvm install jruby-1.6.5;

# Set JRuby paths in your .bash_profile
echo 'export JRUBY_HOME="$HOME/.rvm/src/jruby-1.6.5"' >> ~/.bash_profile
echo 'export PATH="$JRUBY_HOME/bin:$PATH"' >> ~/.bash_profile

source ~/.bash_profile;

# Activate JRuby
rvm use jruby-1.6.5;

# Update the core gem system
gem update --system

# Install core gems needed for rails / jruby development
gem install rails;
gem install jruby-openssl;
gem install activerecord-jdbcsqlite3-adapter;

# Deploy with Capistrano
gem install capistrano;

# To use a debugger (ruby-debug for Ruby 1.8.7+, ruby-debug19 for Ruby 1.9.2+)
# (in a Gemfile, ruby-debug19 is declared with :require => 'ruby-debug')
gem install ruby-debug;
gem install ruby-debug19;

# Bundle the extra gems:
gem install bj;
gem install nokogiri;
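Once the script finishes, a quick sanity check can confirm the environment before you head out. Here is a minimal sketch (the file name sanity_check.rb is just an example); run it with jruby sanity_check.rb:

# sanity_check.rb -- verifies we are running under JRuby and that the key gems load
puts "Engine:   #{defined?(RUBY_ENGINE) ? RUBY_ENGINE : 'ruby'}"   # expect "jruby"
puts "Platform: #{RUBY_PLATFORM}"                                  # expect "java"

require 'rubygems'
require 'openssl'          # provided by the jruby-openssl gem on JRuby
require 'active_record'    # pulled in by the rails gem

puts "ActiveRecord #{ActiveRecord::VERSION::STRING} loaded"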

Avoid Dependency Whack-a-mole with Bundler

We’ve all been there. At one time or another, we’ve installed a gem that has other dependencies that conflict with other gems on our system, and we end up playing dependency whack-a-mole or monkey-patching our way to a working environment. Along comes Bundler, a tool with a goal to be the solution to this problem.

Bundler assumes you have a Gemfile at the root of your project; this applies not only to Rails projects, but to other Ruby projects and gems as well. Entries in the Gemfile look like the following:

# Gemfile format: gem <name> (, <version>) (, :require => <file>) (, :git => <repo>)
gem 'activemerchant', '1.4.2'
gem 'nokogiri'
gem 'faker', '> 0.3'
gem 'decent_exposure', '~> 1.0.0.rc1'
gem 'rspec', '2.0.0.beta.20'
gem 'sqlite3-ruby', :require => 'sqlite3'
gem 'paperclip', :git => 'git://github.com/thoughtbot/paperclip.git'
gem 'deep_merge', '1.0', :git => 'git://github.com/peritor/deep_merge.git'

# specify that a git repo should use a particular branch via the git directive
git 'git://github.com/rails/rails.git', :branch => '2-3-stable'

# install from a specific ref when declaring a gem inline
gem 'nokogiri', :git => 'git://github.com/tenderlove/nokogiri.git', :ref => '0eec4'

# install from local code
gem 'nokogiri', :path => '~/code/nokogiri'

Once you have your Gemfile ready to go and all dependencies listed, you can run:
bundle install
to ensure all dependencies will be installed and available to your application. You may notice that this installs more gems than you’ve listed in your Gemfile. That is because it also resolves and installs the dependencies of your listed dependencies, and so on. Bundler is as conservative as possible, installing only dependency versions that do not conflict with one another.
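Bundler also makes those gems available at runtime. For a plain (non-Rails) Ruby script, a minimal sketch looks like this, assuming bundle install has already been run in the project directory (the file name app.rb is just an example):

# app.rb -- load exactly the gems pinned in Gemfile.lock
require 'rubygems'
require 'bundler/setup'      # restricts the load path to the bundled gems
Bundler.require(:default)    # requires every gem in the default Gemfile group

puts Nokogiri::VERSION       # nokogiri was listed in the Gemfile above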

If you have listed libraries that should only be installed in certain environments, you may run:
bundle install --without development test

The default location for bundler installations is ~/.bundler. To change this, pass a path to the install command:
bundle install --path <directory>

To force installation and not use previously installed gems, use:
bundle install --disable-shared-gems

Once you’ve run bundle install or bundle update, the resolved dependency tree is stored in Gemfile.lock.
It is good practice to check your lock file into the repository so that everyone who gets your project installs the exact same versions of the dependencies that most recently worked.

If you’re developing a Rails application, you can go even one step further to make sure all of your dependencies are packaged up with your rails application structure under vendor/cache:
bundle package

This is especially useful at deploy time, or where you need to depend on private gems not in a public repo.
Bundler has become popular with the advent of Rails 3.0 and later, but you may also want to use this handy tool with a ‘legacy’ Rails 2.3 application. Luckily, the Bundler team has documented the steps you’ll need to take to make this happen for your project.
http://gembundler.com/rails23.html
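The heart of that setup is a config/preinitializer.rb that boots Bundler before Rails does. Here is a minimal sketch of that file, following the linked guide (see it for the matching boot.rb change):

# config/preinitializer.rb (Rails 2.3)
begin
  require 'rubygems'
  require 'bundler'
rescue LoadError
  raise "Could not load the bundler gem. Install it with `gem install bundler`."
end

begin
  ENV['BUNDLE_GEMFILE'] = File.expand_path('../../Gemfile', __FILE__)
  Bundler.setup
rescue Bundler::GemNotFound
  raise "Bundler couldn't find some gems. Did you run `bundle install`?"
end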

JRuby, Rails, Rake and Cron for Automation

There are times when you need to automate a particular periodic process associated with maintaining your application. Many times, these types of jobs could be performed manually, but it can easily be forgotten about, until a few weeks later when you wonder why your data is out of sync with reality. Take, for example, a process that obtains data (legally) through a third party vendor API and imports that data into an internal database so that recent information can be analyzed by users or programmatically processed in a timely manner. Without this data migration process in place, it might take way too long to be considered a usable system by any reasonable person.

Programmers know that it is not efficient to use a series of manual processes to keep a business going. All is fine and well when you are initially testing whether your job runs correctly, but it quickly becomes a tedious or forgettable task. Instead, we should always seek out ways to increase our own efficiency and the efficiency of the people and systems we support. Most operating systems provide at least a way to run tasks on a schedule. If you are deploying to a *nix environment, you’re in luck, especially if the job needs to run in the background.

Cron is a daemon started automatically from /etc/init.d that executes scheduled commands by searching its spool area /var/spool/cron/crontabs for crontab files named after accounts in /etc/passwd. Those crontabs should not be accessed directly; instead, use crontab -l to list a user’s crontab and crontab -e to edit it. Cron also reads the files /etc/crontab and /etc/cron.d. It wakes up every minute, examines the crontabs, and ensures that each job has run by its scheduled time. If need be, the job is executed.

The format of cron entries is defined as the following:

.------------ minute (0-59)
| .---------- hour (0-23)
| | .-------- day of month (1-31)
| | | .------ month (1-12) OR jan,feb,mar,apr ...
| | | | .---- day of week (0-6) (Sunday= 0 or 7) OR sun,mon,tue,wed,thu,fri,sat
| | | | |
* * * * * command_to_be_executed

Cron also comes with a small list of special shortcuts as well.

@reboot   = run once at startup
@yearly   = 0 0 1 1 * = @annually = run once per year
@monthly  = 0 0 1 * * = run once per month
@weekly   = 0 0 * * 0 = run once per week
@daily    = 0 0 * * * = @midnight = run once per day
@hourly   = 0 * * * * = run once per hour

So how can we use cron along with JRuby and Rails?

First, you’ll need to ensure that JRuby is on the PATH of the user who will own the cron jobs. An easy way to do this is to define the paths for JRuby and Java in that user’s .bash_profile.

$> vi .bash_profile

JRUBY_HOME=~/jruby-1.2.0-custom
PATH=$JRUBY_HOME/bin:$PATH
# :wq => to write the changes out the file and quit

$> source ~/.bash_profile

$> echo $JRUBY_HOME
/home/jrubyist/jruby-1.2.0-custom

$> echo $PATH
/home/jrubyist/jruby-1.2.0-custom/bin:
/home/demmons/Desktop/jdk1.6.0_14/bin:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

Try running the job that you wish to execute once, manually, as the appropriate user to test the environment:

su -l jrubyist -c 'jruby -S vendor_api_data_import start'

If everything is running properly, you can be sure that the command you add to the user’s crontab will work.
Let’s say we wanted this import task to run every Monday, Wednesday and Friday at 6:45 pm.
You would add the corresponding entry to the user’s crontab as follows, with a comment describing the entry:

# Automated download/migration process that makes use of the vendor API
# (a single crontab line, wrapped here for readability)
45 18 * * mon,wed,fri source /etc/profile && 
source /home/jrubyist/.bash_profile && 
jruby -S vendor_api_data_import start

Combining Cron, JRuby, Rails and Rake
The example above is all fine and dandy, but what if you want to call a Rake task that needs access to, say, a set of models defined in a JRuby on Rails project? A few days ago, Felipe Coury @fcoury posed this question on Twitter: “What gem/lib/etc do you guys use for Ruby daemons that needs to load the Rails env prior to execution?” I love to browse Twitter for #jruby questions so I can help out by finding answers and writing about them. There’s a fairly straightforward approach you can take to achieve this goal, and the boilerplate process is as follows:

1) Upon deploying the JRuby/Rails application, create a symbolic link to the root of the rails dir.
In the case of JRuby/Rails on JBoss, this means we want a symbolic link to the exploded war file.
2) The Rake task you create should be defined such that it depends on the rails :environment.
3) Tell the cron entry to start the jruby/rake task given the path to that symbolic link.

#1 - This can be automated by using a clever trick to hook into the initialization of the Rails application.
When your container deploys your Rails app, as in the case of JBoss, $servlet_context will be defined,
so a link to the deployed application directory will be created at "/home/jrubyist/deployed-rails-app".

# Create /config/initializers/symlink-deployment.rb
if defined?($servlet_context) && RAILS_ENV == 'production'
  symlink_file = "/home/jrubyist/deployed-rails-app"

  current_link = nil
  if File.exist?(symlink_file) && File.symlink?(symlink_file)
    current_link = File.readlink(symlink_file)
  end

  if current_link != RAILS_ROOT
    system("ln -sf #{File.expand_path(RAILS_ROOT)} #{symlink_file}")
  end
end
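As an aside, the shell-out to ln can be replaced with the standard library if you prefer; a minimal equivalent using the same variables would be:

# FileUtils.ln_sf replaces the link if one already exists, just like `ln -sf`
require 'fileutils'
FileUtils.ln_sf(File.expand_path(RAILS_ROOT), symlink_file)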

#2 – Example Rake Task that depends on your Rails models:

namespace :third_party_vendor do
  namespace :api do
    desc "Uses the 3rd party vendor API to import data into our internal databases."
    task :data_import => :environment do
      # Since we say that we depend on the :environment, 
      # we now have access to our rails model objects.  For example...
      # eligible_401k_employees = Employees.find(:all,
      #               :conditions => ['effective >= ?', 1.year.ago])
    end
  end
end

If you need to have access to non-rails frozen gems as well, you will want to modify your config/environment.rb to include the following before the Rails::Initializer.run do |config| …

# Load non-Rails frozen gems too..
Dir.glob(File.join(RAILS_ROOT, 'vendor', '*', 'lib')) do |path|
  $LOAD_PATH << path
end

Some people have reported that, in order to get the environment to load correctly for their Rails task, they had to add the following to the top of their Rake task:

require File.join(RAILS_ROOT, 'config', 'environment.rb')

#3 – Modify your cron task so that it executes your Rake task defined in your rails app.

# Automated download/migration process that makes use of the vendor API
# (a single crontab line, wrapped here for readability)
45 18 * * mon,wed,fri source /etc/profile && 
        source /home/jrubyist/.bash_profile && 
        RAILS_ENV=production rake --rakefile 
        /home/jrubyist/deployed-rails-app/Rakefile 
        third_party_vendor:api:data_import

Finishing touches…
That should be enough to get you started. Finally, if you want your background processes not to affect your production application environment, you might consider adding “nice” to the command. nice maps to a kernel call of the same name; for a given process, it changes the priority in the kernel’s scheduler. A niceness of -20 is the highest priority, and 19 is the lowest. You can read more about nice on Wikipedia.

Another useful addition to your Rake task is redirecting stdout to a log file, so you can go back and analyze the log for any errors that occur during execution. Create a file that is writable by the cron user, and then add the redirection to your cron command. The finished product is as follows:

# Automated download/migration process that makes use of the vendor API
# (a single crontab line, wrapped here for readability)
45 18 * * mon,wed,fri source /etc/profile && 
     source /home/jrubyist/.bash_profile && 
     RAILS_ENV=production nice rake --rakefile 
     /home/jrubyist/deployed-rails-app/Rakefile 
     third_party_vendor:api:data_import 
     --trace >> /home/jrubyist/logs/cron/import.log 2>&1

This technique is both useful and pragmatic. Never worry again about running a periodic process. Let the system do the work.

JRuby and SQLite3 Living Together

A few days ago I decided to download Fat Free CRM, an open source Rails-based CRM platform. In order to get it going with SQLite3 under JRuby on Rails, there were a couple of things I needed to do first. This solution can be used for any JRuby/Rails/SQLite3 setup you may have; I only mention this particular application to give some context to the problem.

First of all, you’ll need to install a couple of gems.

sudo jruby -S gem install jdbc-sqlite3
sudo jruby -S gem install activerecord-jdbcsqlite3-adapter

Next, you’ll need to configure your config/database.yml file to use the appropriate driver for sqlite3.

development:
  adapter: jdbcsqlite3
  database: db/development.sqlite3
  timeout: 5000

That should solve any dreaded “no such file to load” errors that you encounter.
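If you want to confirm the adapter outside of Rails, here is a minimal smoke test you can run with jruby check_db.rb (the file name and database path are just examples):

# check_db.rb -- connect to SQLite3 through the JDBC adapter under JRuby
require 'rubygems'
require 'active_record'

ActiveRecord::Base.establish_connection(
  :adapter  => 'jdbcsqlite3',
  :database => 'db/development.sqlite3'
)

# If the adapter and driver gems are wired up correctly, this prints the table list
puts ActiveRecord::Base.connection.tables.inspect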

JRuby Testing for Fun and Profit

If your place of employment is still considering whether to allow languages besides Java and C++ in-house, I believe unit testing can be a great place to start experimenting and having fun. JRuby in particular is a great way to blend the scalability of the JVM with the concise expressiveness of Ruby. If you want to try it out in a test environment before writing, say, a full-blown JRuby on Rails application, I’ll be discussing some JRuby test tools that you can use to boost your TDD experience. Perhaps your boss will come to appreciate the readability of the code and encourage more use of JRuby in the future.

In my last post, I mentioned how you can *measure* the quality of your code. That is a great technique for finding specific trouble spots in your code; you might have duplicated code, you might have an over-complicated algorithm, or you could simply perform the same task with fewer branches and fewer lines of code with a new design.

After you take the time to generate these metrics, you need to be able to make use of the data. Some people get defensive about what the metrics might say about their code. This is a common reaction among junior software developers. I have noticed that people tend to warm up to constructive criticism with more experience. The only way to improve your understanding of programming and to develop yourself as a professional is to understand what you are doing wrong, and then take steps to improve those areas.

Think of these metrics as a set of huge neon arrows pointing at some of the most problematic areas of your code base. Perhaps you as an individual understand some really clever technique in your code, but imagine if you needed to explain it in a code review or make changes to it 6 or 9 months from now. Would you really still understand it? Could it be written better? Are the code metrics knocking on your door to ask you to revise it? If so, it is a good idea to make sure you don’t break anything else when trying to improve that part of the code.

If you and your team have adopted TDD or BDD and have stuck to it on your greenfield project, then you are in great shape. You have some benchmarks in place to measure processing time, you may have clearly defined requirements, and you know when one of your acceptance tests will fail. You probably have a huge set of tests, with 80% code coverage. That’s how the world would work if it were perfect… Wait, this doesn’t match your situation? Unfortunately, this is another point at which teams get defensive. “There wasn’t enough time when we started the project.” “TDD is unrealistic and too dogmatic. I’m more _agile_ and I need to be able to change my code quickly.” (Have you ever noticed the word “agile” is often used incorrectly, and has become a buzzword?) To truly be agile, you need to be able to change your code to adapt to new requirements or make optimizations, and at the same time ensure you are not introducing new bugs in the process.

Whether you and your team are in the first camp (continuous integration with critical code covered) or in the second camp (no tests, just “production” testing)… or somewhere in-between, there are some great tools available to you to improve your testing experience.

Test::Unit
The xUnit family had its beginnings with Smalltalk, and seems to be the most ubiquitous style of testing tool across developers from different generations and languages. It involves creating a set of test suites consisting of test cases. Each test case is a class containing a set of methods beginning with test_, and each test method is supposed to focus on a particular aspect of your code. Here is an example that tests an instance of an Amounts class. The amounts instance acts as an accumulator that can be queried for average “win” (positive) values, average “loss” (negative) values, and totals.

require 'test/unit'

class AmountsTest < Test::Unit::TestCase
  def setup
     @amounts = Amounts.new
  end

  def test_something
     @amounts.clear
     assert_equal( 0, @amounts.total )
     assert_equal( 0, @amounts.average_win )
     assert_equal( 0, @amounts.average_loss )
     assert_equal( 0, @amounts.average )
     assert_equal( 0, @amounts.average { |value| value % 2 == 0 } )
  end

  def test_with_one_negative_amount
     @amounts.clear
     @amounts.add( -5 )
     # perform more assertions...
  end

   def test_with_one_positive_amount
     @amounts.clear
     @amounts.add( 2 )
     # perform more assertions...
   end

   def test_with_multiple_positive_amounts
     @amounts.clear
     @amounts.add( 3 )
     @amounts.add( 2 )
     # perform more assertions...
   end

   def test_with_multiple_negative_amounts
     @amounts.clear
     @amounts.add( -7 )
     @amounts.add( -2 )
     # perform more assertions...
   end

   def test_with_multiple_mixed_positive_and_negative_amounts
     @amounts.clear
     @amounts.add( -8 )
     @amounts.add( 5 )
     @amounts.add( 2 )
     @amounts.add( -1 )
     @amounts.add( 13 )
     # perform more assertions
   end

   # this is beginning to get tedious...
end

Every test method is able to share a tiny setup method, but each test must make sure it is working with an empty version of itself by clearing out the instance before setting up some data in the instance it is testing with. This almost makes the initial setup method useless, as it would be just as easy to simply recreate the instance every time. Also note how it begins to get fairly tedious to write code to clear out the instance and add more values to the @amounts object in each test. You could write helper methods to set up your data, but then you run the risk of losing connascence of location, and the test data could become too separated from the test itself.
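To illustrate: Test::Unit runs setup before every test method, so each test already starts with a fresh instance and can skip the clear call entirely. A minimal sketch, assuming Amounts#total and #average_loss behave as in the later examples:

def setup
  @amounts = Amounts.new   # runs before every test_* method
end

def test_with_one_negative_amount
  @amounts.add( -5 )       # no clear needed; setup already gave us an empty instance
  assert_equal( -5, @amounts.total )
  assert_equal( -5, @amounts.average_loss )
end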

Another thing that is lacking is the elegance of a DSL for testing. There are a lot of steps involved in setting up an object and running static-looking assertions against the results. It does the job of testing for accuracy, but it doesn’t do a great job of telling the user of the API (or just yourself 6 months down the road) why it behaves the way it does. If you agree, you are not alone. Many other developers have noticed this shortfall and have longed for something better.

Along came RSpec, a gem that allows Ruby/JRuby developers to use a DSL for describing examples of the expected behavior of a domain object. You can get started with it by running:

gem install rspec

The documentation for RSpec includes a basic example of a scenario between you and a customer, and how that could be translated into a DSL-based test with RSpec:

#You: Describe an account when it is first created.
#Customer: It should have a balance of $0.

describe Account, "when first created" do
  before(:all) do
      @account = Account.new(:balance=>0)
  end

  before(:each) do
      @account.balance = 0
  end

  it "should have a balance of $0" do
     @account.balance.should == 0
  end

  # ...
end

RSpec also introduces what appears to be its own syntax for testing, which is a somewhat controversial idea in the testing world. Do you really want to confuse other developers by using an unfamiliar set of test functions, when they already have enough to worry about while learning your codebase?

When RSpec sees ‘be_’ in a matcher (after ‘should’), it looks for a method whose name follows ‘be_’ and ends with a ‘?’. For example, ‘be_lower_case’ makes RSpec look for a method called ‘lower_case?’ and calls it, as in the sketch below. That is clever, but I actually find that syntactic sugar to be a little distracting. Clever tricks like this can make your tests read more like human language, but ask yourself this: do you want to test your API, or do you want to test the code that is testing your code? When your test breaks as a result of a code change, it is better to spend the time on the code under test, not worrying that your test might be the thing that is broken.
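Here is a minimal sketch of that convention (the Token class and its lower_case? predicate are made up for illustration; run it with the spec or rspec command):

class Token
  def initialize(text)
    @text = text
  end

  # RSpec's be_lower_case matcher ends up calling this predicate
  def lower_case?
    @text == @text.downcase
  end
end

describe Token, "built from a lower-case string" do
  it "should be lower case" do
    Token.new("jruby").should be_lower_case
  end
end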

Don’t get me wrong; RSpec is a great tool and gets us closer to BDD, but I think it tries to go one step too far with the matchers. Domain experts don’t speak in ‘plain English’, as the purist BDD evangelists would like. As PragDave explained, they speak jargon, a specialized vocabulary used by industry experts to communicate in whatever language they know. It seems a bit strange to attempt writing tests in almost_but_not_quite_english directly_in the(code). It makes me think too much about the language of testing rather than the code I am testing.

So is there a happy medium? I think there is. On my quest to find a great set of testing tools, I came across Thoughtbot’s Shoulda. Setting aside its nice helper methods for integrating with Rails and ActiveRecord, I simply wanted to use Shoulda for creating extremely readable (and easily writable) unit tests in a DSL-style format. Shoulda consists of test macros, assertions, and helpers added on to the Test::Unit framework. It’s fully compatible with your existing tests, and requires no retooling to use.

sudo gem install thoughtbot-shoulda --source=http://gems.github.com

One of the really cool (and highly pragmatic) features of Shoulda is its use of nested contexts. To quote (Prag)Dave Thomas, “the outer setup gets run before the execution of each of the inner contexts. And the setup in the inner contexts gets run when running that context. And shoulda keeps track of it all, so I get very natural error messages if an assertion fails.”

class UserTest < Test::Unit::TestCase
    context "A User instance" do
      setup do
        @user = User.find(:first)
      end

      should "return its full name" do
        assert_equal 'John Doe', @user.full_name
      end

      context "with a profile" do
        setup do
          @user.profile = Profile.find(:first)
        end

        should "return true when sent #has_profile?" do
          assert @user.has_profile?
        end
      end
    end
end

Produces the following test methods:

“test: A User instance should return its full name.”
“test: A User instance with a profile should return true when sent #has_profile?.”

The example above was taken straight from the Shoulda website as a quick, clear illustration of what is going on. I find that Shoulda’s readability and simplicity blend well with how my mind works, but really, the choice is yours. The fact that it also blends seamlessly with test/unit is a huge plus. You can combine it with regular test cases if you’d like (but why would you, other than to stay backwards compatible?).

Conclusion
In the end, what seems to work really well for me and my team is a best-of-all-worlds approach. Test/Unit is a very recognizable test framework that developers with diverse language backgrounds are familiar with, so it seems like a natural starting point. Plenty of tools are available to automate running those tests in a continuous integration environment and to report the passed/failed results visually. I should note that RSpec also has ways of hooking into automated tools, but it does require a (very small) additional step, usually a Rake task to hook into.

I love the way Shoulda works seamlessly with test/unit, and makes test cases more readable and understandable. A killer feature of Shoulda is the concept of nested contexts, which keeps your tests extremely DRY, and allows you to read the contexts almost as if they were full sentences that describe an entire use case of your application.
Finally, I really like having the ability to use the “should” method from RSpec, but only in a way that keeps the test code extremely readable (and writable). In the end, whatever frameworks and libraries you choose to use (and choose not to use) will help you improve your code style, your coding accuracy, and your ability to communicate the intention of your code among other individuals.

Remember, every test you spend time writing today will save you time in the future. Each test you write is like a little guardian, protecting you from mistakes made by others, and even mistakes made by yourself. The initial work is definitely worth the effort involved. If you are looking to gain respect from your peers in your profession, you owe it to yourself and those around you to treat your profession with respect and hone your craft by your pursuit of continuous improvement of your code and of yourself.

Full Example Mixing Test/Unit, RSpec and Shoulda

class AmountsTest < Test::Unit::TestCase
  # instance_methods "<<", "average", "average_loss", "average_win", "description", "size", "success_rate", "total"
  context "An Amounts instance" do
    setup do
      @description = "B"
      @amounts = Prices::Amounts.new(@description)
    end

    should "describe itself" do
      @amounts.description.should == @description
    end

    should "have a total of 0 when first created." do
      @amounts.total.should == 0
    end

    should "have a size of 0 when first created." do
      @amounts.size.should == 0
    end

    context " after adding an amount that is positive " do
      setup do
        @single_amount = 4
        @amounts << @single_amount
      end

      should " have a total equal to the single amount." do
        @amounts.total.should == @single_amount
      end

      should " have an average win equal to the single amount." do
        @amounts.average_win.should == @single_amount
      end

      should " have an average loss equal to 0." do
        @amounts.average_loss.should == 0
      end

      should " have a success rate of 100% " do
        @amounts.success_rate.should == 100
      end

      should "have a size of 1." do
        @amounts.size.should == 1
      end

      should "have a total of single_amount." do
        @amounts.total.should == @single_amount
      end

      context " and adding another amount that is negative " do
        setup do
          @another_amount = -3
          @amounts << @another_amount
        end

        should " not have a total equal to the other amount. " do
          @amounts.total.should_not == @another_amount
        end

        should " have a total equal to the single_amount and another_amount. " do
          @amounts.total.should == (@single_amount + @another_amount)
        end

        should " have an average_win of the first single_amount " do
          @amounts.average_win.should == @single_amount
        end

        should " have an average_loss of another_amount " do
          @amounts.average_loss.should == @another_amount
        end

        should " have a success rate of 50%" do
          @amounts.success_rate.should == 50
        end

        should " have an average( any_not_nil ) of (single_amount + another_amount)/2.0" do
          @amounts.average { |any_not_nil| any_not_nil}.should == 
                                        ( (@single_amount + @another_amount) / 2.0 )
        end

        should "have a size of 2." do
          @amounts.size.should == 2
        end

      end
    end
  end
end

JRuby Code Quality

Lots of people think that they are the best coders in the world, or that there is absolutely nothing to improve in the codebase of a particular gem, plugin or library. Until you put your code to the test, you have no metrics on which to base that claim. Smarticus just pointed me at a set of tools I can use to put my money where my mouth is, or at least to bring us all back down to reality. We all have code that is stellar, but portions of that code might just be facades over ugly procedural spaghetti.

In order to get your code back into shape, you might need help from a small set of tools to point you in the right direction. This is especially true in medium to large codebases, where wandering around aimlessly in the code sniffing out code smells might lead to a wasted effort. Time is a precious commodity, so you should want to spend it as efficiently as possible. Not to mention, it will make the process a lot more fun.

Three tools I find very useful are flog, flay and roodi.

flog (Ruby Sadist)
Flog shows you the worst, most painful code to test, read or maintain.
You’ll get a higher score the more painful your code is to look at.

In order to run flog on all your code, try this:

find lib -name \*.rb | xargs flog

You might be amazed at what you find. Bryan Liles (smartic.us) recently mentioned at a conference that nobody gets 0-10, and most of his team’s code is 10-20. Code scoring 20-50 should be refactored, and anything above 50 should be rewritten. Lucky for me, mine started out at 9.9, but I still had some improvements to make, and I’ve been able to improve it to 9.3. Normally I don’t brag, but Bryan kind of antagonized me. 😉  This was also run on a complex codebase, where the goal of the project is to come up with auto-generated trades based on analyzing up to 30 years of stock history. Not a simple problem to solve by any means, but rewarding when it works and you know it is designed correctly.
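Flog reports a score per method, so the quickest way to bring a hot spot down is usually to extract pieces into smaller, well-named methods. A hypothetical before-and-after (the method and data are made up):

# Before: one method that parses, filters and totals shows up near the top of the flog report
def summarize(raw_lines)
  amounts = raw_lines.map { |line| line.split(',').last.to_f }
  wins    = amounts.select { |a| a > 0 }
  losses  = amounts.select { |a| a < 0 }
  { :total => amounts.inject(0) { |sum, a| sum + a }, :wins => wins.size, :losses => losses.size }
end

# After: the extracted helper carries its own smaller share of the assignments, branches and calls
def parse_amounts(raw_lines)
  raw_lines.map { |line| line.split(',').last.to_f }
end

def summarize(raw_lines)
  amounts = parse_amounts(raw_lines)
  { :total  => amounts.inject(0) { |sum, a| sum + a },
    :wins   => amounts.select { |a| a > 0 }.size,
    :losses => amounts.select { |a| a < 0 }.size }
end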

flay
Flay does a great job of analyzing ruby code for structural similarities.
If you’re on board with the Ruby way, you’ll want to keep your code DRY.
Flay will report back to you a set of code that is a good candidate for refactoring.

 sudo gem install flay
 flay lib/*.rb
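As a hypothetical example of what flay flags, the two methods below share an identical structure even though the names and comparisons differ, which is exactly the kind of similarity flay reports:

# Near-duplicates: same shape, different literals -- a prime candidate for extraction
def average_win(amounts)
  wins = amounts.select { |a| a > 0 }
  return 0 if wins.empty?
  wins.inject(0) { |sum, a| sum + a } / wins.size.to_f
end

def average_loss(amounts)
  losses = amounts.select { |a| a < 0 }
  return 0 if losses.empty?
  losses.inject(0) { |sum, a| sum + a } / losses.size.to_f
end

# One DRY version: the shared structure lives in one place, and the filter is passed in
def average_of(amounts, &filter)
  subset = amounts.select(&filter)
  return 0 if subset.empty?
  subset.inject(0) { |sum, a| sum + a } / subset.size.to_f
end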

roodi
Roodi measures cyclomatic complexity (CC = E - N + P), where:

E is the number of edges in the method's control-flow graph,
N is the number of nodes, and
P is the number of connected components.

In practice, each conditional decision point in a method is counted and one more is added for the method's entry point, resulting in a measurement of the method's complexity.
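As a quick worked example (the method below is made up), the loop and the two explicit conditions each contribute a decision point; adding one for the entry point gives a cyclomatic complexity of 4:

# Cyclomatic complexity = decision points + 1
def categorize(amounts)
  labels = []
  amounts.each do |amount|      # decision point 1: the loop
    if amount > 0               # decision point 2
      labels << :win
    elsif amount < 0            # decision point 3
      labels << :loss
    else                        # the else branch adds no new decision point
      labels << :flat
    end
  end
  labels
end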

Conclusion
The art and science of code metrics can be a very useful way to keep your code organized, simple to follow and DRY. It is important to remember that code metrics are not a goal in themselves; they are an aspiration. If you have fantastic metrics but the code does nothing useful, you just have a big ball of code that nobody is interested in. I would recommend staying pragmatic and using the results of your metrics as a guide on your journey.

Handy JRuby / JMX MBean Info

I was tired of using the jmx-console and clicking through multiple pages, so
I created a little wrapper around the JMX connector to obtain server info data for my app servers…

http://appserver:8080/systems?host=my-box&port=12345 will report back the top 15 threads by CPU utilization and a full stack-trace dump of the running threads.
I created this so that I can get all the info back in one shot using a JRuby JMX gem; it is much nicer than jumping through all the hoops of clicking around the web console.

I plan on adding more nice little things to get a cool little dashboard with minimal effort.

class SystemsController < ApplicationController
  def show 
    system = System.new(params[:host],params[:port])
    render :text => system.query_state
  end
end

# The JMX::MBean API used below is provided by a JRuby JMX client gem
# (assumed here to be jmx4r: jruby -S gem install jmx4r)
require 'jmx4r'

class System
   def initialize(host, port)
      @host, @port = host, port
   end

   def query_state
      JMX::MBean.establish_connection(:host=>@host, :port=>@port)
      info = JMX::MBean.find_by_name "jboss.system:type=ServerInfo"
      strings = info.list_thread_cpu_utilization
      report = "Top 15 Thread Utilization "
      report << "#{strings.split(" ")[0..16].join(" ")}"
      report << "Thread Dump #{info.list_thread_dump}"
   end
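For completeness, here is a minimal sketch of how the /systems?host=...&port=... URL could be wired up, assuming a Rails 2.x-style routes file:

# config/routes.rb
ActionController::Routing::Routes.draw do |map|
  # GET /systems?host=my-box&port=12345 => SystemsController#show
  map.connect 'systems', :controller => 'systems', :action => 'show'
end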
end