Automating Octopress Deployment on Heroku

Octopress is a framework for Jekyll, the static blogging engine that powers GitHub Pages. It also makes it easier to deploy to other platforms, such as conventional web hosting or Heroku. Heroku deployment requires the least effort: just check your Octopress blog repository in to Heroku and you're good to go.

This Heroku deployment strategy comes with a few constraints:

  • It is harder to get contributions to your blog repository from other hackers.
  • Every small change requires you to push code to Heroku's Git remote.
  • It is harder to track down what changed when a build breaks, which happens whenever you try out a new theme or modify the current config.
  • It is preferable to associate each piece of infrastructure with a single responsibility, but here a single monolithic repository on Heroku serves as both the blog-generating engine and the blog itself.

Here are my solutions to these constraints:

  • To make contributing easier, keep the blog's source code on GitHub or Bitbucket. These services equip you with a project wiki, an issue tracker and a version control system (Git or Mercurial); with the source open to other hackers, they can easily contribute to your blog through pull requests.
  • I prefer git-flow to track changes and add new features to the project. It helps with creating feature branches and cutting releases using Git tags.
  • Keep separate staging and production environments, so that if a build breaks it only affects staging. I do this by syncing the GitHub repository's 'develop' branch with a Heroku 'staging' environment and its 'master' branch with the Heroku 'production' environment.
  • A single responsibility per repository can be achieved by keeping the blog's source code on GitHub and only the generated static pages, plus essential config, in Heroku's Git.


For these solutions to work you need to make some modifications to Octopress's default Rakefile.

Aside from that, you have to create a barebones Heroku application. To keep things simple, we are only going to focus on the Heroku production environment, i.e. no staging setup.


Check in a placeholder index page and a 404 error page to Heroku's Git with:

rake setup_heroku[]


Finally, deployment is as easy as:

rake deploy

It generates the static pages and commits the changes to Heroku's Git.
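The Rakefile additions might look roughly like this. This is a sketch only: the heroku_dir path and the task bodies are my assumptions, not stock Octopress code, and your own modifications may differ.

```ruby
# Hypothetical additions to the Octopress Rakefile (names assumed).
heroku_dir = "_heroku"   # local clone of the Heroku app's Git repository

desc "Check in placeholder index and 404 pages to Heroku's Git"
task :setup_heroku do
  cd heroku_dir do
    mkdir_p "public"
    File.write("public/index.html", "<h1>Coming soon</h1>")
    File.write("public/404.html", "<h1>Page not found</h1>")
    sh "git add -A && git commit -m 'Placeholder pages' && git push heroku master"
  end
end

desc "Generate the static pages and push them to Heroku"
task :deploy => [:generate] do
  # Copy the freshly generated site into the Heroku clone, then push.
  cp_r "public/.", File.join(heroku_dir, "public")
  cd heroku_dir do
    sh "git add -A && git commit -m 'Site updated' && git push heroku master"
  end
end
```

The key point is the split: the GitHub repository holds the blog source, while the Heroku clone only ever receives generated output plus the essential config.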



Ranjeet is an engineer at C42 Engineering. If you liked this post, please consider...


Making contributing to Ruby easier with ruby-build

When contributing to Ruby, building, installing and comparing your own version against the latest stable build can become a pain point. Rbenv’s ruby-build plugin makes this dead simple.

ruby-build has a definition script for each Ruby version in its repository. This collection of scripts is what is shown when you execute rbenv install -l, and the corresponding script is executed by rbenv install <version-name>. So we just have to add a script that builds our local Ruby.

The steps involved in building Ruby from source may differ across Ruby implementations (MRI, JRuby, Rubinius, etc.). Roughly speaking, the steps are:

  1. Download the source
  2. Set the flags
  3. Run autoconf
  4. Install OpenSSL
  5. Actual build
  6. Verify OpenSSL installation

To get ruby-build to build the local Ruby source, step 1 will have to be changed.

I added functions install_local and fetch_local to the ruby-build script. install_local passes the right set of arguments to the build script. fetch_local copies the source to the temp folder.

I also created a file called local in the share/ruby-build folder inside the ruby-build parent folder. This file invokes the functions necessary to build our source.

Depending on the Ruby implementation, the parameter standard in the 2nd line of the script might change to jruby or maglev. Similarly, the autoconf option might also not be required.

And we are set. The existing functions in the ruby-build script take care of building the code for us. You can see our custom script in the list of options when you execute rbenv install -l.

Shishir is an engineer at C42 Engineering. If you liked this post, please consider...


Outsourcing engineering: Unfunded Pre-MVP Products

In this series of posts, I will talk about some of the common gotchas in outsourced product development for products at different stages of maturity.

Today's post discusses unfunded, pre-MVP products.

At this stage, the entrepreneur typically has a set of hypotheses based on some research that now need to be validated on top of working software.

Common Constraints

  • Limited budget
  • Lack of data on customer behaviour and segmentation
  • Lack of one or more strong customer acquisition channels


Scope creep and delayed launches are typical problems that must be guarded against, especially since most such products have limited budgets. The absence of reliable data about user behaviour makes most feature decisions uncertain.

As a consequence, making plans or commitments to clients, partners or investors at this stage is unwise. If commitments are necessary, they should be discussed with the engineering team beforehand to establish feasibility.

Often, the entrepreneur seeks to reduce risk by going with a fixed-bid model instead of time-and-materials. With the level of uncertainty inherent in most pre-MVP products, this can create a destructive feedback loop: any failure reduces the vendor's margin, which forces them to cut corners, and cutting corners increases the number of failures. This takes the product into a vicious downward spiral, often destroying the client-vendor relationship in the process.


Typically, we see the MVP for a product that consists of simple business workflows and involves no complex algorithms or computer science research taking 8 to 12 weeks.

The focus is on reliably going to market within the budget available so that actual customer and product behaviour can start to be measured.

Speculative and/or expensive features are better deferred until there is data to support their value.


Engineers should be dedicated to a single MVP project at a time. If your outsourced team rotates engineers on and off teams frequently, the quality of the codebase is at risk. This is especially true of junior engineers.


The vanilla XP process with one week iterations has served us well as a starting point.

Effective communication between the product manager and the engineering team is critical in this stage. As the needs of the product are not always clear yet, having the engineering team empathize with the high-level goals of the product is essential. Distributed teams especially should find a common time to talk at least once a day.

Defects of the we-didn't-understand-this-requirement-correctly or this-requirement-is-incomplete variety will be common. The product manager should expect to spend a significant amount of time addressing these issues with the engineering team. Iterate on how requirements are communicated until these defects decrease.

A trend of defects of the it-used-to-work-but-is-now-broken variety in a codebase this young is a strong indicator that the engineering team has poor quality control. Keep a careful eye on such a trend, because with a timeline of 8 to 12 weeks it is very easy to go significantly over budget due to poor quality.

Sidu is a Partner at C42 Engineering. If you liked this post, please consider...


A Better Way to Break Backward Compatibility

I have been using third-party libraries and platforms for more than a decade now. I have maintained 10-year-old Java systems at one end and upgraded Rails 2 codebases to Rails 4 at the other. After all this, one can't help but appreciate the convenience of backward compatibility. But nonetheless, there are times when breaking backward compatibility is the right thing to do.

The pros and cons of backward compatibility itself have been discussed innumerable times and developers will keep arguing over it for the foreseeable future. But there is something more important that is seldom discussed. If you have to break backward compatibility, what's the best way to do so?

One of the best examples I have seen of this is RSpec's version change from 2.x to 3.x. Because of the overheads and bugs introduced by RSpec's magical syntax, the core team decided to move to something more explicit. This meant a fundamental change in how every test would be written. There were structural changes to core concepts, like the behavior of the described_class method. There were syntax changes to most of the matchers. In RSpec 2.x, methods like let or subject could be used in after(:all), but no longer in RSpec 3.x. Developers would have been forced to change more than 70% of their test codebase to upgrade to 3.x.
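To see why the old syntax was "magical", here is a toy illustration. These are simplified stand-ins, not RSpec itself: they only sketch why the 2.x should syntax requires monkey-patching every object while the 3.x expect syntax does not.

```ruby
# 2.x style: foo.should == bar works only because a #should method is
# patched onto every object in the system.
class Object
  def should
    self   # the real RSpec returns a proxy that applies matchers
  end
end

# 3.x style: expect(foo).to eq(bar) wraps the value explicitly and
# leaves Object untouched.
ExpectationTarget = Struct.new(:actual) do
  def to(expected)
    raise "expected #{expected.inspect}, got #{actual.inspect}" unless actual == expected
    true
  end
end

def expect(actual)
  ExpectationTarget.new(actual)
end

def eq(value)
  value
end

[].size.should == 0        # old syntax: reads nicely, pollutes Object
expect([].size).to eq(0)   # new syntax: explicit wrapper, no pollution
```

The explicit wrapper is what RSpec 3 standardised on, precisely because patching Object carries overhead and edge-case bugs.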

The RSpec team made this mammoth task substantially easier by following these principles:

1. Make it easy to find & fix required code changes

The deprecation warning is certainly not a new concept, but few libraries use it well. The crux of a smooth upgrade is planning.

The RSpec team started pushing out deprecation warnings 8 months before the first 3.x version was out. 8 months might seem insufficient to someone from an Enterprise Java background, but that's respectable in the Ruby ecosystem.

Every warning came with a clear solution which could be applied verbatim to fix that particular deprecation. There was also a rich feature set for managing deprecation warnings. By default, test runs wouldn't show you duplicate warnings, so the warnings didn't clutter your console while you worked; they simply told you which syntax would be discontinued in future releases. There was also a flag for exporting all deprecation warnings to an external file, so if you wanted, you could fix one warning after another by working through the exported file.
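For instance, RSpec's documented deprecation settings let you route warnings to a file (this sketch assumes the rspec gem is installed; the log filename is arbitrary):

```ruby
RSpec.configure do |config|
  # Send all deprecation warnings to a file instead of the console.
  config.deprecation_stream = File.open("deprecations.log", "w")
end
```

The same thing is available on the command line as rspec --deprecation-out deprecations.log.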

The focus was on helping developers identify and fix deprecations without their day-to-day work being hampered by the sheer number of warnings.

2. Provide tools to automate grunt work

Around the same time, Transpec was introduced: a dedicated gem to help developers upgrade from the 2.x syntax to 3.x.

It wasn't a regex-matching tool that would mindlessly replace every should with expect. It used static and dynamic code analysis to provide an accurate conversion; the tool understood the difference between a mock variable declared in a test and a mock method. In the absence of this tool, developers would have spent days changing their specs line by line.

The tool also respected developers' coding styles while doing these conversions, making sure that indentation, line breaks and other punctuation were not changed. Developers loved it for this.

3. Make it easy to maintain the code changes

Just because you have fixed all the deprecation warnings in your codebase doesn't mean you are ready to upgrade. Maybe the major version is not yet released, or you want to wait a few months to see performance metrics from the real world.

It's easy to slip back into old syntax and practices in this period if you are not careful. RSpec helped teams by providing a configuration option that treats deprecation warnings as errors. This makes sure you don't slip on the syntax required for RSpec 3.x.
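That configuration option is raise_errors_for_deprecations!, a documented RSpec setting (this sketch assumes the rspec gem):

```ruby
RSpec.configure do |config|
  # Turn every deprecation warning into an error, so deprecated 2.x
  # syntax fails the build instead of quietly printing a warning.
  config.raise_errors_for_deprecations!
end
```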

To summarize, people object to upgrades when the cost of an upgrade surpasses its benefits. A well-thought-through plan and helpful tooling can substantially reduce this cost. With more and more authors embracing this kind of change management, hopefully we will see more people embracing the changes themselves.

Aakash is a Partner at C42 Engineering. If you liked this post, please consider...


Removing an Included Module from a Class in Ruby

Q - What happens when a module is included in a class?
A - As per the documentation, Module#append_features is called by Module#include which in turn adds the constants, methods and module variables.

Q - So how can the members of a module be removed from a class?
A - There appear to be two options: Module#undef_method or Module#remove_method. Module#undef_method prevents the class from responding to calls to the method, while Module#remove_method completely removes the method from the class. What this means is that Module#undef_method will cause a class to throw NoMethodError even if the method is defined in a superclass, whereas Module#remove_method just removes the method from the class, leaving superclass methods intact.

Module#remove_method looks like a probable solution. So let's set up a test with a module called Print that gets input and puts output.

Let's create a class Hello that uses this module to get the name of the user and greet them.
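The original embedded snippets are no longer shown here; a minimal reconstruction from the description might look like this (method names beyond Print and Hello are assumptions):

```ruby
# A module that reads input and writes output.
module Print
  def get_input
    gets.chomp
  end

  def put_output(text)
    puts text
  end
end

# A class that uses the module to get the user's name and greet them.
class Hello
  include Print

  def greet
    put_output("What is your name?")
    put_output("Hello, #{get_input}!")
  end
end
```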

Let's add an exclude method to Module.
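The exclude implementation itself is no longer embedded here; a sketch matching the description, using remove_method as the first attempt, might be:

```ruby
class Module
  # First attempt: strip every instance method the given module
  # contributed, using remove_method.
  def exclude(mod)
    mod.instance_methods.each { |method_name| remove_method(method_name) }
  end
end
```

As the rest of the post shows, calling Hello.exclude(Print) with this version raises a NameError, because get_input and put_output are not defined directly in Hello.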

And finally, the actual test. When it is executed, the console output shows exclude failing: remove_method raises a NameError, because the methods we are trying to remove are not defined directly in the Hello class.

Q - So Module#include is not literally including the methods of the Print module in the Hello class?
A - Right, and Ruby is doing something else that makes it seem as if it were. Thankfully, Ruby is open source, and the module-inclusion function in MRI's class.c will answer our questions.

A cursory look at this function tells us that the module is inserted into the class's ancestor hierarchy. This can be verified by printing Hello.ancestors.
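A self-contained illustration, with minimal empty definitions for brevity:

```ruby
module Print; end

class Hello
  include Print
end

# Print sits in the ancestor chain between Hello and Object.
p Hello.ancestors   # => [Hello, Print, Object, Kernel, BasicObject]
```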

Q - Does Ruby provide a way to alter the class hierarchy?
A - No. So we'll have to make do with Module#undef_method instead of Module#remove_method in our implementation of Module#exclude. After swapping out remove_method for undef_method, the exclude method works.
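A working sketch of exclude using undef_method (a reconstruction, not the original snippet):

```ruby
class Module
  # undef_method inserts an "undefined" marker in the class itself, so
  # method lookup stops before reaching the module in the ancestor chain.
  def exclude(mod)
    mod.instance_methods.each { |method_name| undef_method(method_name) }
  end
end
```

After Hello.exclude(Print), calling get_input or put_output on a Hello instance raises NoMethodError, even though Print still appears in Hello.ancestors.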

Oh, and if someone is interested in building an Object#unextend: Object#extend in turn includes the module in the object's singleton class, so the procedure should be almost the same as for Module#exclude.

Shishir is an engineer at C42 Engineering. If you liked this post, please consider...