Tech Resume Writing 101 for Indian Grads

I've been working my way through hundreds of fresh grad resumes these last few months.

Sadly, most resumes I see actually hurt the candidate rather than help, which defeats the point of the resume and makes my life harder.

So here's some advice for fresh grads around resume building, especially if you're applying to a tech firm that actually cares about tech.

Your resume is an advertisement

Imagine your job is to go through dozens of ads every day. The ads are for different products in the same space, and you need to identify which of those products you're going to buy based on the ads.

This is what recruiters in every company do day after day, except your resume is that advertisement.

Don't bore them with a shitty ad identical in content to every other shitty ad they see.

Conformity is the enemy

The objective of your resume is to help you stand out from the crowd.

Trying to stand out by conforming to what your classmates or friends are doing is simply... stupid.

As I've said before, "Doing what most do leads to an average case outcome".

Your 'Objective' is bullshit

An objective section seems to be standard in most resumes, and is often extremely off-putting. This is awful given that it is the first thing about you that the recruiter sees.

Here's a typical objective:

To contribute my skills sets to the organization to achieve the goals and targets that enhance my professional and personal growth.

Mmm. Ok. You and everybody else.

A content-free objective section tells a recruiter that you lack an objective.

Please take the time to carefully think this through - it's the first thing in your resume and sets the tone for everything that follows. I would go so far as to recommend writing a fresh, carefully crafted objective for each company you're applying to that speaks to that company's needs.

Academic performance

This section is often the only useful section in the resumes of most candidates.

Don't mess with this.

Prior art matters

This should be obvious: If you're applying to a tech startup, showcase your skills. Make sure you put all the code you've written front and centre.

Open source contributions, hobby projects and anything else that proves your ability to code result in an instant interview with us at C42, irrespective of academic performance.

Finally, this is 2015. A strong online presence is de rigueur for a tech resume. You should be active on Github, StackOverflow, Open Source User Groups, TopCoder and so on.

Extra curricular

I just went through ~20 resumes from a batch where everyone had exactly the same extra curricular activities.

Stop copy-pasting. It is hurting you.

Extra curricular activities help build up a more complete picture of you, so they are useful, but please put successes at the top and give them attention.

Please also try to avoid citing achievements from your fourth standard painting contest.

The achievements must be significant, recent or (ideally) both.

If you've succeeded in tech related activities, please highlight these.


If your goal is to get a prospective employer's attention through your resume, don't do it by making your resume look like everyone else's.

This doesn't mean changing what colours you use. It means understanding that you're making an advertisement - a tasteful, understated one, but an advertisement nonetheless.

Sidu is a Partner at C42 Engineering. If you liked this post, please consider...


Automating Octopress Deployment on Heroku

Octopress is a blogging framework built on Jekyll, the static site engine powering Github Pages. It also makes it easy to deploy to other platforms, such as regular web hosting or Heroku. Heroku deployment requires the least effort: just check in your Octopress blog repository on Heroku and you're good.

This Heroku deployment strategy comes with a few constraints:

  • It's harder to get contributions to your blog repository from other hackers.
  • Every small change requires you to push code to Heroku's Git.
  • It's harder to track down changes when a build breaks, which happens whenever you try out a new theme or modify the current config.
  • It's preferable to associate a single piece of infrastructure with only one responsibility, but with Heroku a single monolithic repository serves as both the blog-generating engine and the blog itself.

Here are my solutions to these constraints:

  • To make contributing easier, keep the blog's source code on Github or Bitbucket. These services equip you with a project wiki, an issue tracker and a version control system like Git or Mercurial; with the source open to other hackers, they can easily contribute to your blog by creating pull requests.
  • I prefer git-flow for tracking changes and adding new features to the project, since it helps with creating feature branches and cutting releases using git tags.
  • It is preferable to keep separate staging and production environments, so that if a build breaks it only affects staging. I do this by syncing the Github repository's 'develop' branch with a Heroku 'staging' environment and the 'master' branch with a Heroku 'production' environment.
  • A single responsibility per piece of infrastructure can be achieved by keeping the blog's source code on Github and only the generated static pages with essential config on Heroku's Git.


For these solutions to work, you need to make some modifications to the default Octopress Rakefile.

Aside from that, you have to create a barebones Heroku application. To keep things simple, we'll focus only on the Heroku production environment, i.e. no setup for staging.


Check in a placeholder index and 404 error page to Heroku's Git with:

rake setup_heroku[]


Finally, deployment is easy with:

rake deploy

It generates the static pages and commits the changes to Heroku's Git.
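The modified deploy task might look something like this. This is a sketch of build configuration, not the exact change from this post: the task name, the `public` output directory and the `heroku` remote name are assumptions about a typical Octopress setup.

```ruby
# Rakefile (sketch) — hypothetical modification of the default Octopress Rakefile.
require 'rake'

task :generate do
  # Placeholder for Octopress's real page-generation task,
  # which writes the static site into public/.
end

desc "Generate static pages and push them to Heroku's Git"
task :deploy => [:generate] do
  Dir.chdir("public") do
    sh "git add -A"
    sh "git commit -m 'Site updated: #{Time.now.utc}'"
    sh "git push heroku master"   # assumed remote name for the production app
  end
end
```

The key idea is that only the contents of `public/` (plus essential config) ever reach Heroku; the blog's source stays on Github.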



Ranjeet is an engineer at C42 Engineering.


Making contributing to Ruby easier with ruby-build

When contributing to Ruby, building, installing and comparing your own version against the latest stable build can become a pain point. Rbenv’s ruby-build plugin makes this dead simple.

ruby-build works by keeping a script corresponding to each Ruby version in the ruby-build repo. This collection of scripts is what is shown when you execute rbenv install -l, and the corresponding script is executed by rbenv install <version-name>. So we just have to add a script that builds our local Ruby.

The steps involved in building the Ruby source may differ across Ruby implementations (MRI, JRuby, Rubinius, etc). Roughly speaking, the steps are:

  1. Download the source
  2. Set the flags
  3. Run autoconf
  4. Install OpenSSL
  5. Actual build
  6. Verify OpenSSL installation

To get ruby-build to build the local Ruby source, step 1 will have to be changed.

I added functions install_local and fetch_local to the ruby-build script. install_local passes the right set of arguments to the build script. fetch_local copies the source to the temp folder.

I also created a file called local in the /share/ruby-build folder inside the ruby-build parent folder. This file invokes the functions necessary to build our source.

Depending on the Ruby implementation, the parameter standard in the 2nd line of the script might change to jruby or maglev. Similarly, the autoconf option might also not be required.

And we are set. The existing functions in the ruby-build script take care of building the code for us. You can see our custom script in the list of options when you execute rbenv install -l.

Shishir is an engineer at C42 Engineering.


Outsourcing engineering: Unfunded Pre-MVP Products

In this series of posts, I will talk about some of the common gotchas in outsourced product development for products at different stages of maturity.

Today's post discusses unfunded, pre-MVP products.

At this stage, the entrepreneur typically has a set of hypotheses based on some research that now need to be validated on top of working software.

Common Constraints

  • Limited budget
  • Lack of data on customer behaviour and segmentation
  • Lack of one or more strong customer acquisition channels


Scope creep and delayed launches are typical problems that must be guarded against, especially since most such products have limited budgets. The absence of reliable data about user behaviour makes most feature decisions uncertain.

As a consequence, making plans or commitments to clients, partners or investors at this stage is unwise. If commitments are necessary, they should be discussed with the engineering team beforehand for feasibility.

Often, the entrepreneur seeks to reduce risk by going with a fixed-bid model instead of time-and-materials. With the level of uncertainty inherent in most pre-MVP products, this can create a negative feedback loop. Any failure reduces the vendor's margin, which in turn forces them to cut corners. Cutting corners increases the number of failures. This takes the product into a vicious downward spiral, often destroying the client-vendor relationship in the process.


Typically, we see the MVP for a product that consists of simple business workflows and involves no complex algorithms or computer science research taking 8 to 12 weeks.

The focus is on reliably going to market within the budget available so that actual customer and product behaviour can start to be measured.

Speculative and/or expensive features are better deferred until there is data to support their value.


Engineers should be dedicated to a single MVP project at a time. If your outsourced team rotates engineers on and off teams frequently, the quality of the codebase is at risk. This is especially true of junior engineers.


The vanilla XP process with one week iterations has served us well as a starting point.

Effective communication between the product manager and the engineering team is critical in this stage. As the needs of the product are not always clear at this point, having the engineering team empathize with the high level goals of the product is essential. Distributed teams especially should find a common time to talk at least once a day.

Defects of the we-didn't-understand-this-requirement-correctly or this-requirement-is-incomplete variety will be common. The product manager should expect to spend a significant amount of time addressing these issues with the engineering team. Iterate over restructuring how requirements are communicated until these defects decrease.

A trend of defects of the it-used-to-work-but-is-now-broken variety in a codebase this young is a strong indicator that the engineering team has poor quality control. Keep a careful eye on such a trend, because with a timeline of 8 to 12 weeks it's very easy to go significantly over budget due to poor quality.

Sidu is a Partner at C42 Engineering.


A Better Way to Break Backward Compatibility

I have been using third party libraries and platforms for more than a decade now. I have maintained 10-year-old Java systems at one end and upgraded Rails 2 codebases to Rails 4 on the other. After all this, one can't help but appreciate the convenience of backward compatibility. But nonetheless, there are times when breaking backward compatibility is the right thing to do.

The pros and cons of backward compatibility itself have been discussed innumerable times and developers will keep arguing over it for the foreseeable future. But there is something more important that is seldom discussed. If you have to break backward compatibility, what's the best way to do so?

One of the best examples I have seen of this is RSpec's version change from 2.x to 3.x. Due to the overheads and bugs added by RSpec's magical syntax, the core team decided to move to something more explicit. This meant a fundamental change in how every test would be written. There were structural changes to core concepts like the described_class method's behavior. There were syntax changes for most of the matchers. In RSpec 2.x, methods like let or subject could be used in after(:all), but no longer in RSpec 3.x. Developers would have been forced to change more than 70% of their test codebase to upgrade to 3.x.
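To give a sense of the scale of the change, the most visible difference was the move from the monkey-patched 'should' syntax to the explicit 'expect' syntax. An illustrative example (not from the post; the `user` object is made up):

```ruby
# RSpec 2.x: 'should' is monkey-patched onto every object
user.name.should == "Arthur"
user.should_receive(:save)

# RSpec 3.x: explicit 'expect' wrapper, no monkey-patching
expect(user.name).to eq("Arthur")
expect(user).to receive(:save)
```

Multiply this by every expectation in a large suite and the 70% figure becomes easy to believe.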

The RSpec team made this mammoth task substantially easier by following these principles:

1. Make it easy to find & fix required code changes

A deprecation warning is certainly not a new concept, but few libraries use it well. The crux of a smooth upgrade is planning.

The RSpec team started pushing out deprecation warnings 8 months before the first 3.x version was out. 8 months might seem insufficient to someone from an Enterprise Java background, but that's respectable in the Ruby ecosystem.

Every warning came with a clear solution that could be used verbatim to fix that particular deprecation. There was a rich feature set around managing deprecation warnings. By default, test runs wouldn't show you duplicate warnings, so the warnings didn't clutter your console while working; they just told you which syntax would be discontinued in future releases. There was also a flag for exporting all deprecation warnings to an external file, so if you wanted, you could fix one warning after another by following the exported file.
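The export-to-file behaviour can also be switched on via configuration rather than the command line flag. A minimal sketch, assuming a standard spec_helper setup (the file name is my choice):

```ruby
# spec/spec_helper.rb
RSpec.configure do |config|
  # Send all deprecation warnings to a file instead of cluttering the console
  config.deprecation_stream = File.open('rspec.deprecations', 'w')
end
```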

The focus was on helping developers identify & fix deprecations without their day-to-day work being hampered by the sheer number of deprecation warnings.

2. Provide tools to automate grunt work

Around the same time, Transpec was introduced: a dedicated gem to help developers upgrade from the 2.x syntax to 3.x.

It wasn't a regex-matching tool that would mindlessly change every should to expect. It used static and dynamic code analysis to provide an accurate conversion; the tool understood the difference between a mock variable declared in a test and a mock method. In the absence of this tool, developers would have spent days changing their specs line by line.

The tool also respected developers' coding styles while doing these conversions, making sure that indentation, line breaks and other punctuation were not changed. Developers loved it for this.

3. Make it easy to maintain the code changes

Just because you have fixed all the deprecation warnings in your codebase doesn't mean you are ready to upgrade. Maybe the major version is still not released, or you want to wait a few months to see performance metrics from the real world.

It's easy to slip back into old syntax and practices in this period if you are not careful. RSpec helped teams by providing a configuration that treats deprecation warnings as errors, making sure you don't backslide on the syntax set for RSpec 3.x.
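That guard is a one-line configuration, again assuming a standard spec_helper setup:

```ruby
# spec/spec_helper.rb
RSpec.configure do |config|
  # Fail the run if anyone reintroduces deprecated 2.x syntax
  config.raise_errors_for_deprecations!
end
```

With this in place, a spec that sneaks a `should` back in fails immediately instead of emitting a warning you might never read.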

To summarize, people object to upgrades when the cost of an upgrade surpasses its benefits. A well-thought-through plan and helpful tooling can substantially reduce this cost. With more and more authors embracing this style of change management, hopefully we will see more people embracing the change itself.

Aakash is a Partner at C42 Engineering.