
Swearing and Consulting

About a year after I founded Envy Labs I put together a five-minute talk called “Swearing and Consulting” where I went over 17 principles I learned (the hard way) while building a consulting business.

Here’s the New Leaders Service agreement, the URL in the video is outdated.  You should also check out Obie’s services agreement which is available for a fee.

Also, our offices are no longer at CoLab.  We’ve moved 3 times since I created this video.  LOL.

Also, since we created Code School there are some people at our company who do have salaries, since they don’t work on client work (and they aren’t designers or developers).

This video was previously posted on our old Envy Labs blog, but it was lost when we moved over to Tumblr. This weekend I dug up this video, and thought it was worthy of a repost.  

Hope you found it useful.  If you did, please do let me know.


Rake: File Tasks

This is the second post in a series on Rake; see the previous post for an introduction to the Rakefile format and to global tasks.

In this post we’re going to look at another capability of Rake: file tasks. We’ll cover how to create them and how they work, and then build a useful example. But, before we get into file tasks, we need a better understanding of another aspect of the Rakefile format: prerequisites.

Task Prerequisites

Any Rake task can optionally have one or more prerequisite tasks — also referred to as dependencies. As with any other Rake task, a prerequisite task is only executed if it is needed, and if it is executed it is only ever executed once.

Let’s start by declaring a couple tasks called one and two in our Rakefile:

task 'one' do
  puts 'one'
end

task 'two' do
  puts 'two'
end

We can run the tasks in a shell, as we’ve seen before:

$ rake one
one
$ rake two
two

Now let’s declare one as a prerequisite for two:

task 'one' do
  puts 'one'
end

task 'two' => ['one'] do
  puts 'two'
end

As you can see, we haven’t changed how one was defined at all. But, for two, we added => ['one'] after the task name: this is how you declare a task’s prerequisites. At first this format may seem foreign, but keep in mind this is just Ruby code; let’s see what our Rakefile would look like if we added in all of Ruby’s optional syntax:

task('one') do
  puts 'one'
end

task({'two' => ['one']}) do
  puts 'two'
end

As you can see, in both cases we’re just passing one argument and a block to the task method. For one, we pass in just the task name, 'one'; for two, we’re passing in a hash, {'two' => ['one']}: the hash key is the task name 'two' and the hash value is an array of prerequisites, ['one'].

Aside: If your task only has one prerequisite, the hash value doesn’t need to be an array:

task 'two' => 'one' do
  puts 'two'
end

Let’s see what happens when we run each of those tasks now:

$ rake one
one
$ rake two
one
two

As you can see above, when we ran two, we get the output from both the one and two tasks. Now that we have a foundation for prerequisite tasks to build on, let’s look at file tasks.
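Since prerequisites are ordinary tasks, the run-once guarantee also holds when a task appears as a prerequisite of several other tasks. Here’s a standalone sketch demonstrating that (the task names are illustrative; the require and include lines are only needed outside a Rakefile):

```ruby
require 'rake'     # not needed inside a Rakefile
include Rake::DSL  # not needed inside a Rakefile

$order = []

task 'clean' do
  $order << 'clean'
end

task 'build' => 'clean' do
  $order << 'build'
end

# 'clean' is listed twice (directly, and via 'build'), but runs only once
task 'test' => ['clean', 'build'] do
  $order << 'test'
end

Rake::Task['test'].invoke
$order  # => ['clean', 'build', 'test']
```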

File Tasks

Thus far, all of the tasks we’ve created have used Rake’s task method to declare the task, but for file tasks Rake has a special method: file. File tasks in Rake are very similar to normal tasks: they have a name, they can have zero or more actions, they can have prerequisites, and if Rake determines the task needs to be run it will only be run once. Now, the twist is that those things get modified to be file related: the name of the task is the same as the file’s name, and Rake determines that a file task needs to be run if the file doesn’t exist or if any of its prerequisite files are newer than it.

That’s a bit to wrap your head around, so let’s look at some examples:

file 'foo.txt' do
  touch 'foo.txt'
end

Even though file tasks are meant for dealing with files, you are still responsible for creating the file in the task’s action if the file doesn’t exist.

Aside: Rake includes a modified version of the FileUtils module so that you have access to its methods in your task actions, which is where that touch method above is from.

Aside: FileUtils.touch works like the Unix touch program: updates a file’s timestamps and creates nonexistent files.
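A throwaway sketch of that behavior outside of Rake (the directory and file name are illustrative):

```ruby
require 'fileutils'
require 'tmpdir'

dir  = Dir.mktmpdir                   # throwaway directory
path = File.join(dir, 'example.txt')  # illustrative file name

FileUtils.touch(path)  # file does not exist yet, so it is created
created = File.exist?(path)

FileUtils.touch(path)  # file already exists, so only its timestamps refresh
```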

And, now when we run that task:

$ rake foo.txt
touch foo.txt   # output from our file task when it's run
$ ls            # showing file was created
foo.txt

Earlier, I mentioned that Rake will not run a file task if the file exists; so let’s see what happens when I delete the file and then run the task twice:

$ rm foo.txt    # deleting file
$ rake foo.txt  # running file task
touch foo.txt   # output from our task
$ rake foo.txt  # running file task again, but no output
$ ls            # showing file was created
foo.txt

As we can see, the first time we ran the foo.txt task we see the touch foo.txt output from the file being created, but the second time we ran the task we get no such output. But, things behave a bit differently if we add a prerequisite to our file task:

file 'foo.txt' => 'bar.txt' do
  touch 'foo.txt'
end

Aside: If the prerequisite for a file task is another file, you do not need to create an explicit file task for the prerequisite, just using the name of the file is enough.

$ ls              # showing foo.txt does not exist
bar.txt
$ rake foo.txt    # running file task
touch foo.txt     # output from file task
$ rake foo.txt    # running file task again, but no output
$ ls              # showing file was created
bar.txt foo.txt

So far, things don’t seem much different: our file task creates the file if it doesn’t exist, and if we run the task again nothing happens.

$ touch bar.txt   # update timestamp of prerequisite file
$ rake foo.txt    # running file task again
touch foo.txt     # output! the file was updated!

Because the timestamp for the bar.txt file was newer than that of the foo.txt file, Rake executes the actions for the foo.txt task.
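Conceptually, the decision Rake makes can be sketched like this (a simplification, not Rake’s actual implementation):

```ruby
# A file task is needed if its file is missing, or if any prerequisite
# file has a newer modification time than the target file.
def file_task_needed?(target, prerequisites)
  return true unless File.exist?(target)
  prerequisites.any? { |prereq| File.mtime(prereq) > File.mtime(target) }
end
```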

Useful Example

With this series, I’m trying to show you a feature of Rake, then show a useful example of using that feature, hoping that it’ll spark an idea for how you can use Rake in your normal development process; this post is no exception.

In our Rails applications, we typically have a number of configuration files that are critical for the application to run correctly. But, because these files contain either sensitive information or settings specific to where it’s being run, we do not put these files in source control; instead we usually add an “example” file with dummy data, so those who begin working on our application later know what needs to be set. Well, we can use Rake to simplify the creation of our configuration files from these “example” files.

So, let’s say we want to create the config/database.yml file from the config/database.yml.example file:

file 'config/database.yml' => 'config/database.yml.example' do
  cp 'config/database.yml.example', 'config/database.yml'
end

Aside: Here we are using FileUtils's cp (short for: copy) method.

$ ls config/                          # showing database.yml does not exist
database.yml.example
$ rake config/database.yml            # running file task
cp database.yml.example database.yml  # output from file task
$ ls config/                          # showing file was created
database.yml database.yml.example

Okay, so our task is working properly, and copying the example file as expected. I’m not a huge fan of our task definition as it stands: there’s too much repetition, and we can clean that up. When creating a task, the block that you pass to the task method can also take an argument, which will be the task itself. We can use this task argument to DRY up our task definition:

file 'config/database.yml' => 'config/database.yml.example' do |task|
  cp task.prerequisites.first, task.name
end

That looks much better, and it still behaves the same. With this task in place, anyone who joins the project can run the task and will then have a config/database.yml to use. If they happen to run it again, nothing will happen until someone updates the config/database.yml.example file; at that point, running the task again will pull in the latest changes.

That means you can think of these “example” files as templates for the actual files we need. Granted, it would be nice if the task didn’t just overwrite our config/database.yml with the contents of the example file, and instead merged the two files together; in future posts in this series we’ll be looking at expanding this task to do just that!
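In the meantime, the same pattern generalizes to any number of “example” files. A sketch (the glob pattern and naming convention are illustrative; the require and include lines are only needed outside a Rakefile):

```ruby
require 'rake'     # not needed inside a Rakefile
include Rake::DSL  # not needed inside a Rakefile

# Declare one file task per "example" file found under config/
Dir.glob('config/*.example').each do |example|
  target = example.sub(/\.example\z/, '')
  file target => example do |task|
    cp task.prerequisites.first, task.name
  end
end
```

With this in place, running rake with any of the stripped file names copies the matching example file only when the target is missing or out of date.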

I’d love to hear your feedback, especially if you find this helpful or if there’s something you’d like me to cover specifically.

- Jacob Swanner

(Source: jacobswanner.com)


Zen Programming: Lessons From Yoga

At Envy Labs, we believe in creating a healthy and balanced company culture. One way we contribute to a healthy culture is by bringing in an instructor to lead a group yoga practice every Wednesday. Aside from providing a means of midweek stress relief, yoga also makes us better developers, designers, and leaders.


Some of our best ideas come to us when we are free from distractions. The essence of yoga is meditation — finding calm in a sea of chaos. Through the simple practice of intentional breathing, we can train ourselves to filter out the noise in our minds or the shakiness in our muscles.

Code at Sunset


It’s been said that to be a good programmer, one has to be comfortable being uncomfortable. Problem solving can mean failing hundreds of times before finding a solution. Similarly, practicing yoga can mean falling hundreds of times before finding balance. Devotion to practice teaches us that the moment we want to quit is the moment change happens; after enough practice, we learn not to give up, push through that uncomfortable moment, and end up making a breakthrough. 

Creative Process Tweet


Part of developing perseverance is embracing intrinsic motivators, like our desire to learn, produce useful tools, or feel a sense of pride in our work. Yoga reinforces this mindset — to be motivated by improvement, subtle changes, or a sense of connectedness. Since others won’t be able to see the hours of work that have gone into building a feature or advancing a pose, we can’t rely on external approval to motivate us. Yoga teaches us to appreciate and motivate ourselves.

Reducing Judgement

The nature of our work being on the internet means that we see an abundance of successes from our peers and role models. Sometimes, it can be hard to feel confident in this perceived environment of competition. Yoga teaches us to reject competition, meet ourselves where we are, and remember that there is always someone more advanced and more beginner. Just like yoga, programming is a sea of variations and nearly infinite combinations of techniques. If we focus on the journey, this can be pure joy.

Envy Yoga


When we sense competition, it’s easy to become arrogant. Just as yoga teaches us not to compete, yoga teaches us humility. Our neighbors are part of our practice and our community, and arrogance can cause a domino effect in our own work and in the work of those around us. Instead of creating a domino effect of negativity, yoga teaches us to use our influence to help others.

What inspires you to do your best work?

Share your story with us.

-Aimee Simone


Now Hiring: Front-End Developer

We’re on the lookout for an additional front-end developer to join the Envy Labs team. The ideal candidate will have a command of style and markup, while possessing enough design experience to handle adjustments and additions after handoff. That’s a fancy way of saying we want someone who can create interactive experiences.

You’ll be working on a variety of client projects, and helping us build and refine Code School.

What We’d Like to See:

  • A problem solver — someone who loves discovering and applying solutions through design.
  • Able to communicate well with the team, and with clients.
  • Modern CSS + HTML, and their use in large applications.
  • Presentational JavaScript + jQuery familiarity.

Nice to Haves:

  • Comfort working with preprocessors. We use Sass, Haml, and CoffeeScript extensively.
  • An understanding of Git and GitHub flow in a team environment.
  • Experience working in modern frameworks, such as AngularJS, Rails, and Ember.

More About the Position:

Ideally, we’re looking for someone to work in our Orlando office. Compensation is very competitive, and we offer a full benefits package (health + dental + vision included). Learn more about our culture.

How to Apply:

  • We’d love to see some samples of your work — link to your GitHub profile, Dribbble profile, or portfolio. Better yet, pick a project which showcases your best work and tell us about your process and workflow.
  • Link to a blog post or article you’ve written in the past few months on a specific design topic, code technique, or something you’re working through.
  • What’s your favorite HTML tag?

All set? Email the aforementioned materials to: nick@envylabs.com


Mixpanel Analytics with Ruby


Today, we’re going to look at Mixpanel as an example of meaningful analytics. Meaningful analytics help us to solve user acquisition and retention problems. Three percent of visitors signing up for our application is an acquisition problem. However, having only seven percent of signed up users return to use it is a retention problem. Analytics is like a debugger for your marketing and user satisfaction efforts. Events that take place in your software are like the critical breakpoints we set and the properties of these events are like watch variables.

How can Analytics Help Us?

Meaningful analytics take a look at real user events in our software systems to help us diagnose and solve problems, such as:

  • Where do my users stop in a series of steps?
  • How long does it take a user to complete a workflow?
  • Does my copywriting communicate effectively?
  • Do my users keep coming back to use core features?
  • Was a critical action completed?
  • Who uses my application?

On the free service tier, Mixpanel lets you track 25,000 events for free or 175,000 by installing a badge on your application. After you’ve reached the total number of events, old events will fall out of scope like a FIFO queue. Mixpanel has other paid plans to let you keep more data around every month. You can send events to Mixpanel from the client side in JavaScript or from the server side using Ruby. If you choose to use the client side library, events you send will automatically include a number of properties from the HTTP requests themselves such as your browser version, operating system, referrer, and location (if available). Additionally, you can set key value pairs that will be sent to Mixpanel with each request and are referred to as “super properties.” These are stored in a cookie on the client side:

// Super properties are set in a cookie
mixpanel.register({
  'user type': 'free trial',
  'source': 'email campaign',
  'preferred format': 'video'
}, 30);

// They are then sent with every request
mixpanel.track('Code Review');

Funnel Vision

One of the easiest concepts in Mixpanel is that of the funnel. A funnel is a series of steps that you desire your users to take in order to achieve a goal. Let’s imagine a website which allows you to sign up to take an online course. First, an email campaign is sent out to potential users that may attract them to a landing page for the product advertised. At the bottom of that page is a link to the pricing page. If the user finds the pricing agreeable, then they will click the link to the sign up page. Finally, if the sign up form was easily completed, then the user would complete the funnel by signing up and starting the course. Completing this series of steps would also be known as a conversion because a measurable predetermined goal was completed.


Here is a script you can use to generate some sample data in order to experiment with Mixpanel and see how its funnel feature works:

require 'faker'
require 'mixpanel-ruby'
require 'securerandom'

PROJECT_TOKEN = '539d98201fc65b215d25339537e4d945'
tracker = Mixpanel::Tracker.new(PROJECT_TOKEN)

def user_bounced
  @bounced = true
end

def user_continued
  @bounced == false && rand(2) == 1
end

users = 10.times.map { SecureRandom.hex }
users.each do |user|
  @bounced = false
  tracker.track(user, 'Landing Page', { campaign: 'Mailchimp Code Reviews' })
  user_continued ? tracker.track(user, 'Product Page') : user_bounced
  user_continued ? tracker.track(user, 'Pricing Page') : user_bounced
  user_continued ? tracker.track(user, 'Signup Page') : user_bounced
  user_continued ? tracker.track(user, 'User Signed Up') : user_bounced
  if user_continued
    account = SecureRandom.hex
    tracker.alias(account, user)
    tracker.people.set(account, {
      '$first_name' => Faker::Name.first_name,
      '$last_name' => Faker::Name.last_name,
      '$email' => Faker::Internet.email,
      '$phone' => Faker::PhoneNumber.cell_phone
    })
  end
end

What Should I Track?

Now that you know how to track events, the next question is: what should you track? Once we see the benefit of analytics, our first instinct may be to track everything, but that won’t do us much good.

  mixpanel.track('Wolf! Wolf!  No seriously help wolf yeoooowwww!!!');

When you track too many events without any context to real goals, the result is that events cannot be easily correlated and people will be overwhelmed with information that they cannot necessarily take action on. This will eventually cause your team to stop paying attention to the analytics altogether, thus diminishing its overall value. Mixpanel is really great at letting you break down data by properties:

  mixpanel.track('Predator Attack', { 'Animal': 'Lion' });
  mixpanel.track('Predator Attack', { 'Animal': 'Tiger' });
  mixpanel.track('Predator Attack', { 'Animal': 'Bear' });
  mixpanel.track('Predator Attack', { 'Animal': 'Wolf' });

A general rule of thumb is to track few events with many properties on each event:

# Not real code
class Application
  has_few :events
end

class Event
  has_many :properties
end

As developers, we like to create elegant solutions that are future-proof. Therefore, our next thought may be to create a generic solution that tracks all controller events. At first glance this seems like a logical thing to do. In Rails we have RESTful actions that correlate to real-world actions and are immediately recognizable to other developers if we follow the RESTful controller conventions. Another similar implementation idea is to generically attach tracking to ActiveRecord object lifecycle hooks. Both of these ideas are flawed for several reasons.

First, business logic is not always tied directly to controller actions or ActiveRecord object lifecycle hooks. If you practice keeping your code DRY, it is entirely possible that a particular event which you would like to track is encapsulated by a service object or simple Ruby class and may be invoked in multiple controllers throughout your application. In this case you would end up with duplicate tracking code if you were to tie it directly to all controller actions.

Naming Events

The other problem with this approach has to do with the challenge of naming. Depending on the size of your organization, analytics may require the buy-in and understanding of consumers and producers. In a larger organization, developers act as the producers: they define in code the events and properties which will be tracked by the analytics platform. Members of the marketing and sales team may act as the consumers, reading and interpreting the events as they are collected. While the names of controller actions may be perfectly understandable to your developer team, they are likely meaningless to anyone who is not a Rails developer.

tracker.track 'CodeReviews#index'
tracker.track 'CodeReviews#new'
tracker.track 'CodeReviews#create'
tracker.track 'CodeReviewMailer#notify'

Since successful products and organizations tend to grow, we can’t necessarily predict who the consumers of our tracking data will be in the future. That’s why it is important to use human readable event names that can be understood without the context of source code insight.

Tying events directly to controller names also makes the data meaningless as soon as we refactor our application, rename controllers, or switch to a different framework or technology in the years to come.

# No 
mixpanel.track('797128') # Magic Numbers 
mixpanel.track('sessions#create') # Controller Actions 
mixpanel.track('SU') # Insider Abbreviations 

# Yes 
mixpanel.track('User Signed Up') # No need to read code 
mixpanel.track('Begin Code Review') # Plain English names 
mixpanel.track('Social Media Referral') # High Level Tasks 

At this point you may be wondering what kind of events you should be tracking. You need to track your goals. Analytics is all about setting goals, testing and then making adjustments to achieve those goals. If you’re not sure where to start, take a look at Dave McClure’s AARRR! Framework. AARRR stands for Acquisition, Activation, Retention, Referral, and Revenue. You can start by creating one goal for each of those key points. In our example code review service it might be broken out like this:

  • Acquisition: “User Sign Up”
  • Activation: “User Submits a GitHub Repo”
  • Retention: “User Submits a GitHub Repo again within one month”
  • Referral: “User Clicks Social Share”
  • Revenue: “User’s Credit Card Charged”

To read more about AARRR, check out the original post or this Mixpanel-specific post about it.

require 'mixpanel-ruby' 
PROJECT_TOKEN = '539d98201fc65b215d25339537e4d945' 
tracker = Mixpanel::Tracker.new(PROJECT_TOKEN) 

Tracking API

Tracking an event is pretty easy. You specify the Mixpanel user ID, the event name, and a hash of whatever extra properties you’d like to add to the event. The user ID is a unique string you must construct to tie events to a specific user. You should not use the user’s database ID from your Rails project as the Mixpanel user ID, as that is subject to change. Instead, you should generate a unique value based on SecureRandom or a secure hex digest of some other unique data.
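For example (a sketch; the digest input is illustrative):

```ruby
require 'securerandom'
require 'digest'

# Two ways to build a stable Mixpanel user ID without leaning on a
# database row ID:
random_id = SecureRandom.hex(16)  # generate once, store next to the user
digest_id = Digest::SHA256.hexdigest('user-jane@example.com-2013-08-01')
```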

Mixpanel has the concept of anonymous versus registered users and its API also supports linking the two together when the anonymous user eventually signs up for your service and converts to a registered user. In the JavaScript client many properties of the HTTP request are automatically sent with each event, but in the Ruby version if you want to include these properties you will need to specify them manually.

tracker.track(user_id, event_name, properties_hash)

In order to identify a newly signed up user, use the “alias” method (not to be confused with Ruby’s own alias_method):

# Identify a newly signed up user
tracker.alias(user_id, original_anonymous_id)

You can also track information about a user’s identity. This is useful for later contacting segments of your user base within the Mixpanel web user interface by email, phone, SMS, or push notifications.

# Track information about a User's Identity 
tracker.people.set(id, properties_hash) 

Mixpanel offers a specific API call in order to track revenue for a given user.

# Track Revenue 
tracker.people.track_charge(user_id, amount, properties_hash)

Performance Concerns

By default, events are sent synchronously. This does not scale well: every request to your web application will block while the request to Mixpanel’s API completes. One solution is to send all of your events from the client side using JavaScript. If you are going to use the Ruby client library, then you will need to send your events to Mixpanel asynchronously by implementing some kind of background queue such as Kestrel or Sidekiq, or even a simple text file to store and read them from.
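A minimal sketch of the queue idea using a background thread (a stand-in for a real job system like Sidekiq; names and events are illustrative):

```ruby
sent   = []         # stands in for actual calls to the Mixpanel tracker
events = Queue.new  # thread-safe queue shipped with Ruby

worker = Thread.new do
  while (event = events.pop) != :shutdown
    sent << event   # here you would call tracker.track(*event) instead
  end
end

# The web request only pays for a queue push, not an HTTP round trip.
events << ['user-1', 'User Signed Up']
events << ['user-2', 'Course Started']
events << :shutdown
worker.join
```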

When using the JavaScript library there is a function called track_links() that will send link click events asynchronously and prevent problems with the page being reloaded before the Mixpanel event is sent.

mixpanel.track_links('#subscribe a', 'User Subscribed', {
  'subscription_plan': 'Code Crafter'
});

Example Implementation

It’s nice to wrap any analytics tracker up in an abstraction so that you can swap it out if you later choose to implement a different analytics solution:

class Tracker
  def initialize(user_id)
    @user_id = user_id
  end

  def track(event, params = {})
    tracker.track(@user_id, event, params)
  end

  def tracker
    @tracker ||= Mixpanel::Tracker.new(ENV['PROJECT_TOKEN'])
  end
end

You can also create a module to mix in to any controllers or service objects to give some convenience methods for tracking:

module Analyzable
  def tracker
    @tracker ||= Tracker.new(user_id)
  end

  def user_id
    # supplied by the including class, e.g. from the current user's record
  end
end

If you want to support multiple analytics platforms you can use dependency injection to configure the tracking solution on the fly, like this:

class CodeReviewController < ApplicationController
  include Analyzable

  def create
    report = code_review_service.generate_report(params[:repository])
    render :show, locals: { report: report }
  end

  def code_review_service
    CodeReviewService.new do |config|
      # Instance of Mixpanel::Tracker
      # from the Analyzable module
      config.tracker = tracker
    end
  end
end

Finally, you could have your service objects run the domain logic and perform the event tracking:

class CodeReviewService
  attr_writer :tracker

  def initialize(repository, &block)
    self.repository = repository
    yield self if block_given?
  end

  def generate_report
    # ... run the domain logic, then ...
    track_report_creation
  end

  private

  attr_accessor :repository
  attr_reader :tracker

  def track_report_creation
    tracker.track(CREATE_REPORT, repository: repository) if tracker
  end
end

Mixpanel slices and dices on properties, so be sure to give it a generous portion of properties with a limited amount of events tied to your application logic.


You should also define the event name strings themselves in a common place as constants or an enum, because they are case-sensitive. Accidentally tracking the same event as separate events due to a capitalization mistake can completely skew your results.
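One way to do that (a minimal sketch; the module and event names are illustrative):

```ruby
# Event names collected in one place, so a capitalization typo becomes a
# NameError at development time instead of silently skewed data.
module Events
  USER_SIGNED_UP = 'User Signed Up'
  COURSE_STARTED = 'Course Started'
end

# tracker.track(user_id, Events::COURSE_STARTED)
```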

# Three 
tracker.track('Course Started') 

# Separate 
tracker.track('Course started') 

# Events 
tracker.track('course started') 

# (Be careful!) 

Client versus Server

When deciding whether to use client-side or server-side tracking there are a number of advantages to each. Default to client-side tracking because it gives you a whole grab bag of free:

  • Free async sending
  • Free HTTP request properties
  • More free time

$(document).ready(function() {
  // Many HTTP request properties
  // are already sent by default
  mixpanel.track('Some Event', {
    some_property: 'value',
    another_property: 'value'
  });
});


Server-side tracking is beneficial when you want to track actions taking place with shared services or using an API that is presented using multiple front ends. Be sure to include as many HTTP request properties in your events up front as you are able to.

Mixpanel allows you to define funnel, segmentation and retention goals at any point in the future. If you don’t track the data now, then you won’t be able to segment on it in the future.

Extra Help

I highly recommend you watch all of the demo videos on mixpanel.com; they are fairly short and informative. They cover the following really useful features:

  • Funnels
  • Segmentation
  • Retention
  • Live View
  • Notifications
  • Revenue
  • Bookmarks
  • People

When implementing analytics, it is important to coordinate with everyone who will be producing and consuming the metrics in order to define goals, track data consistently, and produce understandable and actionable results.

For more information about analytics in general, check out the Lean Analytics Book. If you intend to use a client-side implementation, be sure to check out Segment.io’s analytics.js and its documentation, which abstracts away from specific analytics implementations in case you want to change your provider or support multiple providers in the future.

I’m curious to know what problems you’re trying to solve with analytics. What lessons have you learned along the way?

-Matthew Closson


Automating Development with Grunt

Having recently spent much of my time developing JavaScript applications, one of the things I miss most from Rails is the amount of automation you get right out of the box for improving your workflow. Rake tasks for running tests and managing assets, the Asset Pipeline, and Rails generators are only a few examples. For anything else, there was probably a gem. If not, it was fairly trivial to create my own Rake task.

My immediate goal became achieving this same level of automation in my JavaScript projects, and I found a tool to make that possible.

Enter Grunt


Grunt is a command line tool built on Node.js that automates many of the repetitive tasks we perform when developing applications. Grunt’s massive ecosystem includes plugins for running tests, compiling CoffeeScript, minifying HTML and CSS, and more. If we need to perform a task that doesn’t have a plugin, we have the ability to create our own. If you come from the Ruby world, think of Grunt as JavaScript’s Rake.

Installing Grunt

Assuming both Node.js and npm are installed, we can use npm to install Grunt’s command line interface:

npm install -g grunt-cli

This adds the grunt command to our system path.

If our project doesn’t already have one, we’ll need to generate a package.json file in the root directory:

npm init

This file is used by npm to manage our project’s dependencies.

Next, we can install Grunt in our project:

npm install grunt --save-dev

The --save-dev flag adds the npm package to our package.json's devDependencies. Grunt plugins we might want to use later can be installed the same way.

The Gruntfile

Any project that uses Grunt needs a Gruntfile. This is a file called Gruntfile.js (or Gruntfile.coffee) that resides in the project’s root directory and contains the following wrapper function:

module.exports = function(grunt) {
  // grunt code goes here
};

Inside this function, we configure Grunt, as well as load plugins and define custom tasks.

Using and Configuring a Plugin

One useful plugin for CoffeeScript projects is grunt-contrib-coffee, which gives us the coffee task for compiling CoffeeScript down to JavaScript.

To start using it, install the npm package:

npm install grunt-contrib-coffee --save-dev

Then, load the plugin’s tasks into the Gruntfile:

module.exports = function(grunt) {
  grunt.loadNpmTasks('grunt-contrib-coffee');
};

Any configuration for our Grunt tasks is passed into the grunt.initConfig method:

module.exports = function(grunt) {
  grunt.initConfig({
    // configuration
  });
};

Let’s add some configuration telling grunt-contrib-coffee where to find our CoffeeScript file and output the compiled JavaScript:

module.exports = function(grunt) {
  grunt.initConfig({
    coffee: {
      compile: {
        files: {
          'app.js': 'app.coffee'
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-coffee');
};

This tells the task to compile app.coffee to app.js.

You can read more about grunt-contrib-coffee’s configuration options on GitHub and about general Grunt configuration in the Grunt docs.

With the plugin installed and configured, we can run the coffee task to compile our CoffeeScript:

grunt coffee

More Automation

While the coffee task is certainly useful, running grunt coffee every time is still very much a manual process. Fortunately, there’s grunt-contrib-watch to help with that. This plugin gives us the watch task, which watches a file for changes, then runs a task we specify when a change is made.

To install the plugin, run:

npm install grunt-contrib-watch --save-dev

And load it in the Gruntfile:


In the configuration, we’ll tell it to watch app.coffee and run the coffee task when a change occurs:

grunt.initConfig({
  watch: {
    scripts: {
      files: ['app.coffee'],
      tasks: ['coffee']
    }
  },

  coffee: { ... }
});

Start watching for changes by running:

grunt watch

Now, if you make a change to app.coffee, it will run the coffee task and compile it automatically!

Creating a Custom Task

Sometimes, we’ll find ourselves needing to automate a task that doesn’t have a plugin. Grunt gives us the ability to define custom tasks using the grunt.registerTask method. Just pass in a name, description, and function:

module.exports = function(grunt) {
  grunt.registerTask('sayHello', 'A task that says hello', function() {
    grunt.log.writeln('Hello!');
  });
};

And run it with:

grunt sayHello

Wrap up

We’ve only scratched the surface of what is possible with Grunt, but just knowing a little is enough to automate a lot. Head over to the Grunt docs to learn more about Grunt’s more advanced features. As always, you can find this post’s code over on GitHub.

- Matt Schultz


Keeping your YAML clean

It’s not uncommon to find Ruby applications that contain YAML files with unnecessary duplication. Long configuration files make it difficult for developers to effectively maintain application-wide settings.

The YAML specification offers some neat features that we can use to reduce duplication and keep files organized.


In this blog post, I’m going to explain how to use anchors and aliases within YAML files.

Anchors and Aliases

The YAML format offers a simple form of inheritance, which allows new properties to reference values from previously defined properties. This is referenced in the YAML spec as anchors and aliases.

Taken from the spec:

Repeated nodes (objects) are first identified by an anchor (marked with the ampersand - “&”), and are then aliased (referenced with an asterisk - “*”) thereafter.

To understand how this works, let’s review a common Rails app example:

# config/database.yml
development:
  username: root
  password: secret
  log_level: debugger
  database: my_app_development

test:
  username: root
  password: secret
  log_level: debugger
  database: my_app_test

staging:
  username: root
  password: secret
  log_level: debugger
  database: my_app_staging

This example uses the same values for username, password, and log_level across all three environments. Let’s rewrite it using an anchor and an alias.

First, we define an anchor called base for the values defined in the development group:

# config/database.yml
development: &base
  username: root
  password: secret
  log_level: debugger
  database: my_app_development

Then, we create an alias for that anchor from the other groups:

# config/database.yml
test:
  <<: *base

staging:
  <<: *base

This brings all information defined in the base anchor and includes it in test and staging.

Finally, we add the settings that are specific to the test and staging groups. In this case, only the value for database differs:

# config/database.yml
test:
  <<: *base
  database: my_app_test

staging:
  <<: *base
  database: my_app_staging

The final version now looks like this:

# config/database.yml
development: &base
  username: root
  password: secret
  log_level: debugger
  database: my_app_development

test:
  <<: *base
  database: my_app_test

staging:
  <<: *base
  database: my_app_staging

This is a much cleaner version that is way easier to understand.
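We can sanity-check that the merge behaves as expected by loading it from Ruby. Here’s a quick sketch using the standard library’s YAML module (note: newer versions of Psych require explicitly enabling aliases when using safe_load):

```ruby
require 'yaml'

# Inline copy of the config above; in a real app you'd read config/database.yml.
config = YAML.safe_load(<<~YML, aliases: true)
  development: &base
    username: root
    password: secret
    log_level: debugger
    database: my_app_development

  test:
    <<: *base
    database: my_app_test

  staging:
    <<: *base
    database: my_app_staging
YML

config['test']['username']    # => "root" (inherited from the anchor)
config['staging']['database'] # => "my_app_staging" (overridden)
```

Every key defined under the anchor comes through the merge, and keys declared after the merge line win.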

Care for your YAML

Using anchors and aliases is very simple. Although the syntax might feel a little weird at first, it’s easy to get used to and it’s a great way to keep your configuration files under control.

Do you have any other tips that may help keep YAML files clean and maintainable? Please leave a comment and let us know.

- Carlos Souza (@caike)

(photo source: http://www.flickr.com/photos/nels/4346988512)


Moving toward service-oriented design

Building a new application in Rails can be fun, quick and powerful. However, it doesn’t always stay that way. To quote one of my favorite musicians:

Being Joan Crawford at 21 was easy, being Joan Crawford at the end, well that was hard.
- John Vanderslice, "Letter to the East Coast"

The problems start creeping up. If you’re not paying close attention, you probably won’t notice that your small web app is becoming a monolithic app.

Top 5 signs you have a monolithic app:

  1. One of your models has a god complex (ten bucks says it’s your user model).
  2. You have a rails-upgrade branch that’s over a month old.
  3. A change to one part of your application is likely to break other parts.
  4. It’s very hard to bring on new developers to your project.
  5. You plan your day around running your test suite.

If you relate to any of these points then congratulations, you probably have a monolithic app. Thankfully, there are many patterns that you can implement to get your app back on track. In this post I’ll describe service-oriented design as a way to mitigate these problems.

Counting the cost

If you have a monolithic app, the first thing you should do is evaluate what this app is costing your business. If you don’t have a business reason to move toward service-oriented design, then don’t. On the other hand, if the pain is shown in real dollars and it’s affecting your attitude about your project then it’s probably time for a change.

Identifying the problem

What is it about a monolithic app that is so painful to work with?

monolithic: (of an organization or system) large, powerful, and intractably indivisible and uniform.

The problem for a monolithic software project is the “intractably indivisible” part. Our application wants to be one thing, and early on it may reward us for being one thing. When our app grows larger, then changing it into something else becomes slow and painful. With service-oriented design we identify parts of our large application that could be their own independent applications, allowing us to easily make updates to each of our systems.

Start small

When thinking about services that could be extracted from an application I start by thinking about URLs. Anything that seems like it would make a good subdomain for your application is probably a good candidate; admin.envylabs.com, account.envylabs.com for example. Obviously you could go crazy and for every controller action you could create a new application like new_user.envylabs.com, but keep in mind that there is a cost to extracting things. If your extracted systems are highly coupled to the rest of the application then the extraction might be more trouble than it’s worth. That’s why finding a piece of your application with a natural separation is the best pick.

Next, you’ve got to decide how your systems are going to talk with each other. The most straightforward approach with an existing application is to have both apps connect to the same database. You can then extract the views and controllers into the new app and move all the unshared model logic and tests over as well.

Where it gets sticky

Things get complicated when logic needs to be shared between applications. Obviously we don’t want to manage the same code in two places, so just copying it is a bad idea. Either you can extract the logic into a shared gem, or preferably you can draw a line in the sand and decide which app ought to be in charge of knowing that logic and build an API for your other app to access it with.
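As a sketch of the shared-gem route, the duplicated logic might live in one small library that both apps depend on. The module name and pricing rule here are made up for illustration:

```ruby
# lib/pricing.rb -- hypothetical shared logic, packaged as a gem so both
# the main app and the extracted service depend on a single implementation.
module Pricing
  TAX_RATE = 0.07

  # Returns the total in cents, with tax applied and rounded.
  def self.total(subtotal_cents)
    (subtotal_cents * (1 + TAX_RATE)).round
  end
end

Pricing.total(1000) # => 1070
```

Updating the rule then means cutting a new gem release and bumping it in each app, rather than hunting down copies.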

Aside from the probably familiar technique of using HTTP API endpoints, you could also implement a messaging system to communicate between your applications. If this approach interests you, check out the Ruby amqp gem.

Good for more than just monolithic applications

I introduced service-oriented design as a helpful option to manage monolithic applications, but there are many other reasons to move toward service-oriented design. Maybe one part of your app would make more sense in a functional language or maybe it needs to be super fast and should be implemented in a faster low-level language. With service-oriented design you aren’t limited to the same tool to solve all of your site’s problems.

Do you have experience with service-oriented design? I would love to hear what you’ve liked or not liked about it and if you thought it was a good step for your application.


Books every Ruby on Rails Developer should Read

Last week I presented a talk at our local Ruby Users Group on Books every Ruby on Rails Developer should Read. During this talk, I reviewed some of the books that I recommend to new RoR developers.

I put this list together based on my own experience learning the Ruby language and the Rails framework, combined with some great feedback I received on Twitter.


At the end of the presentation, one of the first questions that came up was:

Why didn’t you include the Pickaxe book?

The short answer is that I don’t think Pickaxe is a good book for someone just getting started with Ruby, especially if their goal is to quickly get started with Rails.

Instead, I recommend going through The Ruby Programming Language, which is half the size of Pickaxe and teaches enough of Ruby to understand most Rails applications.

That being said, the Pickaxe book is excellent. It was the first Ruby book written in English, and I do recommend going through it if you are already familiar with Ruby or if you are not specifically planning on writing Ruby web applications.

I’ve personally read most of the books on the list, except for Eloquent Ruby. This one was recommended to me by most of the people I talked to, and after reading a review on Ruby Inside, I figured it was relevant.

Check out the slides for the complete list of recommended books.

Do you have a favorite Ruby on Rails book that you would consider essential for a Ruby on Rails developer? Let us know in the comments!

- Carlos Souza (@caike)


The Future of Online Learning

A few months ago I was invited to speak in Krakow, Poland at an event called Railsberry.  There I presented on the Future of Online Learning (as I see it), and they were gracious enough to record it.  Enjoy!

Gregg Pollack - Future of online learning - Railsberry 2013 from Railsberry on Vimeo.

Some of the sites I mention in the video include:

Do you know of any online educational websites that others should be aware of?  Perhaps using Instructional Strategies in innovative ways?

I’d love to hear about them.
