Issue with Rails's restrict_with_exception and counter_cache

When building Rails applications, a lot of focus goes into optimisation. One of those optimisations is the Rails counter_cache column. The reasoning behind it is simple: you have a relation, e.g. Post to Comments, and you want to display the comment count for each post. One option is to always count the comments for each post with something like @post.comments.count. Of course, this is a really expensive operation, and if you are displaying more than one post on the page, it causes an N+1 issue. Going to the database just to count how many rows a relation has isn’t really the best idea anyway.

Luckily, Rails has its own optimisation for this, called counter_cache. It is very simple to enable in the model, and Rails manages all of the magic for you behind the scenes. Below is an example with Post and the cached comment count. The only prerequisite is adding a comments_count integer column with a default value of 0.

# in a migration
def change
  add_column :posts, :comments_count, :integer, default: 0
end

class Post < ActiveRecord::Base
  has_many :comments
end

class Comment < ActiveRecord::Base
  belongs_to :post, counter_cache: true
end

This way, via the Rails Magic™, the count column gets incremented whenever a new comment is posted, and decremented when a comment is destroyed (if you support such a thing). Now you don’t have to count all of the comments when showing the number next to the post, which is a pretty big performance boost (relatively speaking).
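To make the mechanism concrete, here is a toy, framework-free Ruby sketch of roughly what counter_cache does behind the scenes (the class and method names are made up for illustration):

```ruby
# A post keeps a pre-computed comments_count that is bumped whenever a
# comment is added or removed, so reading the count never scans the
# comments collection.
class ToyPost
  attr_reader :comments, :comments_count

  def initialize
    @comments = []
    @comments_count = 0 # the cached column, default 0
  end

  def add_comment(body)
    @comments << body
    @comments_count += 1 # Rails increments the column on create
  end

  def remove_comment(body)
    @comments.delete(body)
    @comments_count -= 1 # ...and decrements it on destroy
  end
end

post = ToyPost.new
post.add_comment('First!')
post.add_comment('Nice post')
post.remove_comment('First!')
post.comments_count # => 1, read without counting the collection
```

The real thing happens inside ActiveRecord callbacks and SQL UPDATEs, but the invariant is the same: the cached column mirrors the number of children.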

That is the first part of the equation; as the title clearly states, this post also has something to do with restrict_with_exception. That is the dependency management option which prevents you from destroying a record that has child records, which would orphan those records and create database inconsistencies. In our case, a post with comments.

We have to modify the Post class to accomplish this:

class Post < ActiveRecord::Base
  has_many :comments, dependent: :restrict_with_exception
end

Now every time you call destroy on a Post that has comments, Rails won’t let you: it raises an exception, which is the desired behaviour.
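In the console, the restriction looks something like this (the exception class comes from ActiveRecord’s :restrict_with_exception handling; the post id is just an example):

```ruby
post = Post.find(42) # a post that still has comments
post.destroy
# raises ActiveRecord::DeleteRestrictionError,
# telling you the record cannot be deleted because of dependent comments
```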

Now, we sometimes avoid Rails and use plain SQL when the situation requires it. Let’s say we are moving comments from one post to another. It can be done with:

@post.comments.each do |comment|
  comment.update(post: @new_post) # instantiates and saves each comment one by one
end

But for a post with a lot of comments this operation can be too slow. Moving the comments with a single SQL update is a much faster way to accomplish the same thing, but not without its own pitfalls. @post.comments.update_all(post_id: @new_post.id) will reassign the comments to the new post, and you will probably want to refresh the counter_cache on @new_post, so you have to do something like Post.reset_counters(@new_post.id, :comments).
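Putting the faster path together, the whole move looks something like this (assuming @new_post is the destination post; this is a sketch of the steps just described):

```ruby
# Move every comment with a single UPDATE, then rebuild the cached
# counts on both posts from the actual row counts.
@post.comments.update_all(post_id: @new_post.id)
Post.reset_counters(@new_post.id, :comments)
Post.reset_counters(@post.id, :comments) # the step that is easy to forget
```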

Now, if you forget to do the same thing on the old post and then try to destroy it, you will find yourself chasing ghosts. Rails treats the counter cache column as the authority and consults it before even checking whether associated records exist. This is a neat performance hack, and if you went only through Rails and never through raw SQL, this situation wouldn’t happen at all.

Sometimes we can’t use, or don’t need, Rails callbacks and validations just to transfer ownership from one model to another. Then weird things start to happen, you don’t know why, and you lose a day debugging what should be a simple thing.

Dynamic components in Ember.js

When building Ember.js apps you will try to avoid code duplication as much as possible. One of the ways of doing that is using Ember components. Components are small, reusable pieces of code that share common functionality. For example, you can have a component that shows a user’s Twitter feed and receives the Twitter handle as a parameter.

Components are the new hotness and the way to go with Ember.js 2.0, and similar components are also being introduced on the web platform itself. Ember components are built to closely adhere to the W3C Web Components specification, so they should stay compatible as component adoption increases.

Sometimes you want to render a different component in the same place, for example a user’s image and contact info, which can be different for a person and an organisation. So to avoid ugly conditionals in handlebars code there is an option to show components dynamically with the convenient component helper.

You won’t avoid having those conditionals of course, but you won’t have to put them in your handlebars templates, which should be as logic-less as possible. They can reside at the model level, as a computed property. I will simplify this even further and use an example where we will receive the value from our API.

We will be using a simple polymorphic relationship in our API and values returned from ownerType will be “user” and “company” for this example.

Of course we want to show different data for the user and the company, but because we are showing them in the same place, we would have to use a conditional in handlebars, and our model (or controller) would have to return an isUser boolean. This approach will make future code maintenance a nightmare.

// app/models/owner.js
import DS from 'ember-data';
import Ember from 'ember';

export default DS.Model.extend({
  name: DS.attr('string'),
  ownerType: DS.attr('string'),
  // ...
  isUser: Ember.computed.equal('ownerType', 'User'),
  // ...
});

{{#if model.isUser}}
  {{owner/user-widget owner=model}}
{{else}}
  {{owner/company-widget owner=model}}
{{/if}}

As you can see, this conditional isn’t really the best one imaginable. If we add another ownerType possibility, we will have to add an additional component (which should be the only thing we have to do), add another boolean to the Owner model, and make the conditional above even more complex than it already is. Probably no one wants to be remembered for code like this:

{{#if model.isUser}}
  {{owner/user-widget owner=model}}
{{else}}
  {{#if model.isOrganisation}}
    {{owner/organisation-widget owner=model}}
  {{else}}
    {{owner/company-widget owner=model}}
  {{/if}}
{{/if}}

There clearly must be a better and cleaner approach. Ember’s component helper comes to the rescue, and with some clean coding we can avoid all of the conditionals mentioned here.

// app/models/owner.js
// ...
  widgetComponentName: Ember.computed('ownerType', function() {
    return `owner/${this.get('ownerType').toLowerCase()}-widget`;
  }),
// ...

This way, the model returns a convenient and usable string when we read the widgetComponentName property. Now we can use this to have the cleanest possible handlebars template:

{{component model.widgetComponentName owner=model}}

The component helper will resolve the correct component and render it. If a new owner type pops up, the only thing we need to do is add the widget component for that owner, which is a great example of separation of concerns, and it reduces code churn a lot. To reduce code duplication and complexity further, you can also use inheritance to combine similar functionality under one object and extend it with specific code for each type. I’ve written a blog post about it called Simple Inheritance with Ember.js that you should definitely check out.

Send PDF attachments from Rails with WickedPdf and ActionMailer

In almost any web application you create, the question of generating PDF files pops up pretty soon. There are a couple of options when using Ruby on Rails. The first is Prawn, which is a pure PDF generator library; the other options are mostly wrappers around the very popular wkhtmltopdf unix library, which converts HTML pages to PDF. There is also an option that uses PrinceXML, but I would exclude it up front because it is pretty pricey for an SME.

While using Prawn gives you all of the formatting power when creating PDF files, its learning curve is pretty steep, and the option of converting HTML to PDF seems pretty good. This is especially the case when you already have HTML templates that need only small modifications to be converted to PDF.

There are a couple of gems that wrap the wkhtmltopdf library, and the ruby-toolbox PDF generation section shows some more viable options for generating PDF files from Ruby.

I’ll be using WickedPdf in this example, as it is the one I have set this system up with a couple of times already, so I’m pretty confident it will work in any other Rails application.


As always, using a Ruby gem in Rails is pretty simple, you just add a couple of lines to the Gemfile:

gem 'wicked_pdf'
# we need the new binary here, so that we can be OS independent
gem 'wkhtmltopdf-binary', github: 'pallymore/wkhtmltopdf-binary-edge', tag: 'v0.12.2'


This setup works straightforwardly in the controllers, because WickedPdf registers a :pdf request format, and you can respond to it in the same fashion as html or js in a respond_to block. The code below is copied from the WickedPdf README page.

class ThingsController < ApplicationController
  def show
    respond_to do |format|
      format.pdf do
        render pdf: "file_name" # Excluding ".pdf" extension.
      end
    end
  end
end

That part was pretty easy, and you can configure the PDF render with a lot of options, which are described in the advanced configuration section of the WickedPdf README.

Generating PDF from ActionMailer

The issue when trying to generate a PDF from ActionMailer is that there is no request, so you have to render the HTML in a different way. In a nutshell, you want to render the template to a string, and then hand that string (HTML) to wkhtmltopdf. This is the same way the gem works when rendering a PDF from the controller, but it is worth mentioning once more.

We will use a method from AbstractController::Rendering called render_to_string, so we can do something like this in the mailer:

class TodoMailer < ActionMailer::Base
  def pdf_attachment_method(todo_id)
    todo = Todo.find(todo_id)
    attachments["todo_#{todo.id}.pdf"] = WickedPdf.new.pdf_from_string(
      render_to_string(template: 'todo.pdf.erb', layout: 'pdf.html')
      # extra wkhtmltopdf options can be passed as a second hash argument
    )
    # assuming the Todo belongs_to a user we can address the mail to
    mail(to: todo.user.email, subject: 'Your todo PDF is attached')
  end
end

By using this approach, we can easily send a PDF attachment from an ActionMailer method. You can do this for any Rails template you want, but be sure to test everything before putting it into production.

Simple inheritance with Ember.js

When doing Object-oriented programming, we are always seeking ways to reduce the duplication in the code we write. Reusing existing code is something that every programmer wants to do most of the time. Because, we have to admit it, we are pretty lazy. But that laziness brings us great stuff like frameworks we use in our everyday jobs.

With Ember.js and ember-cli there is, of course, an easy way to remove code duplication. Ember.js has a nice way to extend every existing class; you are already programming Ember apps that way, by extending Ember.Route, Ember.Component, Ember.Controller and a lot of other classes in the framework.

Realising this will help us on our way to maximum code reuse. Let’s say we have a bird component:

// app/components/my-bird.js
import Ember from 'ember';

export default Ember.Component.extend({
  name: 'Birdie McBird',
  actions: {
    fly: function() {
      alert("I'm flying, la la");
    },
    speak: function() {
      alert('Tweet tweet!');
    }
  }
});
And we want to implement some of the same actions in the duck component, because all ducks can fly, but they just sound different, so we would do it like this:

// app/components/my-duck.js
import MyBirdComponent from 'my-bird';

export default MyBirdComponent.extend({
  name: 'Ducky McQuack',
  actions: {
    speak: function() {
      alert('Quack quack!');
    },
    waddle: function() {
      alert('Waddle waddle!');
    }
  }
});

Of course, we have to define our templates, and link the actions inside them, so here it goes:

<!-- app/templates/components/my-bird.hbs -->
<h2>Hello, my name is {{name}}!</h2>
<button {{action 'speak'}}>Make a sound</button>
<button {{action 'fly'}}>Fly birdie fly</button>

<!-- app/templates/components/my-duck.hbs -->
<h2>Hello, my name is {{name}}!</h2>
<button {{action 'speak'}}>Make a sound</button>
<button {{action 'fly'}}>Fly birdie fly</button>
<button {{action 'waddle'}}>Make a move, duck</button>

<!-- app/templates/application.hbs -->
{{my-bird}}
{{my-duck}}

In Ember 2.0 you will be able to reference components with the angle bracket notation, just like any other regular html tag. I’ll write about those new features sometime in the future.

If you run the example, you will see that we are reusing the fly action and redefining everything else, which is great. This is a really simplified example, done with a component, but you can do the same thing with controllers, models, even routes.

I’ve made a small demo on JSFiddle so you can try it for yourself. It is, of course, a globals-based Ember app, but it follows the same principle described here. And because ember-cli is the default way of developing Ember.js apps nowadays, I’ll keep my Ember posts using it.

Notify an Ember.js app that a deployment has happened

Refreshing a regular web application after a deployment is pretty straightforward: you click on a link, the request goes to the server, and the server renders a page. If a deployment has happened in the meantime, the server renders the current (new) version of the page. Ember.js is a nice example of how this old approach no longer works. You get the whole application when you first request the page, and then everything happens in the browser. The only thing that travels between server and client is JSON being passed back and forth.

Luckily, we are already using Sam Saffron’s awesome message_bus gem to notify the front-end when an update has happened in a model we are working on, so I decided to implement deployment notification using a similar approach.

First, we want the Ember app to know that a deployment has happened, so registering MessageBus and subscribing to a channel is the first thing we will do. We chose Ember’s ApplicationController for this because it is always loaded, regardless of the route. We are using the ember-cli-growl component to provide growl-like notifications, so you might need to adapt the code to suit your needs.

/* global MessageBus */
import Ember from 'ember';

export default Ember.Controller.extend({
  initMessageBus: function() {
    MessageBus.callbackInterval = 50;
    this.subscribe();
  },

  subscribe: function() {
    var channel = '/deployment';
    var controller = this;
    MessageBus.subscribe(channel, function() {
      // growl is provided by ember-cli-growl
      controller.growl.info('A new application version has been deployed, please reload the browser page.', { clickToDismiss: true });
    });
  },

  init: function() {
    this._super.apply(this, arguments);
    this.initMessageBus();
  }
});

In Rails, include the message_bus gem in the Gemfile, run bundle install and open the console. Now you can try running MessageBus.publish('/deployment', 'foo') and you should see a notification pop up in your Ember app. OK, but we need to be able to call it somehow, and what better way to do it than with Rake.

desc "notifies Ember that a deploy has happened"
task notify_ember: :environment do
  MessageBus.publish('/deployment', '')
end

Now you only have to call that task on the server when your deployment has finished, and everything will be fine and dandy. There are a lot of ways to do this, depending on your deployment strategy and infrastructure. We are using Capistrano, so a nice after hook, running after the app server has restarted, did the job quite well. If you are having trouble setting it up, feel free to ping me via the contact form.
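For reference, a Capistrano 3 hook for this could look roughly like the sketch below; the 'deploy:published' hook point and the role name are assumptions, so adapt them to your own deploy flow:

```ruby
# config/deploy.rb
after 'deploy:published', :notify_ember do
  on roles(:app) do
    within current_path do
      with rails_env: fetch(:rails_env) do
        execute :rake, 'notify_ember'
      end
    end
  end
end
```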

Speed up Rails on Heroku by using Rack::Zippy

I really like Heroku for fast deployments and everything they offer for small Ruby on Rails or node.js apps. It’s so easy to deploy a web app there, and it’s pretty much free unless you are doing some serious stuff, by which point you should already be paying either Heroku or running your own dedicated solution.

There is one thing that I personally dislike about Heroku, and that is their asset serving. As we all sit behind desktop (or laptop) computers with pretty good network connections, we don’t see how slow the app is for someone on a 3G mobile network, or something even slower. And in a well-written app, the main culprit for large data transfers is the assets.

We have a lot of JavaScript, CSS and images in our apps, because we want to make them beautiful, but that takes its toll on network bandwidth. Luckily, most browsers support compressed assets (JS and CSS) and know how to decompress them. When you build your own server, there is a very high probability that assets won’t be served through Rails at all: you will use NGINX or Apache httpd to act as a reverse proxy and also serve your assets for you. The web server is smart enough to serve compressed assets to browsers that support them, and uncompressed ones to the rest.

Rails has a pretty good ActionDispatch::Static rack middleware that does all the work for us in development, or on Heroku after changing a config flag or using their rails_12factor gem in production. To get it running you have to change the static flag inside config/environments/production.rb:

 # rails 4.1 and below
 config.serve_static_assets = true
 # rails 4.2 and above
 config.serve_static_files = true

For those on a budget, not able to afford serving assets from Amazon S3 or another CDN (I will update here when my post on how to do that is finished), there is the option of serving the compressed assets from Rails. They are already generated by Heroku on deploy, but just aren’t being served, which considerably increases your initial page load time, and every additional one if the assets aren’t being properly cached. You can solve this pretty big issue by including one gem that will serve compressed assets for you. It’s called Rack::Zippy and it’s really easy to configure. You have to add it to the Gemfile first:

gem 'rack-zippy'

Run the bundle command and create the initializer for Rack::Zippy in config/initializers/rack_zippy.rb like this:

Rails.application.config.middleware.swap(ActionDispatch::Static, Rack::Zippy::AssetServer)

This will use Rack::Zippy to serve gzipped assets instead of uncompressed ones and really reduce your initial load time, especially on slow connections. It also adds pretty sane cache expiration headers, which can further help with browser caching.

A not so funny orphaning issue with Rails's has_one relationships

If you are using ActiveRecord and keeping your models pretty thin and properly normalised, you will surely reach for the very nice has_one relationship. It’s basically a one-to-many relationship where Rails thinks for you and keeps only one child record available for use. It’s pretty easy to set up, given two models, let’s say Post and Post::Metadata:

class Post < ActiveRecord::Base
  has_one :post_metadata
end

class Post::Metadata < ActiveRecord::Base
  belongs_to :post
end

OK, this part is pretty straightforward: you create your records either with Post::Metadata.create(post_id: 42, other_attrs) or with Post.find(42).create_post_metadata(attrs). If there already is a has_one child record and you create a new one, the old one will be orphaned. That is the default behaviour, and you should think about it before you start messing around. It can also be the desired behaviour: maybe you are storing some photo URLs that aren’t used anymore, and have a callback on them to destroy the record after you remove the file from the third-party storage.

There is one tiny kink that cost me a few hours of my life today: although you normally orphan the old record by creating a new child record in a has_one relationship, the mere action of building the association (just instantiating it, not creating) is enough to orphan the previous record, and this is not rolled back if the newly built record is destroyed.
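To make the kink concrete, here is a sketch of the sequence that bit me, using the models from above; the comments describe the behaviour I observed, not documented API guarantees:

```ruby
post = Post.find(42)
post.post_metadata       # the existing, persisted child record
post.build_post_metadata # only builds an in-memory replacement...
# ...yet the previously persisted Post::Metadata row ends up orphaned
# (its post_id is cleared), and that is not rolled back even if the
# newly built record is destroyed without ever being saved.
```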

Automated testing will add value to your software project

We have read enough about TDD and its demise in the last year. Since David published his post about how TDD is dead, there have been a couple of flame wars concerning TDD and testing in general.

I believe that TDD is a good thing, but I don’t always practice it, as sometimes you don’t have the time. I know, some of you reading this will say that you must make time for TDD and that TDD is the only way. Maybe you are correct, but in the startup world there is rarely any time for testing at all.

With deadlines and churning out new features each week, one can’t make the time to do proper TDD. And sometimes it seems that TDD is a relic from the past, the really distant past. There is a nice report called Why Most Unit Testing is Waste that sums it up fairly well.

However, I believe in automated testing, at least having a full integration suite following the application’s happy path and any edge case you find later on. I’m not against unit testing either, where it makes sense. Payment processing code? Of course you will test it. Some code deciding whether a label class is blue or red? Well, you can probably skip that test if you have no time to write it. Rails controller tests are a great example of procrastination in tests: I don’t have anything smart to work on, so let’s write a couple of useless controller tests.

Unit testing external libraries is another thing: they should be properly tested, to ensure that their API behaves as it claims. Especially if your library is public, then you have to test it.

Having a thorough test suite increases the application’s value and decreases its breakability, because any subsequent change you make to an untested application is like walking through a minefield: you never know what will break.

I would have liked to learn this lesson the easy way, by listening to other people’s stories of how, when some of their code wasn’t tested and they had to change just one little thing, something completely unrelated broke. That wasn’t the case: I learned it the hard way. With a really bad client who constantly changed their mind about features (another red flag), we were implementing a lot of stuff and changing it on a daily basis. Having no tests meant expecting something else to break after every deploy, because you just don’t know what can go wrong.

After I got burned by that, I started writing tests, trying to do TDD, but at least covering the process with integration tests as I went along. And it helped a lot; the sheer confidence when deploying that nothing will go wrong is really enough. The client is also better off in the long run, because there is no chance the code breaks without anyone noticing.

Lesson learned: don’t obsess over TDD or the proper way to test, but test the code as well as you can; have an integration/acceptance suite that you run before deploying, and try to cover as much of the app as possible with it. Don’t overdo it, and don’t test the language or framework you are using. Shaving yaks is really fun sometimes, but don’t do it on a production application, because someone will read it later on and think that you have to test every little thing.

Automating personal and business workflows

In an average month, there is a lot of stuff you will have to do. Stuff like paying the bills, taking care of inventory for home/company, budgeting, and your day to day work.

The best thing I have found is to automate as much work as possible. I’m not only talking about programming something to do the work for me, but about writing down all the processes I use in my work and life.

By writing everything down, you are able to generate checklists for every process. Although this sounds pretty boring and lame, the alternative is panicking when you’ve forgotten to take your health insurance card on vacation. Or your driver’s licence, for example. Or the kids (OK, I sincerely hope that one only happened in that movie) :).

Having a printed-out checklist does wonders when I have to set up a new server environment from scratch; although everything is scripted, there is always manual work involved. Checking each box gives you confidence that everything is working as it should. If a process fails, you are able to retrace the steps, figure out what went wrong, and fix the step, or add a few in between.

When you have documented everything, it’s a great time to hire someone to do the menial work for you. You don’t have to waste your precious time paying the bills, ordering water, or doing anything except the one thing you do best, whatever that is. That thing got you to where you are now, but most of us have businesses to run, and that also takes its toll.

Tools don’t matter here: start with pen and paper, then see if anything else fits better. I use Markdown and iA Writer Pro to write my processes down, but lately I’m experimenting with something bigger, as I want to focus on writing more.

Don't obsess with analytics too much

If you are only an occasional writer, as I surely am, you don’t have to look at the analytics screen all the time. It’s a really nice thing to look at the real time data when you publish a new post, and advertise it on Twitter and other social networks.

But that is mostly a waste of time; real-time data doesn’t mean anything real. Aside from satisfying your own weird obsession, you are not accomplishing anything that will help you with your goals in life.

The way I fight it (when I eventually write and publish something) is scheduling the post to be published in the future. When I set and forget, the urge to look at real-time analytics data vanishes, because if I’m not personally involved in pressing the Publish button, I’ll most probably forget that it’s being published at that moment, and work on something else.

As the best time to publish depends on many other factors, I tend to publish when I’m not even sitting in front of the computer. I’m trying to set a dedicated time to write each day, and it will take me some time to accomplish that. But your publishing time should always be the same.

I’m not advocating complete analytics denial, because analytics is one of the best investigation tools you have in your work. You can see which posts people are most interested in and focus on those topics, instead of the ones not being read much.

In a business setting, analytics and lead tracking are really crucial, but real-time data won’t help you with anything. Once someone is visiting the site, it’s too late to change anything, so relax, don’t fuss about it, and have fun.

Spring is coming; go outside and catch some sun, it’s good for you. And it’s much better than constantly watching who is on your website right now.