Why rails sucks

Edit 2016-04-08: Lessons learned after 48 hours

So I got more feedback on this than I anticipated, mainly via Reddit.
Some of it came in the predictable “you’re ugly” form, but there were a lot of genuinely good suggestions on how to avoid or work around some of the pain points I describe in this post.

I think the main thing was that several people pointed out that it’s completely possible to have your models as POROs (plain old Ruby objects), and then use ActiveRecord (or alternatively, something like Sequel) for data access only.

To be perfectly honest, I’m not 100% sure what that would look like, and whether it would be as convenient as I’d like, but it would definitely be an improvement over sticking everything in an ActiveRecord class.

This solution would also greatly help with the issue I have with unit testing, enabling me to decouple those from the database, and definitely cut down on their runtime.
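To make the suggestion above concrete, here’s a minimal sketch of the PORO-plus-repository split. All the names are hypothetical, and the repository uses an in-memory Hash as a stand-in for whatever data-access layer you’d actually pick (ActiveRecord, Sequel, …):

```ruby
# A PORO holding business logic only -- no persistence concerns.
class Employee
  attr_reader :name, :salary

  def initialize(name:, salary:)
    @name = name
    @salary = salary
  end

  # Pure business logic.
  def give_raise(amount)
    Employee.new(name: name, salary: salary + amount)
  end
end

# The repository owns all storage concerns. Here it's a Hash;
# in a real app this is where ActiveRecord or Sequel would live.
class EmployeeRepository
  def initialize
    @store = {}
    @next_id = 0
  end

  def save(employee)
    @next_id += 1
    @store[@next_id] = employee
    @next_id
  end

  def find(id)
    @store.fetch(id)
  end
end

repo = EmployeeRepository.new
id = repo.save(Employee.new(name: "Ada", salary: 100_000))
puts repo.find(id).give_raise(5_000).salary  # => 105000
```

The point is that `Employee` can now be instantiated and tested without a database anywhere in sight, while the repository can be swapped out wholesale.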

In conclusion – I wouldn’t say that Rails is terrible / should not be used.
I would definitely say that it has some big traps which are harder to avoid than I’d like.

If I had to re-write this post today, I’d definitely change its title to “Some of Rails’s biggest gotchas”, and I wouldn’t say that you should categorically not use it.
You just need to use it carefully.

Thanks for reading, and for teaching me quite a few new things.

Here’s the original post:


Lately I’ve had the chance to work on a large-ish server application written in RoR. I went from “Wow look at all these cool conventions, I know exactly where everything needs to go!” to “err.. how the fuck do I scale this?” and “This is not where this should go!” in 8 weeks. And this is (partly) why:

(* Yes, this post’s title is a total clickbait. I don’t actually hate Rails; I just think it promotes a lot of bad practices.)

ActiveRecord sucks

I’ve never personally used the ActiveRecord pattern in previous projects; I always felt this would mix up domain concerns with persistence concerns, and create a bit of a mess.
Well guess what, I was right.

In the specific project I was working on, the code for domain classes would typically consist of 70% business logic, and 30% stuff to do with DB access (scopes, usually, as well as querying / fetching strategies).
That in itself is a pretty big warning sign that one class is doing too much.

The arguments as to why ActiveRecord in general is a bad idea are well documented; I’ll briefly recap here:

  1. It breaks the Single Responsibility Principle

    A model class’s responsibility is to encapsulate business rules and logic.
    It’s not responsible for communicating with data storage.

    As I mentioned before, a considerable amount of code in our project’s domain classes was dedicated to things like querying, which are not business logic.
    This causes domain classes to be bloated, and hard to decouple.

  2. It breaks the Interface Segregation Principle

    Have you ever, while debugging, listed the methods of one of your domain objects? Were you able to find the ones that you defined yourself? I wasn’t.
    Because they’re buried somewhere underneath endless ActiveRecord::Base methods such as find_by, save, save!, save?, save!!!, and save?!.

    Well, I made up a few there, but ActiveRecord::Base instances have over 100 methods, most of them public.

    ISP tells us that we should aspire to have small, easy-to-understand interfaces for our classes. Dozens of public methods on my classes is another indication that they’re doing waaaay too much.

  3. Its database abstraction is leaky

    Abstracting away the database is notoriously hard, and I believe that ActiveRecord doesn’t do a particularly good job of it.

    As noted before, ActiveRecord::Base pollutes your public interface with a plethora of storage-related methods. This makes it very easy for a developer to make the mistake of using one of these very storage-specific methods (e.g. column_for_attribute) inside a controller action, for example.

    Even using reload or update_attribute indicates that the calling code knows a little too much about the underlying persistence layer.
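The method flooding described in point 2 is easy to reproduce with a few lines of plain Ruby. NoisyBase and its generated methods are hypothetical stand-ins for ActiveRecord::Base, not its actual internals:

```ruby
# A base class that defines 100+ methods, the way ActiveRecord::Base does,
# buries the handful of methods you wrote yourself.
class NoisyBase
  100.times do |i|
    define_method("generated_method_#{i}") { i }
  end
end

class Employee < NoisyBase
  def full_name
    "Jane Doe"
  end
end

# The methods we actually defined ourselves:
puts Employee.instance_methods(false).inspect  # => [:full_name]
# What a debugger's method listing actually shows us:
puts Employee.instance_methods.size            # well over 100
```

`instance_methods(false)` is the trick for digging your own methods back out, but the ISP point stands: the class’s advertised surface is enormous.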

“Unit” testing

If there was one thing I knew about Rails before having written a single line of ruby code, it was that everything in Rails is unit-tested. Rails promotes good testing practices. Hooray!

So, obviously, one of the first things I read about concerning RoR development, was how to test:

Testing support was woven into the Rails fabric from the beginning.

Right on! Finally, somebody gets it right!

Just about every Rails application interacts heavily with a database and, as a result, your tests will need a database to interact with as well. To write efficient tests, you’ll need to understand how to set up this database and populate it with sample data.

Say what?
I must have misread this.. let me check again..

your tests will need a database

Yup. I need a bloody database to test my business logic.

So why is this so bad?

The meaning and intent behind unit tests is to test single units of code.
That means that if the test fails, there can only be one reason for it: the unit under test is broken.
This is why you fake everything external that the unit under test interacts with; you don’t want a bug in a dependency to cause your current unit test to fail.

For example, when testing the Payroll class’s total_salaries method, you use fake Employee objects, with a fake salary property, which would return a predefined value.
That way, if we get a wrong total_salaries value, we’ll know for sure that the problem lies within the Payroll class, and nowhere else.
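The fake-based test described above fits in a dozen lines of plain Ruby, with no test framework and no database. Payroll and Employee#salary are the names from the example; everything else is hypothetical:

```ruby
# The unit under test: sums the salaries of its employees.
Payroll = Struct.new(:employees) do
  def total_salaries
    employees.sum(&:salary)
  end
end

# A fake employee: all it does is answer #salary with a canned value.
FakeEmployee = Struct.new(:salary)

payroll = Payroll.new([FakeEmployee.new(100), FakeEmployee.new(200)])
raise "expected 300" unless payroll.total_salaries == 300
puts "total_salaries OK"
```

If this test fails, the only possible culprit is `Payroll#total_salaries` — no schema, no server, no child objects.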

But, with Rails testing, you’re not encouraged to fake anything.
So if the total_salaries test fails, it can be because Employee is broken, or my database schema is wrong, or my database server is down, or even something as obscure as a child object of Employee missing a required attribute, so it can’t be persisted and an error is thrown.

This is not how a unit test is supposed to work.

Not only does Rails encourage you to write non-unit unit tests, it also makes it nearly impossible, and quite dangerous, to work around this and write proper unit tests.
(* Note that if you use hooks, such as before_update etc., it becomes even more horrible)

Apart from making it harder to determine what went wrong when a test fails, this complete abomination of a testing strategy caused me some more hair-tearing moments:

  1. Our unit test suite took 18 minutes to run. Even when using a local, in-memory database.
  2. My tests failed because the database wasn’t initialized properly.
  3. My tests failed because the database wasn’t cleaned properly by previous tests (WTF?).
  4. My tests failed because a mail template was defined incorrectly.
    Since sending an email was invoked in a before_create callback, failing to send an email caused the callback to fail, which caused create to fail, which meant that the record was not persisted, which meant that my test was fucked.
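The failure chain in item 4 can be simulated in plain Ruby. MiniRecord below is a hypothetical toy, not ActiveRecord, but it reproduces the mechanics: any error in a before_create hook aborts the create, so the record never persists:

```ruby
class MiniRecord
  # Register a hook to run before create (class-level, like ActiveRecord's DSL).
  def self.before_create(&block)
    @hooks ||= []
    @hooks << block
  end

  def self.hooks
    @hooks || []
  end

  def create
    self.class.hooks.each { |h| h.call(self) }  # a failing hook raises here...
    @persisted = true                           # ...so we never reach this line
    true
  rescue StandardError => e
    warn "create failed: #{e.message}"
    false
  end

  def persisted?
    !!@persisted
  end
end

class Signup < MiniRecord
  # An unrelated side effect wired in as a hook -- like the broken
  # mail template from the story above.
  before_create { raise "mail template is broken" }
end

record = Signup.new
puts record.create     # => false
puts record.persisted? # => false
```

The record itself was perfectly valid; it failed to persist purely because of a side effect it had no business depending on.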

Too much magic

Magic is something that is inherent to any framework; by definition, if it does some sort of heavy lifting for you, some of it is going to be hidden from your view.

That’s doubly true for a framework which prefers “convention over configuration”,
meaning: if you name your class / method the correct way, it’s going to be “magically” wired up for you.

This kind of magic is fine and acceptable. The magic that I have a problem with is Rails’s extensive use of hooks (a.k.a. callbacks), be it in the controller (before / after action) or in the model (before / after create / update / delete…).

Using callbacks immediately makes your code harder to read:
With regular methods, it’s easy to determine when your code is being executed – you can see the method being called right there in your code.
With callbacks, it’s not obvious when your code is being invoked, and by whom.

I’ve had several instances of scratching my head, trying to figure out why a certain instance variable was initialized for one controller action and not for another, only to track it down to a problem with a before_action callback of a parent class.
The fact that ActiveRecord callbacks can’t be turned off is also a pain in the ass when testing, as I described previously.

Additionally, callbacks are, of course, very hard to test, since their logic is so tightly coupled to other things in the model, and you can’t trigger them in isolation, but only as a side effect of another action.
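One common way out, sketched here with hypothetical names, is to pull the callback body into a plain object that is invoked explicitly, so the logic can be seen at the call site and tested in isolation:

```ruby
# The side-effect logic, extracted from a hypothetical before_create
# callback into its own object.
class WelcomeMailer
  def self.deliver_to(user)
    # imagine actual mail delivery here
    "welcome sent to #{user.email}"
  end
end

class UserSignup
  def initialize(mailer: WelcomeMailer)
    @mailer = mailer
  end

  # The side effect is now an explicit, visible method call -- not a hook.
  def call(user)
    # persist the user here, then:
    @mailer.deliver_to(user)
  end
end

FakeUser = Struct.new(:email)
puts UserSignup.new.call(FakeUser.new("a@example.com"))
# => welcome sent to a@example.com
```

Because the mailer is injected, a test can pass in a fake and assert on the interaction, and `WelcomeMailer` can be tested on its own without creating any records.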

This is the reason why some Rubyists recognize that callbacks are problematic at best, if not to be avoided outright, while others prefer to implement Node’s Express in Ruby rather than use Rails controllers.


I like the idea behind Rails. Convention over configuration is great, and I also totally subscribe to the notion that application developers should write more business-specific code, and less infrastructure code without any business value.

The problem with Rails isn’t that it’s an opinionated framework.
The problem is that its opinions are wrong:

  • Tying you down to a single, err, “controversial” persistence mechanism is wrong.
  • Making it impossible to do proper unit testing is wrong.
  • Encouraging you to do things as side-effects rather than explicitly is wrong.

When it first launched, Rails was revolutionary: it was the first to offer such comprehensive guidelines and support for creating your application in a standard way.
However, it seems that today our standards of what is ‘correct’ or ‘recommended’ have changed, while Rails has stubbornly remained where it was 10 years ago.

You can still create a good application using Rails.
It’s just that it doesn’t allow you to create a great application.

Hey, I’ve got an opinion about the NPM / kik debacle too!


Learning about this incident and its consequences, as well as the problems it highlights, really got me thinking about NPM, OSS and how I take certain things for granted.
So I thought I’d add my own worthless 2 cents to the discussion, and give my opinion on some of the issues raised.

Firstly, let me just congratulate some quick-thinking bloggers and thinkers who, in these difficult circumstances, under pressure, found a way to attach the suffix ‘-gate’ to this controversy. Well done guys.

If you don’t know what I’m talking about: a guy called Azer rage-quit NPM, because a company called kik seized the package name ‘kik’, which he was using. The removal of one of his packages broke the installs of countless dependent packages, sending NPM Inc. into a panic.

Let’s start with some thoughts on the human aspect

Azer’s language in his correspondence, as well as his rage-quitting in response to NPM’s actions, was quite childish.

Personally, I would have recognized that kik (the company) has some legit claim to that name, and at least tried to negotiate some agreement with them.
Unless kik was your late grandmother’s nickname, and you’re very personally attached to it, there’s no reason why you wouldn’t be willing to part with it, for a reasonable compensation.

However, we mustn’t forget that this guy wrote and maintained these packages free of charge. That means he doesn’t really owe anyone anything;
he could’ve decided to unpublish his stuff because he disapproves of NPM’s CEO’s new hairstyle, and still be within his rights.

His motives here are of the least importance. He just serves as an example of the bigger issues within NPM.

Yes, kik have been, as Azer put it, dicks

You can’t say “We’re sorry for creating any impression that this was anything more than a polite request” when you’re repeatedly threatening lawsuits.
Also, that open source package wasn’t in any way competing with, or pretending to be, kik messenger. And in the end, they didn’t even take the name! What dicks.

From a software company which, I’m sure, uses quite a lot of FOSS, I would expect a little more respect toward an open-source contributor.
Startup-timelines wrote about this in more detail.

Undoubtedly, the worst thing about this is NPM’s behaviour

I’m not talking about re-publishing (or un-unpublishing) the left-pad package; if the project’s license allowed for that (AND the new maintainer agreed), then it’s perfectly reasonable to do so.
I’m talking about the fact that NPM is trying to be a repository, a home, for open source projects.
It makes its livelihood providing access to open source libraries, and thus is dependent on people like Azer. As such, it should have been much more careful in handling this situation.

Even if they’re technically right, according to their policies, they really can’t afford to antagonize the community.
And nothing would antagonize open source folk faster than surrendering unconditionally to corporate lawyers.

So what could they have done better?

  • Follow their own goddamn guidelines, which state that they’ll intervene in cases such as this: ‘Alice works for Foo Inc, the makers of the critically acclaimed and widely-marketed foo JavaScript toolkit framework. They publish it to npm as foojs, but people are routinely confused when npm install foo is some different thing.’
    Well, kik messenger never even published their package, so how can you claim that people ‘are routinely confused’?
  • Give Azer some time to shut down his project gracefully instead of seizing it immediately
  • Offer to change ownership of the package under a condition that Azer be compensated for his troubles

…or any number of other courses of action that would have shown a bit more empathy to the package author, instead of treating him like a criminal.

Now, it’s conceivable that NPM employees didn’t deem these actions necessary, as they did not anticipate the shitstorm that ensued. That’s understandable.
However, refusing to acknowledge that they did wrong even after the fact, in light of the flak they received, is a big red flag, and an indication that they are disconnected from the community they depend on.

Like many have stated before, the whole model of micro-modules and open-source software is based on trust. But it’s not only package users who need to trust package authors; package authors need to trust NPM Inc.
They need to be able to trust this company to treat their code in a fair way, and they need to feel that NPM has their backs.
Handing over the rights to the package name without hesitation, and failing to address community concerns in the aftermath is not conducive to that.

Even if their actions are completely legal / in accordance with policies, trust doesn’t work that way.

Technical aspect

This whole story highlighted some glaring problems with the node / JS community:

Why on earth do we need a dependency in order to pad a string?

or to check if something is an array?
As Haney correctly points out in his blog post, a function is not a module, and should not be treated as such.
I’m all for using utility modules (such as lodash, jQuery and the like) where you need them, but a single function is not a module. Just copy it into your own damn code.
It seems that years of browser compatibility issues have made JavaScript programmers a little paranoid. Here’s a personal example:
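To the function-is-not-a-module point: here is everything a left-pad needs to be, sketched in Ruby for comparison (where String#rjust already does the work of padding on the left):

```ruby
# The entire "module": pad str on the left with ch until it is len wide.
def left_pad(str, len, ch = " ")
  str.rjust(len, ch)
end

puts left_pad("5", 3, "0")  # => "005"
puts left_pad("abc", 2)     # => "abc" (never truncates)
```

A handful of lines, no dependency required; the JavaScript version is barely longer.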

Recently, I did a little personal front-end project. Since I didn’t have a whole day to spend on just setting up a build chain for a ‘proper’ JS framework, I thought to try vanilla javascript instead.
Now, I’ve done around 2 years of development in AngularJS, and in jQuery before that.
I hadn’t interacted with the DOM using native JS in I don’t know how long.
I had a notion that the APIs were messy and not well supported, and that anything beyond the ‘$’ function was ‘here be dragons’ territory.
Well, it turned out that it was (almost) just as easy for me to do what I wanted to do in the DOM using plain ol’ js as it was with any of these frameworks.
And it worked on all browsers (* IE is not a browser. If you need to support it, it’s perfectly easy to find polyfills for it).
Previously, using “A Framework” whenever I needed something done in the browser was a no-brainer. Now, I’d actually need to justify doing that.

How the hell can you allow someone else to re-publish an abandoned package?

If a quick-to-react malware author had picked up one of these abandoned packages, they could have published a virus as version 1.0.0 of left-pad, and anyone who required it with a non-specific version would have been vulnerable.
The sensible thing to do would be to block anyone from picking up abandoned package names, until NPM can verify their proposed package.
I’m happy to see that they plan to do that now, but really, it should have been there from day one.

In any case, it made me even more aware of the importance of shrink-wrapping (pinning your entire dependency tree with npm shrinkwrap).
Actually, I don’t see why NPM doesn’t do that by default, like Ruby’s Bundler does with Gemfile.lock.

Our release process depends on some 3rd party service

Or, as a Reddit commenter put it: ‘if you have to go to the internet in order to build your application, I pity you’.
This was a big realization for me; I just always took it for granted that npm install just works.
But what if NPM is down? Or a package is missing? Or my corporate firewall has been updated to block npm.org?
I can’t release now?? That’s ridiculous.
You could use local registries, cache your dependencies, or even bundle your dependencies.
That’s an operational consideration that we need to be aware of. I wasn’t.


Better not ~~trust~~ rely on NPM Inc. and its packages

As a for-profit organization, by default their higher interests are not necessarily those of the community.
That doesn’t make them “evil”. That’s the definition of ‘for-profit’.
Should a not-for-profit community be heavily reliant on a for-profit product? Probably not.
I would love to see an open-source solution for this. In the meantime (or in addition)- I’ll make sure to set up solutions such as caching to avoid everything going to shit in the event of a breakdown, or someone taking down the useful GoldMansaChs package.
Even if all of the above is fixed, there are still a whole bunch of potential failure points.

Think about your dependencies

How often have you updated your project’s dependencies to make sure you have the latest security patches?
And when you have- have you read their release notes? looked at the diff? looked at their source code at all? at their dependencies?
How many times have you gone through your package.json file and removed unused packages?
If you’re anything like me, your answer would be somewhere in the vicinity of ‘never’.
We need to realize that dependencies need to be managed; it’s not a ‘fire and forget’ action. “A package is for life, not just for Christmas”!


* My girlfriend's comments to this post: 
"I thought a repository is something you put up your bum"

First blog post – and temp header image


So, I finally got ~~motivated~~ bored enough to start writing a blog. Sucks for you, I guess…
Staying true to lazy-developer traditions, I just picked one of the first images I could find under ‘useless machine’ to go with my useless blog. You can see it in action here:

If you can think of a better image for me to use, drop me a line!
Thanks, and hope to see you on my blog for future posts, hopefully with some actual content concerning software development and stuff, I guess.