JS developers who don’t know what closure is are fine.

Last month, JS Monthly London’s host Guy Nesher gave a talk titled “JS interviews”.
The talk contained good explanations and examples of hoisting, closures, variable scopes, and the other JavaScript gotchas that are so common in technical job interviews.

Guy stated that as “normal” developers, working in AngularJS / React / Backbone, we never really need to use things like closures.
If we use a linter / strict mode (or, honestly, just some common sense), hoisting is not something we’re going to encounter. And in general, all of that ‘advanced’ stuff – prototypes, apply, bind.. – is just stuff you need to know for your interview; then you can forget about it and go actually do your job.

Guy likened this to a carpenter being asked at a job interview whether he can change a lock (“no, I’m not a locksmith”), or for his opinion on some abstract wood-manufacturing technique (“I don’t care, I just cut the wood”).

The underlying assumption was that
“a deeper understanding of JavaScript is expected (but rarely used)”
(the complete slides from the talk can be found here, courtesy of Guy).

Now, Guy is not your average developer, having spent years in law before making the switch to software, so it’s understandable that he has some different views.

And when I say ‘different’, I mean ‘bloody infuriating’.

You can’t be a driver without being able to change tyres

If you’re anything like me, you probably have steam coming out of your ears by now.
How dare this guy claim that you can be a competent developer without a solid understanding of the technology you’re using, and of general software engineering and CS concepts such as SOLID, data structures, and performance?
What the hell does he take us for, some mindless code monkeys?

Of course you need a deep understanding of the technology and concepts behind the code you’re writing, otherwise you wouldn’t be able to understand why your code behaves or performs in a certain way, and you wouldn’t be able to debug it in certain situations!
You also wouldn’t be able to apply well-known solutions / patterns where appropriate, or understand the cost / benefit of using one technology over another.
That’s obvious!

So at the Q&A portion of the talk, I put the following question to Guy:
“Suppose you were interviewing me for a position, not as a carpenter, but as a driver.
You would ask me ‘do you know how to drive a car?’, and I would answer ‘Of course! Right pedal is to go, left pedal is to stop, and you control the direction with the wheel.’

‘Great’, you would say, ‘And suppose that, while you’re driving along, you get a flat tyre. Do you know how to handle that?’
‘I don’t really know car engineering in depth.. all I know is right pedal is to go, left pedal is to stop…’.
Would you hire me? I can still get your car from point A to point B.
However, if anything goes wrong, I’ll be stuck.”

“But software development isn’t something you do alone, like driving,” was Guy’s answer.
“In software development, you’ll have a senior developer on the team who’d know how to ‘change tyres’. A kind of a ‘pit crew mechanic’.
But the rest of the time, the ‘junior driver’ is going to be cruising along just fine.”

Are we all really the special snowflakes we’d like to think we are?

Guy’s answer led me to do some thinking.

Doesn’t a lot of what I do, day-to-day, consist of ‘more of the same’?
CRUD over a database, some validations, showing some aggregation to the user…
In front-end development it’s even easier to spot:
Some form, an AJAX call to a server API, displaying data.. there isn’t even any business logic involved (hopefully).
A lot of the routine is… well, routine.

I don’t use any ‘special’ or ‘deep’ knowledge when I do the above things.
Not all of my time is spent innovating or ‘Engineering’.
A lot of what I do actually isn’t that special.
Especially if I’m using very high-level languages (Ruby, JS), and even more so when using frameworks on top of those languages (Rails / AngularJS) to abstract away the ‘scary’ SQL or network operations.

So maybe Guy is right? For the 80% of routine work, you don’t need to hire a ninja rockstar hacker (or whatever the stupid buzzword du jour is);
just have one senior person on the team who can handle things like architecture, coding standards, tech evaluations, and helping the more junior members when they’re stuck, and let the others get on with the day-to-day.

Bootcamps and the rise of the junior developer

The notion presented by Guy ties in very nicely with recent trends in the professional software world.

As demand for software developers is expected to rise at a “much faster than average” rate, universities aren’t producing new graduates at anywhere near a matching pace.
This gives rise to “code bootcamps”: intensive, 6-24 week programs designed to take you from zero to web-developer hero (or, more precisely, to junior web developer).

The number of bootcamp graduates has been growing significantly in recent years.

This trend, if it continues, means that we’ll be seeing more and more developers who, like Guy, have never studied algorithms, data structures, or anything else that might be ‘under the hood’ of their day-to-day work.

It will be extremely interesting to follow these developers over the next few years, to see how well they make the transition from junior to mid-level to senior, and how their performance compares with that of the other two large groups of developers – the formally educated and the self-taught.

What do I do with all these juniors?

Whether we like it or not, it seems that Guy’s (and the bootcamps’) vision is here to stay – more and more developers with little or no ‘deep’ knowledge in programming will be joining the workforce in the coming years.

Numbers alone dictate that – there is, and will be, huge demand for developers, which inevitably lowers the entry barriers for newcomers and makes experienced developers that much more expensive.

That means that it’s very possible that you will end up on a team that is some sort of variation of what Guy has described – a few ‘drivers’, with one or two ‘pit stop mechanics’ to help them along.

It seems that the thing to do right now, instead of looking down our collective noses at these people, is to come up with an effective method for integrating and mentoring these newcomers.
Whether they’re interested in eventually becoming ‘mechanics’, or are content to just stay ‘drivers’, they’ll need our help.

I’ve personally been part of a couple of teams which included bootcamp graduates.
And, of course, I was once a junior myself.

In all these situations, I never felt the team was sufficiently aware of junior developers’ need for mentoring:
the assumption was that after a suitable period of training these developers were ‘ready’, so they were thrown into the deep end and treated like any other team member.

In my opinion, mentoring / training junior developers works better as a sustained, consistent process than as a ‘one and done’ job.

Here’s my $0.02:

  • Juniors need to receive feedback and advice on their work often, and over a long period of time.
    Instilling ideas and ways of thinking is a lengthy process.
  • Teams need to understand that having junior developers on the team doesn’t only mean that they (the juniors) will work more slowly, due to their inexperience.
    It also means that the more senior team members will work more slowly, because they also need to assist their teammates.
  • Different skill levels should be reflected in the work assigned to team members – some tasks are more complex, or require greater knowledge and experience, and shouldn’t be done by a junior.
  • It needs to be official: managers need to let team members know that guiding / being guided is part of their job descriptions.
    This will help avoid friction from juniors who are perhaps too ‘proud’ to accept guidance, and from seniors who can’t be bothered to guide.
  • Making it official will also guarantee that the subject of training new team members is not forgotten or abandoned as projects and deadlines get more hectic.

Conclusion

This post began with a question – do you need to be intimately and deeply familiar with the tools that you’re using in order to be an effective developer?
The answer, in my opinion, is “no – but only up to a point”.

You can be extremely productive in a lot of scenarios without a broad knowledge base.
If you want to progress beyond the ‘junior’ label, however, I think you need to expand your knowledge.

However, regardless of my, or anyone else’s, opinion of such developers, the reality is that we’re going to be seeing a lot more of them in the coming years;
rising demand for developers, coupled with the rise of the “bootcamp” concept, means that these junior developers are going to be coming into your team.

The question then becomes: how do we utilise these developers in order to produce the best quality (and quantity) of work?

People have been trying to answer this question for a while now.
However, it seems that the assumption is always that the individual is responsible for her own training, or, at most, that training is something internal to the team.

I don’t think this is enough; companies need to understand that the success of their projects and their organization is dependent on the success of the juniors.
Therefore, there has to be a management commitment to making these people successful.

Training and supervising these guys takes time and resources, from both the junior and senior members of the team.
It also requires a slightly different work process where junior members are involved.

Also, how will this play out with other factors like the high pace of the industry, and the relatively high turnover rate in our field?
Will a company that needs a project done today be willing to invest in training an employee who might not be there tomorrow?

I guess we’ll see that soon enough.

Why Rails sucks

Edit 2016-04-08: Lessons learned after 48 hours

So I got more feedback on this than I anticipated, mainly via Reddit.
Some of it came in the predictable “you’re ugly” form, but a lot of it was genuinely good suggestions on how to avoid / work around some of the pain points I describe in this post.

I think the main thing was that several people pointed out that it’s entirely possible to keep your models as POROs (plain old Ruby objects), and then use ActiveRecord (or, alternatively, something like Sequel) for data access only.

To be perfectly honest, I’m not 100% sure what that would look like, and whether it would be as convenient as I’d like, but it would definitely be an improvement over sticking everything in an ActiveRecord class.

This solution would also greatly help with the issue I have with unit testing, enabling me to decouple tests from the database and definitely cut down on their runtime.
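For the curious, here’s roughly what I imagine that separation looking like. This is a minimal sketch, not code from the project: all the names (Order, OrderRepository, the orders table) are made up, and I’m assuming Sequel over an in-memory SQLite database.

```ruby
require "sequel"

# Throwaway in-memory database, just for the sketch.
DB = Sequel.connect("sqlite::memory:")
DB.create_table(:orders) do
  primary_key :id
  Integer :total_cents
end

# The domain model is a plain old Ruby object: no persistence concerns.
class Order
  attr_reader :id, :total_cents

  def initialize(total_cents:, id: nil)
    @id = id
    @total_cents = total_cents
  end

  # Pure business logic, trivially testable without a database.
  def eligible_for_free_shipping?
    total_cents >= 50_00
  end
end

# Data access lives in its own class, built on Sequel datasets.
class OrderRepository
  def initialize(dataset = DB[:orders])
    @dataset = dataset
  end

  def find(id)
    row = @dataset.where(id: id).first
    row && Order.new(id: row[:id], total_cents: row[:total_cents])
  end

  def save(order)
    @dataset.insert(total_cents: order.total_cents)
  end
end
```

The point being: Order knows nothing about the database, so its tests need no database, and OrderRepository is the only thing you ever have to swap out or fake.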

In conclusion – I wouldn’t say that Rails is terrible / should not be used.
I would definitely say that it has some big traps which are harder to avoid than I’d like.

If I had to re-write this post today, I’d definitely change its title to “Some of Rails’s biggest gotchas”, and I wouldn’t say that you should categorically not use it.
You just need to use it carefully.

Thanks for reading, and for teaching me quite a few new things.

Here’s the original post:

Lately I’ve had the chance to work on a large-ish server application written in RoR. I went from “Wow look at all these cool conventions, I know exactly where everything needs to go!” to “err.. how the fuck do I scale this?” and “This is not where this should go!” in 8 weeks. And this is (partly) why:

(* Yes, this post’s title is total clickbait. I don’t actually hate Rails; I just think it promotes a lot of bad practices.)

ActiveRecord sucks

I had never personally used the ActiveRecord pattern in previous projects; I always felt it would mix up domain concerns with persistence concerns, and create a bit of a mess.
Well, guess what – I was right.

In the specific project I was working on, the code in domain classes would typically consist of 70% business logic and 30% DB-access concerns (scopes, usually, as well as querying / fetching strategies).
That in itself is a pretty big warning sign that one class is doing too much.
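To make that split concrete, here’s a condensed, hypothetical example of the shape I’m describing (none of this is actual project code; the model, columns, and association are invented):

```ruby
class Invoice < ActiveRecord::Base
  # The ~30%: persistence concerns living inside the domain class,
  # i.e. scopes and fetching strategies.
  scope :unpaid,     -> { where(paid: false) }
  scope :overdue,    -> { unpaid.where("due_date < ?", Date.current) }
  scope :with_items, -> { includes(:line_items) }  # eager-fetching strategy

  # The ~70%: actual business rules, stuck in the same class.
  def overdue?
    !paid && due_date < Date.current
  end

  def late_fee_cents
    overdue? ? (amount_cents * 0.05).round : 0
  end
end
```

Two very different reasons to change, one class.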

The arguments as to why ActiveRecord in general is a bad idea are well documented; I’ll briefly recap here:

  1. It breaks the Single Responsibility Principle

    A model class’s responsibility is to encapsulate business rules and logic.
    It’s not responsible for communicating with data storage.

    As I mentioned before, a considerable amount of the code in our project’s domain classes was dedicated to things like querying, which is not business logic.
    This causes domain classes to become bloated and hard to decouple.

  2. It breaks the Interface Segregation Principle

    Have you ever, while debugging, listed the methods of one of your domain objects? Were you able to find the ones that you defined yourself? I wasn’t.
    Because they’re buried somewhere underneath endless ActiveRecord::Base methods such as find_by, save, save!, save?, save!!!, and save?!.

    Well, I made up a few there, but ActiveRecord::Base instances have over 100 methods, most of them public.

    ISP tells us that we should aspire to have small, easy-to-understand interfaces for our classes. Dozens of public methods on my classes are another indication that they’re doing waaaay too much (a quick console experiment follows this list).

  3. Its database abstraction is leaky

    Abstracting away the database is notoriously hard, and I believe that ActiveRecord doesn’t do a particularly good job of it.

    As noted before, ActiveRecord::Base pollutes your public interface with a plethora of storage-related methods. This makes it very easy for a developer to mistakenly use one of these storage-specific methods (e.g. column_for_attribute) inside a controller action.

    Even using reload or update_attribute indicates that the calling code knows a little too much about the underlying persistence layer.
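Item 2 above is easy to verify for yourself. A quick console experiment, using the hypothetical Invoice model from earlier and the Order PORO from the edit at the top of this post (exact numbers will vary by Rails and Ruby version):

```ruby
# How many methods did inheritance buy us?
Invoice.new.methods.size - Object.new.methods.size
# => several hundred, almost all inherited from ActiveRecord::Base
#    and its mixins rather than defined by you

# The same check on a PORO counts only what you actually wrote:
Order.new(total_cents: 0).methods.size - Object.new.methods.size
# => 3 (id, total_cents, eligible_for_free_shipping?)
```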

“Unit” testing

If there was one thing I knew about Rails before having written a single line of ruby code, it was that everything in Rails is unit-tested. Rails promotes good testing practices. Hooray!

So, obviously, one of the first things I read about concerning RoR development was how to test:

Testing support was woven into the Rails fabric from the beginning.

Right on! Finally, somebody gets it right!

Just about every Rails application interacts heavily with a database and, as a result, your tests will need a database to interact with as well. To write efficient tests, you’ll need to understand how to set up this database and populate it with sample data.

Say what?
I must have misread this.. let me check again..

your tests will need a database

Yup. I need a bloody database to test my business logic.

So why is this so bad?

The meaning and intent behind unit tests is to test single units of code.
That means that if the test fails, there can be only one reason for it: the unit under test is broken.
This is why you fake everything external that the unit under test interacts with; you don’t want a bug in a dependency to cause your current unit test to fail.

For example, when testing the Payroll class’s total_salaries method, you use fake Employee objects, with a fake salary property that returns a predefined value.
That way, if you get a wrong total_salaries value, you know for sure that the problem lies within the Payroll class, and nowhere else.
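Here’s what that looks like as an actual test. A sketch in Minitest; Payroll and Employee aren’t real project classes, just the example from above:

```ruby
require "minitest/autorun"

# The unit under test: pure business logic, no persistence anywhere.
class Payroll
  def initialize(employees)
    @employees = employees
  end

  def total_salaries
    @employees.map(&:salary).sum
  end
end

class PayrollTest < Minitest::Test
  # A hand-rolled fake: all it can do is answer `salary` with a canned value.
  FakeEmployee = Struct.new(:salary)

  def test_total_salaries_sums_all_employee_salaries
    payroll = Payroll.new([FakeEmployee.new(1_000), FakeEmployee.new(2_500)])

    # If this fails, the bug is in Payroll. Not in the schema, not in
    # the database server, not in some child object's validations.
    assert_equal 3_500, payroll.total_salaries
  end
end
```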

But with Rails testing, you’re not encouraged to fake anything.
So if the total_salaries test fails, it can be because Employee is broken, or the database schema is wrong, or the database server is down, or even something as obscure as a child object of Employee missing a required attribute, so that it can’t be persisted and an error is thrown.

This is not how a unit test is supposed to work.

Not only does Rails encourage you to write non-unit unit tests, it also makes it nearly impossible, and very dangerous, to work around this and write proper unit tests.
(* Note that if you use hooks, such as before_update etc., it becomes even more horrible)

Apart from the horribleness of making it harder for me to determine what went wrong when a test fails, this complete abomination of a testing strategy caused me some more hair-tearing moments:

  1. Our unit test suite took 18 minutes to run. Even when using a local, in-memory database.
  2. My tests failed because the database wasn’t initialized properly.
  3. My tests failed because the database wasn’t cleaned properly by previous tests (WTF?).
  4. My tests failed because a mail template was defined incorrectly.
    Since sending an email was invoked in a before_create callback, failing to send the email caused the callback to fail, which caused create to fail, which meant that the record was not persisted, which meant that my test was fucked (a reconstruction of this one follows the list).
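For the record, here’s a stripped-down reconstruction of that last failure mode. The names (User, UserMailer) are invented, but the mechanics are standard Rails:

```ruby
class User < ActiveRecord::Base
  # Persistence is now coupled to a completely unrelated concern.
  before_create :send_welcome_email

  private

  def send_welcome_email
    # A broken mail template makes this raise...
    UserMailer.welcome(self).deliver_now
  end
end

# ...so any test that merely needs a persisted User blows up for a
# reason that has nothing to do with whatever it was actually testing:
User.create!(name: "Ada")  # raises inside the mailer; nothing is saved
```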

Too much magic

Magic is something that is inherent to any framework; by definition, if it does some sort of heavy lifting for you, some of it is going to be hidden from your view.

That’s doubly true for a framework which prefers “convention over configuration”,
meaning: if you name your class / method the correct way, it’s going to be “magically” wired up for you.
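The canonical example (this is standard Rails convention, nothing project-specific):

```ruby
# config/routes.rb
Rails.application.routes.draw do
  resources :articles
end

# app/controllers/articles_controller.rb
# Naming alone wires everything together:
#   GET /articles/1 is routed to ArticlesController#show;
#   with no explicit render, Rails looks for app/views/articles/show.html.erb;
#   and the Article class maps, by convention, to the `articles` table.
class ArticlesController < ApplicationController
  def show
    @article = Article.find(params[:id])
  end
end
```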

This kind of magic is fine and acceptable. The magic I have a problem with is Rails’ extensive use of hooks (aka callbacks), be it in the controller (before / after action) or in the model (before / after create / update / delete…).

Using callbacks immediately makes your code harder to read:
With regular methods, it’s easy to determine when your code is being executed – you can see the method being called right there in your code.
With callbacks, it’s not obvious when your code is being invoked, and by whom.

I’ve had several instances of scratching my head, trying to figure out why a certain instance variable was initialized for one controller action and not for another, only to track it down to a before_action callback in a parent class.
The fact that ActiveRecord callbacks can’t easily be turned off is also a pain in the ass when testing, as I described previously.
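A stripped-down reconstruction of that head-scratcher (the controllers here are hypothetical, but the before_action mechanics are real):

```ruby
class AdminController < ApplicationController
  # Runs before every action of every subclass, but nothing at the
  # subclass call sites tells you that.
  before_action :load_account, except: [:health]

  private

  def load_account
    @account = Account.find(params[:account_id])
  end
end

class ReportsController < AdminController
  def index
    @reports = @account.reports  # fine: @account was set by the parent's callback
  end

  def health
    @reports = @account.reports  # NoMethodError on nil: this action is excluded
  end
end
```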

Additionally, callbacks are, of course, very hard to test, since their logic is so tightly coupled to other things in the model, and you can’t trigger them in isolation, but only as a side effect of another action.

This is the reason why some Rubyists recognize that callbacks are at the very least problematic, if not to be avoided altogether, while others prefer to reimplement Node’s Express in Ruby rather than use Rails controllers.

Conclusion

I like the idea behind Rails. Convention over configuration is great, and I also totally subscribe to the notion that application developers should write more business-specific code, and less infrastructure code without any business value.

The problem with Rails isn’t that it’s an opinionated framework.
The problem is that its opinions are wrong:

  • Tying you down to a single, err, “controversial” persistence mechanism is wrong.
  • Making it impossible to do proper unit testing is wrong.
  • Encouraging you to do things as side-effects rather than explicitly is wrong.

When it first launched, Rails was revolutionary: it was the first framework to offer such comprehensive guidelines and support for creating your application in a standard way.
However, it seems that today our standards of what is ‘correct’ or ‘recommended’ have changed, while Rails has stubbornly remained where it was 10 years ago.

You can still create a good application using Rails.
It’s just that it doesn’t allow you to create a great application.