This blog post details the ideal process I would like to follow when working as a software developer. It lists the activities I find most beneficial at an hourly, daily, and weekly basis.
Many of the systems and processes below I’ve followed myself and found useful. Others I’ve only read or heard about, but have not yet tried.
Like any ideal, it’s not always fully achievable, or even realistic at points. But it’s important for me to have a “north star” to aim towards, so I know which direction I’d like to move in.
The audience for this post is:
- Me: To clarify my own thoughts, and to refer back to when thinking about making changes in how I work.
- Colleagues, team-mates, managers: To articulate what my agenda is, what kind of changes I may propose to our working arrangements, and why.
- Anyone else: To gather feedback, suggestions, or hear about their own experiences with these patterns and practices.
I’ll organize the processes I like to follow into different timeframes, or “feedback loops”.
Knowing whether we’re on the right track or not as soon as possible is one of the most important things in our work. Therefore, quick, tight feedback loops are paramount.
You’ll notice a high degree of commonality between the different loops. Essentially, it’s the same process at different scales: make a small step, verify it, put it out of your head, move to the next step. Frequently stop and evaluate whether we’re on the right track. Repeat.
Inner loop: Implementation. Time frame: minutes
This is the core software development loop – Make a small change, verify that it works. And another one, and another. Then commit to source control, repeat.
I start this loop by writing an automated test that describes and verifies the behaviour I’m implementing*. Then I write the minimal amount of unstructured, “hacky” code to make that test pass.
And then another one, and another one. Over and over.
This is a long series of very small steps. (For example: running the tests after every 1-2 lines of code changed, and committing every 1-10 lines.)
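The red-green rhythm described above can be sketched with a tiny, hypothetical example (the function and test names are illustrative, not from the post): the failing test is written first, then just enough code to make it pass.

```python
# A hypothetical TDD micro-step. In a real project this would live in a
# test file and be run by a test runner on every 1-2 lines changed.

def apply_discount(total: float, percent: float) -> float:
    # Minimal, deliberately "hacky" implementation -- refactoring is
    # deferred until the surrounding functionality is complete.
    return total - total * percent / 100

def test_discount_is_applied_to_order_total():
    # This test existed (and failed) before apply_discount was written.
    assert apply_discount(total=100.0, percent=10) == 90.0

test_discount_is_applied_to_order_total()
```

Each pass through the loop adds one more test like this, and one more minimal change to satisfy it.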
I would defer any “refactoring” or “tidying up” until the last possible moment. Usually after I’ve implemented all the functionality in this particular area.
That may even take a few days (and a few PRs).
That’s because I’m always learning, as I implement more functionality. I’m learning about the business problem I’m solving. About the domain. About the details of the implementation.
I’d like to refactor once I have the maximum level of knowledge, and not before.
Personal experience: I found that the only way to verify every single small change, dozens of times an hour, is with automated tests. The alternatives (e.g. going to the browser and clicking around) are too cumbersome.
I love working in this way. I can make progress very quickly without context-switching between code and browser.
I can commit a small chunk of functionality, forget about it, and move on. Thus decreasing my cognitive load.
Additionally, automated tests give me a very high degree of confidence.
Ideally, I’d push code to production without ever opening a browser (well, maybe just once or twice..)
*A short appendix on testing:
I mentioned that I’d like to test the behaviour I’m trying to implement.
I don’t want to test (as is often the case with “isolated unit tests”) the implementation details (e.g. “class A calls class B’s method X with arguments 1, 2, 3”).
Testing the implementation doesn’t provide a high degree of confidence that the software behaves as intended.
It also hinders further changes to the software (I wrote a whole blog post about this).
My ideal tests would test the user-facing output of the service I’m working on (e.g. a JSON API response, or rendered HTML).
I would only fake modules that are outside of the system (e.g. database, 3rd party APIs, external systems).
But everything within the scope of the system behaves like it would in production. Thus, providing a high degree of confidence.
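A minimal sketch of this style of test, with hypothetical names (a real system would exercise its actual HTTP layer): the assertion is on the user-facing response, and only the module outside the system boundary is faked.

```python
# Behaviour-level test sketch: assert on the user-facing output,
# fake only what sits outside the system (here, a 3rd-party payments API).
# All names are illustrative assumptions, not from the post.

class FakePaymentsApi:
    """Stands in for an external 3rd-party API at the system boundary."""
    def charge(self, amount_cents):
        return {"status": "succeeded", "charged": amount_cents}

def checkout(payments_api, amount_cents):
    # Everything inside the system runs as it would in production;
    # only the injected boundary module is a fake.
    result = payments_api.charge(amount_cents)
    return {"ok": result["status"] == "succeeded", "total": amount_cents}

def test_checkout_returns_successful_response():
    response = checkout(FakePaymentsApi(), amount_cents=1250)
    # The assertion describes behaviour (the response a user would see),
    # not which internal class called which method.
    assert response == {"ok": True, "total": 1250}

test_checkout_returns_successful_response()
```

Because nothing internal is stubbed, the test keeps passing through refactors, which is what makes deferring the tidy-up safe.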
You can find much more detail in this life-changing (really!) conference talk that forever changed the way I practice TDD.
Second loop: Deployment. Time frame: hours / one day
I’ve now spent a few hours repeating the implementation loop. I should have some functionality that is somewhat useful to a user of the software.
At this point I’d like to put it in front of a customer, and verify that it actually achieves something useful.
If the change is not yet useful on its own (for example, it implements one step of a multi-step process), I’d still like to test and deploy the code, behind a feature gate.
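A feature gate can be as simple as the sketch below (illustrative only; real teams would typically use a flags service or configuration store). The deployed code stays dark until the gate is opened for a chosen group.

```python
# Minimal feature-gate sketch. Flag names and groups are hypothetical;
# in production this mapping would come from a flags service or config.

ENABLED_FLAGS = {"multi_step_checkout": {"beta_testers"}}

def is_enabled(flag, user_group):
    return user_group in ENABLED_FLAGS.get(flag, set())

def render_checkout(user_group):
    if is_enabled("multi_step_checkout", user_group):
        return "new multi-step flow"    # deployed, but gated
    return "existing single-page flow"  # everyone else is unaffected

assert render_checkout("beta_testers") == "new multi-step flow"
assert render_checkout("everyone_else") == "existing single-page flow"
```

This is what lets partially finished features ship continuously: deployment and release become separate decisions.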
Before deploying, I’d get feedback on the quality of my work.
I’d ask any interested colleagues to review the code I wrote (in case I wasn’t pairing with them this whole time).
Pull / merge requests are standard in the industry, and are a convenient way to showcase changes. But an asynchronous review process is too slow – I’d like to get my changes reviewed and merged faster.
I’d want my teammates to provide feedback in a matter of minutes, rather than hours. And I’ll follow up with a synchronous, face-to-face conversation if there’s any discussion to be had.
(In return, I will review my colleagues’ work as quickly as possible as well :))
If the changes are significant, sensitive, or touch code that is used in many places, I may ask a teammate to manually verify them as well, or to double-check for regressions in other areas.
I may ask a customer-minded colleague, such as a product person, or a designer, to have a look as well.
Once I’ve got my thumbs-up (hopefully in no more than an hour or two) I’ll merge my changes to the mainline branch.
The continuous delivery pipeline will pick that up automatically, package up the code, and run acceptance / smoke tests. After 30-60 minutes, this new version of the software will be in front of customers.
Personal experience: Working in this way meant that I could finish a small piece of work, put it out of my mind, and concentrate on the next one. That’s been immensely helpful in keeping me focussed, and reducing my cognitive load.
Additionally, it’s very helpful in case anything does go wrong in production. I know that the bug is likely related to the very small change I made recently.
Once I’ve finished a discrete piece of work, I need to figure out what to do next.
Getting feedback on our team’s work is the most important thing, so I’ll prioritise the tasks that are closest to achieving that.
Meaning: whichever task on the team is closest to being shipped (and so, to getting feedback) is the most important task right now.
So I’ll focus on getting the most “advanced” task over the line. It may be by reviewing a colleague’s work, by helping them get unblocked, or simply by collaborating with them to make their development process faster.
Only if there isn’t a task in progress that I can move forward will I pick up the next most important task for the team, from our prioritised backlog.
Personal experience: The experience of a team working in this way was the same as the individual experience I described above.
As a team, we were able to finish a small piece of work, put it out of our minds, and concentrate on the next one.
We avoided ineffective ways of working, such as starting multiple things at once while waiting for reviews, or long-running development efforts that are harder to test and to review. We always had something working to show for our work, rather than multiple half-finished things.
Working in this way also helped the team collaborate more closely, focussing on the team’s goals.
Third loop: Development Iteration. Time frame: 1-2 weeks
We’ve now spent a few days repeating the deployment loop. We should have a feature or improvement that is rather useful to a user of the software.
The team would speak to users of the software, and hear their feedback on it. Preferably in person.
Even if the feature is not “generally available” yet, “demo-ing” the changes to customers is still valuable.
The feedback from customers, as well as our team’s plans, company goals, industry trends etc. will inform our plans and priorities for the next iteration. The team (collaboratively, not just “managers” or “product owners”) will create its prioritised backlog based on those.
This point in time is also a good opportunity for the team to reflect and improve.
Are we happy with the value we delivered during this iteration? Was it the right thing for the customer? Are we satisfied with its quality? With the speed at which we delivered? It’s a good point to discuss how we can deliver more value, at higher quality, faster, in the future.
What’s stopping us from improving, and how can we remove those impediments?
We can use metrics, such as the DORA “4 key metrics” to inform that conversation.
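Two of those four metrics can be derived from deploy records alone, as in this rough sketch (the data shape and time window are hypothetical):

```python
# Deriving two DORA metrics -- deployment frequency and lead time for
# changes -- from a list of deploy records. Data shape is illustrative.
from datetime import datetime, timedelta

deploys = [
    {"committed": datetime(2024, 5, 1, 9), "deployed": datetime(2024, 5, 1, 11)},
    {"committed": datetime(2024, 5, 2, 10), "deployed": datetime(2024, 5, 2, 16)},
    {"committed": datetime(2024, 5, 3, 9), "deployed": datetime(2024, 5, 3, 10)},
]

days_observed = 7  # length of the observation window, in days

# Deployment frequency: deploys per day over the window.
deploy_frequency = len(deploys) / days_observed

# Lead time for changes: commit-to-production duration, averaged.
lead_times = [d["deployed"] - d["committed"] for d in deploys]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)
```

Change failure rate and time to restore would need incident data as well; the point is only that the retrospective conversation can start from numbers rather than impressions.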
We plan and prioritise actions to realise those improvements.
(Some examples of such actions: improvements to the speed and reliability of the CI / CD pipeline; improvements to the time it takes to execute tests locally; simplifying code that we found hard to work with; exploring different ways to engage with customers and get their input; improvement to our monitoring tools to enable speedier detection and mitigation of production errors.)
We can also create, and reflect on, time-bound “experiments” to the way we work, and see if they move the needle on the speed / quality of our delivery (examples of such experiments: pair on all development tasks; institute a weekly “refinement” meeting with the whole team; have a daily call with customers…).
Personal experience: I only have “anti-experiences” here, I’m afraid. I’ve worked in many “agile” flavours, including many forms of scrum and kanban. I haven’t found any one system to be inherently “better” than the others.
I did find common problems with all of them.
The issue with agile that I observed in roughly 100% of the teams I’ve been on is this: we follow some process blindly, without understanding why, or what its value or intended outcome is. We’re not being agile; we’re just following some arbitrary process that doesn’t help us improve.
My ideal process would involve a team that understands what it is we’re trying to improve (e.g. speed / quality / product-market fit).
We understand how our current process is meant to serve that. We make changes that are designed to improve our outcomes.
In that case, it doesn’t matter if we meet every day to talk about our tasks, or play planning poker every two weeks, or whatever.
So, what do you think?
This list is incomplete; I can go on forever about larger feedback loops (e.g. a quarterly feedback loop), or go into more details on the specifics of the principles and processes. It’ll never end. I hope I’ve been able to capture the essence of what’s important (to me) in a software development process.
What’s your opinion? Are these the right things to be aspiring to? Are these achievable? What have I missed?
Let me know in the comments.