Best Practices for Large Features

July 31, 2017 – Howard Wilson – 8-minute read

This article was written before Drivy was acquired by Getaround and became Getaround EU. Some references to Drivy may therefore remain in the post.

As developers, we sometimes find ourselves faced with feature requests that will take weeks of work and touch many areas of the codebase. This comes with increased risk in terms of how we spend our time and whether things break when we come to release.

Examples might be moving from a single-tenant to a multi-tenant application (scoping everything by accounts), or supporting multiple currencies or time zones. This post brings together some tips that we find useful at Drivy for approaching these types of problems.

In general, our goals are to build the right thing, take the right implementation approach, and not to break anything. We’d like to try to do those things pretty quickly, too!

Building the Right Thing

When working on large features, the cost of building the wrong thing is higher than usual, since there is more time between receiving a feature request and presenting the completed feature back to the product team for validation. This makes up-front communication a very important part of the process, especially since more agile startups often don’t use formal specification documents.

Assumptions

One way to avoid misunderstandings is to make a list of assumptions. Check these with the product team and include them in pull requests so any reviewers are aware of them (and can challenge them). Assumptions might take the following form:

  • It’s not possible for a user to be in state X and Y at the same time
  • This feature will only be implemented for countries X and Y
  • There’s no need to update old data to be consistent with the new rules

Smaller Features

It’s always worth questioning whether a large feature really does need to be released all in one go. Are there smaller pieces which all add value incrementally? For example, Drivy now supports precise times for rental start and end, instead of just AM/PM time slots. But we didn’t need to make this change all in one go. We started with bookings, then moved on to booking changes, and eventually the state changes of the rentals themselves.

Taking the Right Implementation Approach

There are normally several ways to solve a problem. Taking the right implementation approach is the “tech side” of building the right thing. In other words, “will we solve this problem in a way that the tech team generally agrees is appropriate?”

Naming

Often, naming is a useful place to start. Getting a few developers together to talk about what any new entities or concepts will be called can help to identify the right abstractions in our code. Even if they sometimes feel like isolated implementation details, the abstractions developers select can strongly influence terminology and understanding across other teams in the company. For example, modeling an Order rather than a Request can have a profound impact on the perceived urgency of that user action.

Are data flows or processes changing? Even if you’re not explicitly designing a state machine, it’s exactly these kinds of problems that typically take far longer to discuss and get right than to implement. There’s nothing wrong with taking time up front to properly explore the different possibilities. Draw on the whiteboard and get the opinions of the rest of your team!
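
Even a rough table of states and allowed transitions makes that whiteboard discussion concrete. Here’s a minimal Ruby sketch for a hypothetical rental flow (the state names are illustrative, not our actual model):

# Enumerate the states and the transitions allowed out of each one
RENTAL_TRANSITIONS = {
  pending:     [:confirmed, :cancelled],
  confirmed:   [:in_progress, :cancelled],
  in_progress: [:completed],
  completed:   [],
  cancelled:   []
}.freeze

def can_transition?(from, to)
  RENTAL_TRANSITIONS.fetch(from, []).include?(to)
end

can_transition?(:pending, :completed) # => false

Writing the table down is often enough to surface the edge cases (can a confirmed rental still be cancelled? by whom?) before any real implementation starts.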

Spike / Prototype

The purpose of a spike is to gain the knowledge necessary to reduce the risk of a technical approach. It also often gives developers enough confidence to more accurately estimate how long the feature will take to develop.

Some general guidelines:

  • Aim to cover enough of the feature to model the majority use case, but also explore any worrying edge cases
  • Take plenty of notes when questions/concerns arise, possibly split into product/tech
  • Tests are not mandatory, but simple acceptance tests may be useful (see the sketch after this list)
  • Keep a release plan in mind and, if possible, split the spike into deployable commits (more on this later)
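
On the testing point, an acceptance test for a spike can stay deliberately small: just enough to pin down the happy path. A sketch, assuming RSpec, Capybara, and factories are set up (the record names and paths are illustrative):

require "rails_helper"

# Throwaway feature spec covering the majority use case of the spike
RSpec.feature "Precise rental times" do
  scenario "a booking shows its exact pickup time" do
    booking = create(:booking, starts_at: Time.zone.parse("2017-07-31 09:30"))

    visit booking_path(booking)

    expect(page).to have_content("09:30")
  end
end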

First Code Review

Reviewing code is hard. A reviewer is expected to make sure the code is correct and of a high quality before it gets merged into the release branch. In order to do this effectively, it’s usually best to keep the size of the pull request to a minimum. But how then will a reviewer be able to get an end-to-end perspective of your implementation?

One answer is to split code reviews which validate a general approach from code reviews which accept code into production. Here, we’re doing the first type of review. It’s a review best suited to a senior member of the team, ideally someone with broad knowledge of the codebase.

Here’s what we like to do:

  • Split the spike into commits which roughly represent separately releasable areas of work. This helps us stay focused on the goal of minimizing risk when releasing.
  • Describe each one of these commits separately. A reviewer is then able to focus on smaller chunks of code, even if the overall pull request is quite large.
  • Create a simple release plan, paying special attention to any data migrations: these can be more time-consuming and error-prone than anticipated (see the sketch after this list)
  • Include all the technical questions which arose while building the spike
  • Include screenshots
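
On the migrations point, here’s a hedged sketch of the kind of data migration worth flagging in a release plan: a batched backfill that must run before new code can assume the column is populated (the model and values are illustrative):

class BackfillCarCountries < ActiveRecord::Migration[5.1]
  def up
    # Batching keeps a long-running backfill from locking the whole table
    Car.where(country: nil).in_batches(of: 1_000) do |batch|
      batch.update_all(country: "FR") # illustrative default only
    end
  end

  def down
    # Backfilled rows can't be told apart from pre-existing ones afterwards
    raise ActiveRecord::IrreversibleMigration
  end
end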

Again, the goal at this point is to validate the approach, but without losing sight of how we’ll structure the feature for release. This code isn’t going to be merged into the release branch in its current form.

Iterate on Feedback

This takes a little more time if the pull request has already been split into separate commits, but git helps us rewrite our branch. There’s plenty of information on how to go about this in the related post: “Editing your git history with rebase for cleaner pull requests”.

Of course, there are lots of visual tools too (such as GitUp), which get the same job done without using the command line.

Once the branch is updated, we force push back to the same remote branch (e.g. with git push --force-with-lease) to preserve a clean commit history.

Minimizing Risk When Releasing

Test Preconditions

Before starting to release any code, it can be worth verifying that things expected to be true in production are actually true. Let’s say our new feature is going to introduce behavior which depends on the country of active cars. We can check in the database to ensure that the expectation “an active car always has a country” is true, but that doesn’t give 100% confidence. It may be true a few seconds after activation, but not immediately.

What we can do in cases like this is introduce some logging where our feature will go:

if car.active? && car.country.blank?
  Rails.logger.warn "Active car #{car.id} doesn't have a country, which is required by feature #123"
end

Once we have more confidence in this precondition, the logging can be replaced with a guard clause which raises an exception. Not only does this mean that we can be confident in our assumptions in production, but other developers will also understand immediately which preconditions are satisfied, and benefit from the same confidence when coding.
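
For example, the logging above might graduate into something like this (MissingCountryError is a hypothetical application-defined error class):

# The precondition has held in production, so now we enforce it
if car.active? && car.country.blank?
  raise MissingCountryError, "Active car #{car.id} has no country (required by feature #123)"
end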

Final Code Reviews

Now it’s time to get the feature into production. Typically, having split the original pull request, each commit can be cherry-picked (git cherry-pick) onto a new branch in turn and then improved to a production-ready state:

  • Link to the original PR/issue for context
  • Add complete tests if they’re not already present
  • Ask the question: can this be rolled back if something goes wrong?
  • Split out migrations if they need to be run before the code is deployed
  • Soft release the feature behind a feature flipper if it shouldn’t actually be visible until later (see the sketch after this list)
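
On that last point, a soft release can be as simple as branching on a flag. This is a sketch assuming a hypothetical FeatureFlag helper, standing in for whichever flipper mechanism the application uses:

# View or controller code: only flagged users see the new behavior
if FeatureFlag.enabled?(:precise_rental_times, user: current_user)
  render "bookings/precise_times_picker"
else
  render "bookings/am_pm_picker"
end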

This time, GitHub’s suggested reviewers feature is a good way to find someone on your team to review the code. The suggestions are based on git blame data, so they’ll be people who are familiar with the code being changed. Their goal is to confirm that the changes are safe to release. This should be a much quicker process, since the quantity of code is smaller and the overall approach has already been agreed.

Checker Jobs / Monitoring

We often write small jobs to test the long-term “postconditions” of a feature; in other words, to ensure ongoing data consistency. Usually this concerns aggregate data that is difficult to verify synchronously. For example, checker jobs might verify that there are no overlapping rentals for the same car, or that there are no gaps in the numbering of our tax documents.

These jobs usually send an email to the appropriate team if inconsistencies are detected. They’re cheap to create, and can help to catch unexpected outcomes before they become too problematic.
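
As a sketch, a checker job for the “no overlapping rentals” invariant might look like this (the job base class and mailer names are illustrative):

class OverlappingRentalsCheckerJob < ApplicationJob
  def perform
    # Self-join rentals on the same car whose time ranges intersect;
    # other.id > rentals.id avoids matching a rental against itself
    overlaps = Rental
      .joins("INNER JOIN rentals other
                ON other.car_id = rentals.car_id
               AND other.id > rentals.id
               AND other.starts_at < rentals.ends_at
               AND other.ends_at > rentals.starts_at")
      .pluck("rentals.id", "other.id")

    TechMailer.data_inconsistency("Overlapping rentals: #{overlaps.inspect}").deliver_later if overlaps.any?
  end
end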

Conclusion

Remember, this is just a selection of ideas that we find work well for us. Your mileage may vary!
