Getaround is the startup building a better alternative to car ownership. We’re driven by our vision of the economic, social and ecological impact that we can have in cities, and we’re breaking down traditional ideas about mobility. Since April 2019, Drivy has been part of Getaround. Together, we’re the world’s leading carsharing platform with a community of more than 5 million users sharing over 11,000 connected cars across 8 countries.

It’s an exciting time to be building the website, APIs and native mobile apps that facilitate this revolution. On this blog, members of the Getaround engineering team share tips, insights and lessons learned along the way.

The Team

Every quarter, team members from our offices all around Europe gather in Paris for a big all-hands presentation and a party.

Our Tech Stack

Getaround (called “Drivy” back then) was historically written in PHP. Then in 2013, we migrated to Ruby on Rails and we’ve never looked back. We’re currently running Rails 5.2, alongside our native iOS and Android apps. Overall, we value solid, well-tested code, backed by a lot of specs, that can be shipped to production multiple times a day.

We use RSpec, Capybara, PhantomJS and Jest, and run our continuous integration on CircleCI. We also monitor the production environment closely with dashboards built using Telegraf, InfluxDB and Grafana, just to make sure everything is ticking over nicely. We use Datadog (an ELK alternative) to parse and analyze our logs, and we check Bugsnag & Sentry to see if any 500 errors are causing issues for our users.

For our frontend, we use ES6 with Babel. We work according to a solid internal style guide, so that we don’t have to redesign forms and buttons every week. We have a growing set of Preact components for JS-heavy parts of the application. We also use Webpack instead of the classic Rails asset pipeline, and Yarn to manage dependencies.

As far as data is concerned, we use MySQL on RDS with multiple read replicas, plus a dozen Redis instances for storage and caching. We also pull data from multiple sources, then clean it and normalize its schemas before it ends up in Snowflake, our data warehouse. We create, schedule and run data pipelines with Apache Airflow, where tasks are written in Python, Ruby or Bash. We make heavy use of Embulk to move data between various data sources — mainly CSV, MySQL, PostgreSQL, Amazon S3 and Redshift. And then we make sense of all this data using Redash for our dashboards and Superset as our drill-down tool.
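To give a feel for the kind of cleaning step a pipeline task might perform, here is a minimal Ruby sketch using only the standard library. The column names and normalization rules are hypothetical, purely for illustration — they are not our actual warehouse schema:

```ruby
require "csv"
require "time"

# Hypothetical cleaning step: normalize raw CSV rows before they are
# loaded into the warehouse. Column names are illustrative only.
def clean_rows(raw_csv)
  CSV.parse(raw_csv, headers: true).map do |row|
    {
      "user_id"   => Integer(row["user_id"]),              # fail fast on bad ids
      "country"   => row["country"].to_s.strip.upcase,     # normalize country codes
      "rented_at" => Time.parse(row["rented_at"]).utc.iso8601, # all timestamps in UTC
    }
  end
end
```

In a real pipeline, a task like this would be invoked by an Airflow operator and its output handed to Embulk for loading.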


We don’t over-engineer our processes: we keep workflows and tools simple, like our homemade automated release tool, connected to Slack, which removes headaches, helps avoid bugs in production and keeps us moving fast. But we’re also not afraid to challenge these processes regularly, and make additions in order to keep improving things.
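To sketch the idea behind that release tool: a Slack notification boils down to building a small JSON payload and POSTing it to an incoming-webhook URL. The channel name and message format below are made up for illustration — our real tool does more than this:

```ruby
require "json"

# Hypothetical sketch of the notification a release tool might post to
# Slack. The channel and message format are assumptions, not our setup.
def release_payload(version:, author:, commits:)
  {
    channel: "#deploys", # hypothetical channel name
    text: "Release #{version} by #{author} (#{commits.size} commits)",
    attachments: commits.map { |message| { text: message } },
  }.to_json
end

# Delivery would be a plain HTTP POST of this JSON to a Slack
# incoming-webhook URL (e.g. via Net::HTTP), omitted here.
```

Keeping the payload construction pure like this makes the interesting part trivially testable without touching the network.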