January 11, 2018 – Victor Mours – 3-minute read
If you don’t have a full day of free time ahead of you to watch all the talks, here are some highlights of the ones we recommend checking out first.
If you haven’t had time to play with async/await, you could read through the documentation by yourself, or you could just kick back, relax, and let Wes Bos do the explaining. He lays out really nicely how you can simplify your code when you’re chaining promises, and how to handle errors along the way.
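To make the comparison concrete, here is a small sketch of the same flow written both ways. The `fetchUser` and `fetchPosts` helpers are invented stand-ins for any promise-returning functions:

```javascript
// Hypothetical promise-returning helpers, defined inline for the demo.
function fetchUser(id) {
  return Promise.resolve({ id, name: "Ada" });
}
function fetchPosts(user) {
  return Promise.resolve([`${user.name}'s first post`]);
}

// Promise chaining: each step is a .then() call, errors go to .catch().
function postsWithThen(id) {
  return fetchUser(id)
    .then((user) => fetchPosts(user))
    .catch((err) => {
      console.error("failed:", err);
      return [];
    });
}

// async/await: the same flow reads top to bottom, with a plain try/catch
// for error handling.
async function postsWithAwait(id) {
  try {
    const user = await fetchUser(id);
    return await fetchPosts(user);
  } catch (err) {
    console.error("failed:", err);
    return [];
  }
}
```

The payoff grows with the length of the chain: intermediate values stay in ordinary variables instead of being threaded through nested callbacks.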
Trent’s talk explores the consequences of the arrival of Headless Chrome in our testing toolkit. Headless Chrome now ships with Puppeteer, an API for controlling the browser and its DevTools, which makes for a much better developer experience than old-school Selenium-driven scripts.
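For a taste of that developer experience, a minimal Puppeteer script looks something like the sketch below (it assumes the `puppeteer` package is installed; the URL is just a placeholder):

```javascript
// Minimal Puppeteer sketch: launch Headless Chrome, load a page,
// and print its title.
const puppeteer = require("puppeteer");

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto("https://example.com");
  console.log(await page.title());
  await browser.close();
})();
```

Everything the DevTools can do, from tracing to network interception, is reachable from the same `page` object, which is what makes it attractive as a testing foundation.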
Along with the DevTools profiler and the accessibility testing library
axe-core, these tools open up the possibility of a shift in what we expect from our tests. We can now go from “does this code work?” to “how well does this code work?”. This allows for a more nuanced, yet still measurable, way of evaluating our code.
I think what really lies beyond this is the topic of code metrics. We have a fairly established way of taking code validity into account in the standard development workflow: either the test suite passes and the build is green, so the code can be merged, or the build is red and the code must be fixed before merging. But there doesn’t seem to be a standard way of taking code quality metrics into account.
One way some teams deal with this is to consider the build to be green if it improves code quality metrics overall, and red if it doesn’t. But that process can break down easily. What if a change to the codebase were to considerably improve accessibility, while degrading performance a bit?
Would it be acceptable to merge it? Obviously the answer is “it depends”. If we want to make informed decisions based on code quality, we should also consider other factors that are harder to measure, such as code readability and maintainability.
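One way to make that trade-off explicit is a weighted merge gate. The sketch below is purely illustrative: the metric names and weights are invented, and a real team would pick both to match its own priorities.

```javascript
// Hypothetical merge gate: combine per-metric deltas (positive = improvement)
// into a single weighted score. Names and weights are invented for the demo.
const weights = {
  accessibility: 3, // weighted heavily: regressions here hurt users directly
  performance: 2,
  coverage: 1,
};

function buildStatus(deltas) {
  const score = Object.entries(deltas).reduce(
    (sum, [metric, delta]) => sum + (weights[metric] || 0) * delta,
    0
  );
  return score >= 0 ? "green" : "red";
}

// A change that improves accessibility a lot while degrading performance
// a bit can still come out green under this policy.
console.log(buildStatus({ accessibility: 5, performance: -2, coverage: 0 }));
```

Of course, collapsing several metrics into one number is itself a policy decision, which is exactly why the harder-to-measure factors above still require human judgement.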
I feel like these are still open problems, and I’m excited to see what comes out of the more widespread use of these tools in the years to come.
The video of the talk is not online yet, but this article from Trent’s blog is worth a read.