FT no longer has a Quality Assurance team.
Here’s how we arrived at that state.
In the mists of time QA staff, or testers, were our backstops - the people between where the coding stopped and where the stuff got deployed. After days, weeks or months of writing code, a team of testers would pore over the changes, marking the designers’ and developers’ homework before sending it back, or, in happier times, wielding a stamp of approval and pressing ‘deploy’.
But times change: teams adopted continuous delivery and, of necessity, automated much of the testing, deployment, and general health-checking of the ecosystem. Most forward-thinking projects now have a flow of code from localhost to production every day without any human intervention. It’s a productive way to work.
This left the traditional testing mindset disorientated - changes happen too frequently to test manually, and the testing is automated, so what am I actually testing here?
The phrase ‘shift-left testing’ has been a popular reaction to this.
The idea caught on at the FT - testers moved earlier in the development chain (no longer backstops) to understand the whys and wherefores of what was being built, to impart their knowledge to the rest of the team, and to act as a sort of omniscient quality chaperone throughout the project.
This was quite helpful - catching bugs early is cheaper, and coaching teams to think about quality and discuss what they are building throughout the development cycle (BDD and the like) helped us make progress.
Testing → Quality
So, the sole backstop no longer existed, and the testing team transformed into quality ambassadors by shifting left.
Testing in this sense, and particularly the word ‘quality’, is subjective, and has a much wider remit than a traditional box-ticking testing exercise - it’s a team game.
For example, take an idea for a new feature that you want to trial on a small percentage of users - quality here is about the calibre of the idea, and about validating it as cheaply as possible, through investment in A/B testing and the like. (Or perhaps the problem is that your team has no good ideas - in which case you need to invest effort in generating better ones, not in testing bad ones!)
In other areas we have complex engineering problems. Say, optimising the performance of the website. This requires investment across the project in the many factors that make web pages render quickly. Quality here is about nuanced trade-offs and a deep technical understanding of how the system is constructed.
Or take uptime, fundamental to the quality of the product: who decides what success looks like? The commercial department (expressed as revenue and reputation lost) or the engineering and product teams (expressed as cost-benefit)? And how do you ensure the operation that keeps something alive 24/7 works effectively? It’s a collaboration.
These are all quality concerns - but far removed from traditional testing skills.
With a broadening definition of quality, that cuts across engineering, product, UX & design, data etc., a dedicated QA team wasn't able to contribute as materially to our outcomes as they could in the traditional testing model.
Effort sunk into improving quality is ultimately about value, and in the context of the whole project some things are more valuable than others.
As we jump into the new year our thoughts turn to priorities for 2018 - what does this look like without a QA team?
I see three themes - automation, security and data.
While many projects run a fully automated delivery pipeline, some are not there yet. Top of our priorities for engineering investment is automating the hard-to-reach parts - the aesthetics of web pages, testing SaaS vendors whose automation API hooks are poorly developed, building more robustness (self-healing?) around critical features, and helping the projects still philosophically wedded to past traditions.
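Automating ‘the aesthetics of web pages’ usually means visual-regression checks. As a toy sketch of the idea (not our actual tooling - image capture, storage and naming are all glossed over here), compare a candidate screenshot against an approved baseline and fail the build if too large a fraction of pixels changed:

```python
# Toy visual-regression check: screenshots are modelled as flat lists of
# pixel values. Real tooling would decode rendered page images instead.

def pixel_diff_ratio(baseline, candidate):
    """Fraction of pixels that differ between two equally sized images."""
    if len(baseline) != len(candidate):
        raise ValueError("screenshots must be the same size")
    differing = sum(1 for a, b in zip(baseline, candidate) if a != b)
    return differing / len(baseline)

def looks_the_same(baseline, candidate, threshold=0.01):
    """Pass if no more than `threshold` of the pixels changed."""
    return pixel_diff_ratio(baseline, candidate) <= threshold
```

The threshold is the interesting design choice: too strict and anti-aliasing noise breaks every build, too loose and real regressions slip through.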
Another common theme in our conversations about next year is security.
How should we expand our growing suite of security tooling to ensure greater compliance and minimise risk (while maintaining velocity)? From visualising where risks lie in our estate and scanning for keys and other faux pas, to scaling penetration testing to validate each release, helping teams understand their exposure to security flaws in code we reuse, and diving deep into fraudulent behaviour.
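‘Scanning for keys’ is, at its simplest, pattern matching over source before it is committed. A minimal sketch (the AWS access key ID format - `AKIA` plus 16 uppercase alphanumerics - is well known; the other patterns are generic illustrations, not our real rule set):

```python
import re

# Hypothetical patterns for credentials that should never reach a repo.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hardcoded_password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
}

def scan(text):
    """Return (line_number, pattern_name) for every suspected secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings
```

Hooked into CI or a pre-commit hook, a non-empty result blocks the change; real scanners add entropy checks and allow-lists to cut down false positives.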
Which new teams need to form to swarm around these problems? Which hires do we need to make?
The last theme is data.
As the technical ecosystem grows at the FT so do the metrics - billions of data points describing the performance of thousands of interconnected systems.
What investment do we need to make in data and algorithms to raise the visibility of the state of play? What’s broken? What’s the consequence? What’s about to break? What happened? Who can fix it? Can we repair it automatically?
As an organisation we excel at using data to drive commercial decisions, but we are far less sophisticated when it comes to operational intelligence - reacting to and foreseeing problems, both in the back office and in our user interfaces.
Moving the investment we had in manual testing and quality ambassadorship to focus on these problems will, we think, yield greater returns.
Quality used to be focussed on testing, so we built a large testing team; but when we think about quality as a whole, there is often more value to be had in other areas, and capturing it requires a multidisciplinary team.
Having a dedicated QA team wasn’t helping everyone focus on baking quality into their projects and processes.
We decided that upholding or checking quality is no individual’s responsibility - it’s the team’s call. Quality is part of all our roles, a collective duty. The team can choose to invest time in adding quality (in all its guises) where they see it adding value.
The leadership of the project, departments, company etc. can help incentivise, orchestrate and steer investment in things that will improve this holistic idea of quality.
Removing dedicated QA roles and reinvesting in engineering skills reinforces that principle.