We ran interviews with 50 users over 2 quarters in 4 countries (UK, USA, Germany, France). They represented our key markets, and while we had done plenty of research around their job-search experience, until now we had always set aside feedback about career progression. In these interviews, users described their difficulties balancing everyday life with opportunities to develop professionally.
During a 2-day workshop in Paris, we developed our set of personas, a subset of CareerBuilder's core persona groups.
We didn't uncover everything up front. Our early designs exposed our ignorance of a field in which we held a strong overconfidence bias.
The team embraced pivoting whenever a hypothesis was falsified, because we loved the problem rather than our solutions.
Users behaved as expected in the absence of data, but responded in peculiar ways once it became available. I wrote more on this pattern in an article entitled 'Building features nobody wants'.
Showing users the information alone wasn't enough; they often needed a direct, actionable insight to take away from the data. This forced us to rethink how we presented the data we were mining from our system, adding another layer of abstraction to make it useful.
Users felt a strong compulsion to compare themselves to their peers, especially on skills. They saw this as an opportunity to upskill and improve their career prospects.
We experimented wildly with a plethora of visualisations, but when we ran A/B tests against our control (a bar chart), the control won every time. Users grasped the mechanics of a bar chart in a way they never did with a Voronoi chart, a Sankey diagram, or any other elaborate visualisation.
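To make that kind of comparison concrete, here is a minimal sketch of how one such A/B result might be checked for significance. Every variant name and count below is hypothetical, invented purely for illustration; the actual tests simply pitted each visualisation against the bar-chart control.

```python
# Hypothetical sketch: compare comprehension-task results for each
# visualisation variant against the bar-chart control using a
# chi-squared test. All counts below are invented for illustration.
from scipy.stats import chi2_contingency

# (correct answers, incorrect answers) on a comprehension task per variant
results = {
    "bar_chart_control": (182, 18),
    "voronoi_chart":     (121, 79),
    "sankey_diagram":    (134, 66),
}

control = results["bar_chart_control"]
for name, variant in results.items():
    if name == "bar_chart_control":
        continue
    # 2x2 contingency table: rows = variant, columns = correct/incorrect
    table = [list(control), list(variant)]
    chi2, p, _, _ = chi2_contingency(table)
    print(f"{name}: chi2={chi2:.2f}, p={p:.4f}")
```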
CareerBuilder had several competing skills-parsing engines available to them. These algorithms looked at the words in a PDF and matched them to skills where possible. We built a small experiment to test which engine users felt best represented their parsed CV.
Users uploaded their CV, one of the three engines parsed it, and they then ranked how relevant the tagged skills were. We later used this data to select one of the parsing engines, as well as to give feedback to the teams working on the three competing engines.
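As a rough illustration of the keyword-matching idea behind these engines, here is a minimal sketch. The taxonomy, CV text, and function name are all hypothetical; the production engines were considerably more sophisticated.

```python
# Minimal sketch of keyword-based skill tagging: scan a CV's text and
# match it against a known skills taxonomy. Everything here is a
# hypothetical stand-in for the real parsing engines.
import re

SKILLS_TAXONOMY = {"python", "sql", "project management", "javascript"}

def parse_skills(cv_text: str) -> set[str]:
    """Return taxonomy skills found verbatim in the CV text."""
    normalised = re.sub(r"\s+", " ", cv_text.lower())
    return {skill for skill in SKILLS_TAXONOMY if skill in normalised}

# In the experiment, each engine's tags for the same CV would then be
# shown to the user, who ranked how relevant they felt.
cv = "Senior engineer with Python, SQL and project management experience."
print(parse_skills(cv))  # {'python', 'sql', 'project management'}
```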
We used this interface to stress-test 3 different skill-tagging engines.
Comparing various data points to one another.
An example of doing what users say rather than what they need.
One of several attempts to diversify our visualisations.
Iteration 50 looked like this.
Using gauges to give a high-level skill match.