At the Gamification Europe event, projects, individuals and companies were awarded for their efforts over the past 12 months (and probably longer). As a winner of the Outstanding Gamification Agency Award in 2017, I was privileged to join the other previous winners, sponsors and other industry experts in judging the current crop of entries.
I know Dr Michael Wu spent hours trying to come up with an ideal format for this year’s competition, and as he says, making the awards better over time is a continuous improvement process. The criticism of the 2017 awards was that the popularity contest dominated the voting, so the aim was to improve on this. As a first measure, the previous winners were taken out of the equation and levelled up into judging positions. Also, each participant this year could only enter one category rather than hedging their bets across all of them.
From the organisers’ perspective, they want to use the awards as a way to drive more community sharing as well as to reward excellence. Hence the popularity contest element stayed, and the judges reviewed all of the entries to help draw up a shortlist of finalists. Judges also had a booster option to select one entry that didn’t make it into the finals and bring it forward. Then some additional game mechanics, such as golden tickets and quantum leaps, were added to give participants a chance to improve their position after the panel of judges had completed a second review and detailed scoring on creativity, design and impact.
Below you will find the winners of 2018 with their award entry YouTube information. Congratulations to all, and I hope it spurs you on to the next level of success.
Outstanding Gamification for Inclusion and Diversity: Culture Shock
Outstanding Gamification Project in Audience Engagement: Siemens revolutionises selection process with Game-Based Assessments
Outstanding Gamification Project in Learning: Think Codex Customer Service Training Simulation
Outstanding Gamification Research: escapED
Outstanding Gamification Rookie: GamUp
Outstanding Gamification Software: Gamehill – Gamified Learning platform
If you also want to peruse the other finalists, then the best place to find them is the finalist listings.
My reflections as a judge
It took a lot of effort to review all the entries; without tracking it exactly, I would hazard a guess that over 20 hours of work went into giving everyone a fair review. On top of a busy schedule, that is no mean feat, but all of us were in the same boat, and I think it is also educational to see what people put forward. I want to make some general observations, which I hope will help future participants.
In the original round before shortlisting, we saw a few entries I would call opportunistic, where either no project had been completed yet or the entrants simply considered themselves greater and better than everyone else. I am glad all my fellow judges agreed not to put these forward. I was a bit bemused by these entries, though. I personally didn’t understand why you would enter for an award when you had not completed the project or had no evidence to back up your claims.
I liked the judge booster option, which let each judge shortlist one entry they felt was deserving but hadn’t made the original grade based on the grouped scoring of the full panel.
The one thing that varied significantly across all categories was how impact was recorded and demonstrated. In business, we are always asked about return on investment, impact and benefits, preferably as quantifiable as possible. In the entries, we had soft measures ranging from anecdotal feedback to self-assessed confidence, through to more quantity-driven measures around hits, clicks, interactions and completions, which in reality are not an ideal reflection of impact. A very limited few of the entries demonstrated seriously meaningful impact on things like net promoter score, bottom-line numbers, and knowledge retention and confidence levels measured immediately afterwards and again at 3, 6 and 12 months, with significant sample sizes.
As an industry, impact is what matters to our end customers, so adopting the latter measures as impact indicators will also help improve how we are seen as professionals.
What was sometimes difficult to judge was the actual solution design and visuals; for a lot of entries, they were simply missing.
What I absolutely didn’t agree with personally is that the quantum leap used by a participant could completely overturn the results in a category after the judges had cast their final round votes. First of all, not all participants understood the concept of the quantum leap, and because of missed emails or unread blog posts (due to travel or access issues), not everyone used it. Secondly, I felt it made a mockery of my time spent judging. Why should a participant have the last word when we are trying to reward ‘outstanding’ performance?
For me, the quantum leap option is an example of bad design. Its placement in the judging process is something to be revisited, if it is used at all. If you use it, then use it at the start: most entrants will know they have a weaker area and can address it from the beginning. The golden ticket, which wasn’t active due to time limitations, would have had the same outcome of giving those who understood how it worked an unfair advantage over others, because they could receive coaching and then re-submit their entry. Again, at the beginning, if everyone knows how it works, then yes, it makes sense; used at the end, it makes a mockery of ‘outstanding’, because effectively you are giving participants the right, and the encouragement, to game the system.
I personally feel that if we are looking to reward excellence, then we should just have a panel of experts giving marks against clear-cut criteria, and keep it simple. I encouraged a number of clients to enter, but they didn’t, because of the complexity of the process and because they didn’t want to enter something they didn’t understand. I am not convinced that encouraging community and participant participation throughout a judging process is helpful. I feel a community forum may be more useful if curated well, with the tone kept encouraging for all rather than becoming a show-off forum. Maybe a Stack Overflow-style model of sorts.
I think it is great that there are awards and that everyone is looking to keep improving how they are run. So I am hopeful that the next iteration will once again be an improvement on the current version, because I do believe this year was an improvement on the popularity-driven contest of the previous year.
Anyway, well done again to the winners and for everyone else, there is always next time.