There have been some comments asking for specific details on the system I am trying to build, which are now at the bottom of the question.
I am designing a system that allows users to enter content describing certain physical objects. Objects can have attributes such as height, weight, a name, a description, and so on. Different objects can be stored, with different relationships between them, and up to 10 different attributes per 'item'. However, users have historically been reluctant to enter values for all of these fields due to time constraints or perceived effort. There has been a strong organisational push to force these people to add the fields after the fact, but this has not been wildly successful.
While it would be possible to enforce certain fields, this isn't always an option, as the system must allow blank attributes to be stored so that work can be saved incrementally. One possible strategy is gamification, with a progress bar that shows how "complete" an item is.
A possible solution I am working on is that each field could have rules that are assigned a weight, and the sum of these weights is how "complete" each item is. For example, the StackOverflow careers site does this, by doing some magic on your CV and telling you how "complete" it is:
The initial thought is that the system can have rules to check each field against (specifically, rules that can be processed by a computer):
Rule 1. If there is a field named "name" that has content, add weight 10
Rule 2. If there is a field named "description" that has content, add weight 5
Rule 3. If there is a field named "description" over 100 characters, add weight 5
So an item:
name: "cat" description: "a creature of the genus feline"
would be given a completion score of 15/20 or 75%.
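A minimal sketch of that weighted-rule idea, assuming rules are (field, predicate, weight) triples - the structure and names here are illustrative, not an existing implementation:

```python
# Hypothetical sketch of the weighted-rule completeness score described above.
def score(item, rules):
    """Return (earned, possible) weight totals for an item against a rule list."""
    earned = 0
    possible = 0
    for field, predicate, weight in rules:
        possible += weight
        if predicate(item.get(field, "")):
            earned += weight
    return earned, possible

rules = [
    ("name",        lambda v: bool(v),      10),  # Rule 1: name has content
    ("description", lambda v: bool(v),       5),  # Rule 2: description has content
    ("description", lambda v: len(v) > 100,  5),  # Rule 3: description over 100 chars
]

item = {"name": "cat", "description": "a creature of the genus feline"}
earned, possible = score(item, rules)
print(earned, possible, f"{earned / possible:.0%}")  # 15 20 75%
```

The description is under 100 characters, so Rule 3 doesn't fire and the item scores 15 of a possible 20.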
However, I am concerned that the organisation will get too focused on everything being 100%, rather than everything being "good", and likewise that people will add cruft to fields where it isn't needed.
Another possible solution I'm considering is a 'star'-based approach, where a 5-star ranking could be achieved with less than 100%. As seen in the rules above, the description has two rules attached to it, so depending on how scores are scaled, the cat example could still get 5 stars. However, if there are still improvements to be made, what could be done to encourage more work?
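One way to scale this is to treat some fraction of the possible weight as already deserving full marks. A sketch, where the 80% "full marks" threshold is my assumption, not a figure from the question:

```python
def stars(earned, possible, full_at=0.8, n_stars=5):
    """Map a raw score to a star count, treating `full_at` of the
    possible weight as already deserving the maximum rating."""
    frac = min(earned / (possible * full_at), 1.0)
    # Round to the nearest whole star.
    return round(frac * n_stars)

print(stars(15, 20))  # the cat example: 15/20 clears the 80% bar's worth -> 5 stars
```

Under this scaling the cat item earns 5 stars at 75% raw completion, so a concise-but-good item isn't penalised for skipping the long-description rule.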
So my challenge is:
About the system I am building:
I understand this was a bit of a tough question that doesn't give concrete examples - but that was for two reasons, a) I didn't want to look like I was promoting a product and b) it has some very technical depth that might obscure the problem. I'll provide a brief background below, and for more details you can read this post I wrote seeking feedback from my community.
I am the lead developer for an ISO/IEC 11179 open-source metadata registry (think column descriptions, not telephone numbers), that has wide use in academic and government fields for describing data columns with extreme specificity for governance purposes and to ensure that data that is returned meets certain committee guidelines. Writing this content is dry and tough.
For example, we all take the concept of "the age of a person" as a given, however it's not - for that to mean anything you need to first define the object class "person", then define the property "age", then define what the concept "person-age" really means. Before you ask: yes, it's important to define these three things; there are very strong domain reasons for this that aren't worth going into here.
Along with this every item can have fields such as:
It's been noted that people have previously been reluctant to fill some fields out, due to perceived workload or lack of context, but have ultimately been required to go back and fill fields in under managerial direction - i.e. all stick, no carrot. Also, since this is a non-volunteer business system, ranking users is not an option. Similarly, giving points for the number of changes to items isn't applicable, as this is non-optional work; they have to do it.
What I am trying to work on, however, is a way to 'grade' the 'quality' of each metadata item. Without getting too technical, managers will be able to specify validation rules that grade each field, and these grades can be summed to give an indication of how complete an item is. If we assume that a raw score can be given - e.g. an Object Class might be ranked out of 75, while a Data Element Concept might be ranked out of 104 - then I am thinking these would need to be normalised to some standard figure.
An example of this is below, where an item could be given a star rating or progress bar. The idea is that these would be there from the very first change on a brand new item, so it's likely that many items would start at 0 and build up over time.
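Normalisation could be as simple as dividing by the per-type maximum, so different item types land on the same 0-100% scale. A sketch, where the per-type maxima just echo the figures above and the raw scores are made up for illustration:

```python
# Per-type maximum scores (assumed, echoing the examples in the question).
MAX_SCORE = {"ObjectClass": 75, "DataElementConcept": 104}

def normalised(item_type, raw):
    """Normalise a raw score to the range 0..1, regardless of item type."""
    return raw / MAX_SCORE[item_type]

print(f"{normalised('ObjectClass', 60):.0%}")         # 80%
print(f"{normalised('DataElementConcept', 83):.0%}")  # 80%
```

Two items of different types with different raw scores can then show the same progress bar or star count.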
The main thing I am trying to ensure is that whatever ranking system is used encourages users to fill in content that is applicable, and to understand that rankings aren't fixed, while not striving for 'progress bar' completion if that's undesirable.
The example here is "Person-age" from above. The description for this might be quite short, as it's a relatively self-explanatory idea. But suppose we rank a "Person-age" out of 104, where 2 of those points are given if the description is over 50 words long. If the short description is good, how do we help a user be satisfied with a score of 102, and stop them from writing more just to get 104 when that might not be optimal?
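One hedged option for that 102-vs-104 problem: treat length-style rules as bonus weights that don't count toward the denominator, so a concise item can already show 100%. The split into core and bonus rules is my assumption, not part of the question:

```python
def completeness(earned_core, possible_core, earned_bonus=0):
    """Core rules define 100%; bonus rules (e.g. 'description over 50
    words') can add points but never drag an item below full marks
    just by being unmet."""
    return min((earned_core + earned_bonus) / possible_core, 1.0)

# "Person-age" with a short but good description: all 102 core points.
print(f"{completeness(102, 102):.0%}")                 # 100%
# A longer description earns the 2 bonus points but can't exceed 100%.
print(f"{completeness(102, 102, earned_bonus=2):.0%}")  # 100%
```

The user who writes a concise, good description sees the same 100% as the user who padded theirs, removing the incentive to write more for its own sake.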
You bring up a very interesting challenge. Most people tend to reach for gamification, as you initially have, due to lack of engagement; the possibility that users will want to engage so much that they actually start entering bad data, or taking actions they otherwise wouldn't, is usually overlooked. But then again, is it a real problem?
A progress bar is a progress bar. A user is going to interpret it as they wish, so I don't believe spending a lot of time on this portion is going to yield many results. Ultimately, you're going to end up with some sort of tracker, whether that's a percentage, a number of stars, or a dancing purple penguin with different amounts of lollipops in its hand; you have no way to guarantee the user will interpret it how you want them to. The best you could do is something like SO, where an item has a total number of points earned: the number can always increase, so you can prioritise the lowest-scored items while still rewarding users for updating older items even if those have already earned a lot of points.
We don't have too much information on the types of data you're gathering, but you could try to validate certain things like length or specificity. For example, with a weight field, you could give more points or stars for an entry of 34.23 oz than for 30 oz. Unfortunately, there's no way to prevent a user who truly wants to game the system from gaming it, unless you're going to pour resources into machine learning, heuristics, and manually reviewing every edit, which would ultimately cost you more than you'd get out of implementing something like this in the first place.
You could offer a default value (e.g. a single point or .1 of a star, etc.) for every edit a user makes. This again would be easy to abuse, but also easy to automatically audit and see if a user is abusing, while still giving users who go back to update a completed field with a new or more specific value some type of benefit in the form of some points.
I'd stay away from a dual system, mainly because you said you're already having engagement issues, and creating a complex or complicated gamification version isn't likely to help.
Here's a quick bullet list of my summed up thoughts:
There are a million different variations you can do for scoring, but ultimately your entire goal should be to increase user engagement; anything beyond that should be a welcome problem compared to where you currently are.
I'd suggest not using a progress bar or any explicit measure; instead, give candy. The first step is to get your users to enter an acceptable level of data; once this is done, show them a green light.
Next, encourage them with messages to enter more data. If they enter any data, give them a star or other token (designed in keeping with the nature of your system).
Pat their head for entering that data, but say they can now earn another star if they enter more. That way they are rewarded for work done, not for "entering 10 items = 100%".
Make the first star easy to earn (one piece of data), but make any further stars harder work. Also keep different weights for different kinds of data.
You should have some kind of moderation where other users can check the data that has been entered. That would mitigate the quality problem.
Ideally the star should have some kind of meaning elsewhere in the system, i.e. stars can be spent on something meaningful or give status to the user. This is the key to gamification I think -- earning useful rewards, not simply seeing a status bar at 100%.
I feel like you might be jumping to a solution before you've really decided if it is right for the problem. It sounds like the real issue is that data entry sucks - which it does - but is gamification a solution for this or just a band-aid?
Let's take a different example: you're onboarding a user and show them how to add something new. Instead of putting effort into making the animations and flow of doing that onboarding, why not make creating that first item so dead simple and obvious you don't even need the onboarding?
Is there any way you can make the data entry easier or more fun? Use sliders, autosuggestion, or common answers? Often introducing some physical gesture (like drag and drop or sliders) can make it less tedious. Show only one data field at a time (this might be a good place to insert gamification if you want: "almost there", make the item go from weak to healthy, etc.). Or show as many as possible and make it super easy to tab to the next one or select from preselected options? If you offer a suggestion, the user might be more motivated to correct it.
Gamification might indeed be a good solution for you, but it can be saccharine and annoying if overused. I'd go back to square one and really figure out a way to solve the real problem, which is that no one likes data entry.
If an item is almost complete, add a "Sign off as complete" button below the details. If the user presses the button, the item is set to the state Complete and gets an extra line "Signed off as complete by UserName". The company should be clear that users should only sign off on really complete items (with periodic spot checks on items which are marked as Complete).
An item can also be marked as Complete while it is in progress (for the 10% of items needing fewer fields), but in this case either two people have to approve it as complete, or it is always flagged for management to check. There should be a checkbox "This item is complete and needs only X fields" before the "Sign off" button becomes available in this case.
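The sign-off rules above could be sketched as a small eligibility check. The "needs fewer fields" flag and the two-approver rule come from the description; the function name, field names, and example data are all illustrative:

```python
def can_sign_off(item, user, required_fields, approvals=()):
    """An item may be marked Complete when all required fields are
    filled, or when it is flagged as needing fewer fields and a
    second approver agrees with the signing user."""
    filled = all(item.get(f) for f in required_fields)
    if filled:
        return True
    # Partial items need the "needs only X fields" flag plus two approvers.
    return item.get("needs_fewer_fields", False) and len(set(approvals) | {user}) >= 2

item = {"name": "cat", "description": "a creature", "needs_fewer_fields": True}
required = ["name", "description", "definition"]
print(can_sign_off(item, "alice", required))           # False: one approver only
print(can_sign_off(item, "alice", required, ["bob"]))  # True: two approvers
```

Items failing the check would stay in their in-progress state, and the flagged-for-management path could be modelled the same way with a different threshold.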
People will try to get all items to be green checkmarks in the system, but will only sign off on really completed items, because their name is recorded as "signed off by", so they will probably check the quality. On the other hand, you shouldn't rank users on the number of checkmarks they achieve, because 90% of the work on an item could be done by user A and the checkmark given by user B.