When conducting usability testing, one variable that comes up is users' competence or familiarity with Internet or software application technology. It seems odd to compare different usability studies and their results if there is no indication of the type of users being tested. Is there a standard test for establishing a baseline of user competency or proficiency with general computer technology? I imagine this would be useful for older users who may not have had as much exposure to computers, to work out how much difference there might be in task completion rate and task completion time.
I am interested in whether this is a standard item provided in a usability report or carried out as part of usability testing screening, and if so, whether there is a commonly used standard or whether it is generally product/project specific.
Update: Looking at the 2007 Microsoft Office Fluent UI study of information workers by Forrester, I noticed that they defined General versus Advanced users by asking two questions about Microsoft Office product usage and also showing participants examples of advanced features. Only people who fit the criteria for advanced users were classified in that category. Could something similar be developed and standardized for usability testing in general?
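To make the idea concrete, a rubric like the one described above could be encoded as a simple classification rule. This is only a hypothetical sketch: the usage threshold, the feature list, and the two-criteria rule below are invented for illustration and are not taken from the Forrester study.

```python
# Hypothetical two-question screener rubric, loosely modeled on a
# General vs. Advanced split. All thresholds and features are invented.

ADVANCED_FEATURES = {"pivot tables", "macros", "mail merge", "conditional formatting"}

def classify_user(hours_per_week: float, features_used: set) -> str:
    """Classify a participant as 'advanced' only if they meet BOTH criteria."""
    uses_advanced_features = len(features_used & ADVANCED_FEATURES) >= 2
    heavy_usage = hours_per_week >= 10
    return "advanced" if (uses_advanced_features and heavy_usage) else "general"

print(classify_user(15, {"pivot tables", "macros"}))  # meets both criteria
print(classify_user(15, {"spell check"}))             # heavy use, but basic features
```

The point of requiring both criteria, as in the study's definition, is that self-reported heavy usage alone does not imply advanced skill.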
There isn't any, because "computer proficiency" is a vague term. Is a programmer more computer proficient than a secretary who can lay out a colorful document in Word? Something as vague as computer proficiency cannot be measured and therefore cannot be part of a useful research question.
So, you need to make your proficiency test specific to the thing you're actually testing for. Does being a programmer make you a better web surfer? It's the same outside this specific field: e.g. is a carpenter better at painting than a musician?
If you need to establish a user's basic skill level, you can use a questionnaire like the one in How to best ask for computer experience in a survey?. However, it would probably be best to test the actual skill level in the exact area you're interested in: questionnaires carry the risk of users over- or understating their actual skill level.
Before the study ever begins, participants are recruited by asking them a series of survey-like questions (the "screener") which is designed to include the types of people you want to test and exclude the people you don't.
If the goal is to test a diverse audience, the screener will include questions that flag people for various traits: for example, people who are unusually technical, people who have an average level of technical knowledge, and people who are fairly inexperienced with technology.
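A screener like this is often scored by summing points across answers and bucketing respondents into tiers. The scoring scheme, question names, and cutoffs below are assumptions made up for this sketch, not a standard instrument.

```python
# Minimal sketch of bucketing screener respondents into experience tiers.
# The point values and cutoffs are invented for illustration.

def experience_tier(score: int) -> str:
    """Map a summed screener score (0-10) to a recruiting bucket."""
    if score >= 8:
        return "highly technical"
    if score >= 4:
        return "average"
    return "inexperienced"

# Each answer contributes points; sum them and bucket the respondent.
answers = {"builds_own_pc": 3, "uses_keyboard_shortcuts": 2, "daily_web_use": 1}
print(experience_tier(sum(answers.values())))
```

In practice the recruiter uses these buckets to include or exclude candidates until each target group is filled.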
The report may or may not break results down by user group, depending on a variety of factors. For example, if 100% of participants failed to see the "sign up" button, a breakdown by group may not be helpful. However, if 0% of the inexperienced and average participants saw the "sign up" button while 100% of the very experienced participants did, that would definitely be mentioned in a usability report. (Exaggerated for the sake of simplicity.)
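The per-group comparison in that exaggerated example amounts to a success rate per group. A minimal sketch, using entirely made-up participant results:

```python
# Illustrative per-group task success rates; the participant data is invented.
from collections import defaultdict

# (experience_group, found_signup_button) for each participant
results = [
    ("experienced", True), ("experienced", True),
    ("average", False), ("average", False),
    ("inexperienced", False), ("inexperienced", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [successes, total]
for group, success in results:
    counts[group][0] += int(success)
    counts[group][1] += 1

for group, (successes, total) in counts.items():
    print(f"{group}: {successes}/{total} found the button")
```

When the rates diverge sharply between groups like this, the group definitions from the screener become an essential part of the report.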
Here's a good overview of a typical screener process: http://www.uie.com/articles/usability_testing_three_steps/