Usability Issues Associated with Converting Establishment Surveys to
Web-Based Data Collection
Jean E. Fox
William Mockovak
Sylvia K. Fisher
Office of Survey Methods Research, Bureau of Labor Statistics
1. Introduction
In an effort to combat non-response, survey managers continually seek new ways to encourage
respondents to participate in their surveys. One approach is to offer respondents the option of selecting
from multiple reporting modes so that they can select the mode they prefer. The Internet is one of the newest modes available and offers a variety of benefits. For example, respondents can access the Internet easily from their desktop PCs, so they can complete the survey at their convenience. Properly designed surveys can introduce instructions, edits, and help screens that simplify the respondents’ task by guiding them through the completion process. From a survey manager’s point of view, the Internet eliminates or reduces data entry costs, because respondents enter data themselves. Further, Web surveys can check data as the respondent works, so the need for follow-up phone calls or post-data collection processes is minimized. With these obvious benefits, the Internet offers the potential for enhancing response rates, improving data quality, and improving timeliness of reporting. In addition, the potential for cost savings also exists, although in some cases offering an additional data collection mode might actually increase costs.
On the other hand, there are some possible drawbacks to Web data collection. One major problem is that the use of multiple data-collection modes complicates data integration and survey operations, such as follow-up efforts. Further, developing Web surveys can actually raise up-front costs. The cost of building, maintaining, and integrating different systems is expensive. Moreover, preliminary research with Web surveys indicates that rather than enhancing response rates, offering multiple modes can actually depress overall response rates (Griffin et al. 2001). For a detailed discussion of these and other problems, see Fricker and Schonlau (2002).
In establishment surveys, the Internet is likely to be one of several reporting options that may include mail, phone, and fax. While respondents may select another reporting mode if a Web survey is too difficult to complete, they may also decide not to report at all. Further, since respondents often participate in more than one government survey, a respondent may generalize from a negative experience on one Web survey to others, even though different agencies may be involved. Therefore, to encourage participation, survey managers need to design Web surveys that will provide as positive an experience as possible for the widest range of respondents. A key element of that design is ensuring the usability of Web surveys.
This paper focuses on the usability of dedicated Web-based government surveys, where usability is defined as the effectiveness, efficiency, and satisfaction experienced by respondents as they provide the requested survey data. At BLS, we are dedicated to developing usable Web surveys. This paper describes our experiences and lessons learned in designing Web surveys for establishments.
2. Usability Issues in Web Survey Design
As with any new technology, early attempts to develop Web surveys have relied largely on existing conventions for Web design, coupled with research on designing surveys for other modes, and the personal preferences of designers. This heuristic approach is understandable, because research regarding the design of large government Web-based surveys is still limited. However, after some experience at BLS, we have identified several important issues related to the usability of Web surveys.
Following are some of the design considerations and constraints that we believe federal survey managers should be aware of when considering the use of Web-based surveys.
2.1 Importance of Standardization across Surveys
Many government agencies conduct numerous establishment surveys, which means that in some cases, the same establishment (and respondent) responds to more than one survey. From a respondent’s perspective, it is logical to expect that the look and feel of all Web surveys from the same agency will be similar. To accommodate respondents and allow for adequate security, the Bureau of Labor Statistics offers a common portal or gateway into its data collection Website, called the “Internet Data Collection Facility” (IDCF).
In addition to a common gateway, the IDCF requires that all surveys meet internal standards for user interfaces.¹ One of the challenges of applying these standards was that the early adopters (i.e., surveys introducing Web collection first) were designing their Web surveys as the standards were being developed. Therefore, these survey managers had the extra responsibility of providing input to determine appropriate standards. On the other hand, later adopters were faced with established standards that were not quite appropriate for their purposes. Once standards are in place, they are often difficult and costly to change. At BLS, we are just beginning the process of reviewing our standards.
We expect that support for changes will come from research, from respondents, and from requests made by survey managers using Web-based data collection.
2.2 Consistency across Survey Data-Collection Modes
Research has found that different modes of data collection for identical content can produce different results (e.g., Dillman, 2000; Dillman et al., in press). As noted by Couper², the design of Web surveys is important because they are self-administered, interactive, visual, potentially multimedia, and are distributed over a wide variety of hardware and software systems. This last characteristic is especially important because the most carefully laid out design can appear quite different depending on the respondent’s hardware and software configurations.
Therefore, if a survey uses multiple data collection modes, survey managers need to ensure that comparable data are being collected using the different modes. Since federal establishment surveys deal largely with reports of factual information, some survey managers may discount research findings on multi-modal differences, because these studies have dealt primarily with attitude questions or question formats not typically used in establishment surveys. However, caution is warranted. Assuming that different data collection modes do not affect the reporting or accuracy of establishment data may be a questionable hypothesis until the necessary research is done.
¹ GUI and HTML Standards. Internal Bureau of Labor Statistics document.
² Workbook for JPSM seminar in Web Survey Design, February 18-19, 2003.
2.2.1 Creating a Unique Design for the Web vs. Reproducing the Paper Form
Some survey managers immediately assume that, when converting a paper form to the Web, the best design is simply an electronic copy of the paper form already in use.
The argument for this approach is that respondents who are already familiar with the paper form will transfer their knowledge of the paper form to the Web version of the form and, therefore, have little difficulty completing the Web version. Also, it may be tempting to believe that using an electronic copy of the paper form will result in similar data collection results across all collection modes. However, as mentioned above, the representation of the form may be affected by the respondent’s hardware and software configurations. At a minimum, a computer screen and a piece of paper are very different types of displays and may require different types of behaviors from the respondent.
The “direct copy” approach would seem to work best when the form is fairly simple, it can be displayed with little or no scrolling, and screen display concerns have been addressed. Surveys that are longer and more complex often need a different interface for the Web version to avoid usability concerns. These surveys can also take advantage of automated skip patterns and edits to streamline the respondent’s effort.
Another concern is that the direct copy approach may discourage Web reporting. If respondents are completing exactly the same form, they might wonder why they should expend the additional effort necessary to enter data on a computer, which requires the additional step of signing or logging on.
Since the Web and paper are two different modes, each has its own advantages, which should be exploited. For example, paper allows more of the survey to appear on a single page and affords more flexibility in layout and formatting. The Web can walk respondents through the process using automated skip patterns, exposing them only to the relevant parts of the survey, and can provide validation checks where appropriate. Our experience at BLS has been that program managers prefer to start with the “direct copy” approach but, once they see the actual product, readily make the transition to designs that take better advantage of the computer.
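The automated skip patterns mentioned above can be sketched as a simple routing table, where each question names the next question to show, possibly depending on the answer given. The following Python sketch is purely illustrative; the question IDs, wording, and routing rules are hypothetical and not drawn from any actual BLS survey.

```python
# Minimal sketch of an automated skip pattern. All question IDs and
# routing rules here are hypothetical examples.

QUESTIONS = {
    "q1_has_employees": {
        "text": "Did this establishment have any paid employees this month?",
        "next": lambda ans: "q2_headcount" if ans == "yes" else "q9_submit",
    },
    "q2_headcount": {
        "text": "How many paid employees?",
        "next": lambda ans: "q9_submit",
    },
    "q9_submit": {"text": "Review and submit.", "next": None},
}

def route(start, answers):
    """Return the sequence of questions a respondent actually sees."""
    path, current = [], start
    while current is not None:
        path.append(current)
        nxt = QUESTIONS[current]["next"]
        current = nxt(answers.get(current)) if nxt else None
    return path

# A respondent with no employees skips the headcount question entirely.
print(route("q1_has_employees", {"q1_has_employees": "no"}))
```

The point of the sketch is that the respondent is exposed only to the relevant parts of the survey, something a static paper form cannot do.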
2.3 Security and Confidentiality on the Web
Our gateway requires identical log-on procedures for all surveys, but two security options are offered:
(1) Personal ID Number (PIN) and password or (2) digital certificate. A digital certificate offers a higher degree of security, but is somewhat complicated for respondents to obtain. Initially, digital certificates were confusing to users, but after usability testing and a change of vendors, the process was simplified substantially.
Although easier to use, the PIN and password approach also presents possible difficulties. The log-on information must be sent to respondents, which, in itself, presents some security concerns. Existing security requirements also demand the creation of a fairly complicated permanent password (it must meet multiple criteria) that many users are unaccustomed to and find confusing. Finally, respondents must be able to recall permanent passwords for future access to the system. To minimize confusion with temporary passwords, we have found it helps to provide passwords that do not contain 0 (zero) or o (oh), or 1 (one), l (el), or I (eye), as these characters may be difficult to differentiate.
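The character restriction just described can be implemented by simply excluding the ambiguous characters from the generator’s alphabet. The following Python sketch is a hypothetical illustration; actual BLS password rules involve additional criteria not shown here.

```python
import secrets
import string

# Characters that are easy to confuse in print or on screen:
# 0 (zero) vs. o/O (oh), and 1 (one) vs. l (el) vs. I (eye).
# Illustrative sketch only; real password policies impose further rules.
AMBIGUOUS = set("0Oo1lI")
ALPHABET = [c for c in string.ascii_letters + string.digits if c not in AMBIGUOUS]

def temporary_password(length=10):
    """Generate a random temporary password with no ambiguous characters."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pw = temporary_password()
assert len(pw) == 10 and not AMBIGUOUS & set(pw)
```

Using the standard-library `secrets` module rather than `random` is deliberate: passwords are security-sensitive values and should come from a cryptographically strong source.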
Although necessary to protect respondents’ confidentiality, Web security procedures introduce an additional hurdle compared to other response modes. In addition to increasing respondent burden, these gateway functions increase operational demands on the surveys and require a larger support or help staff. For example, Web reporting for the Current Employment Statistics survey generates ‘trouble tickets’ from about 15 percent of the sample each month, versus about 4 percent for the long-established touchtone data entry help desk.³
2.4 Validation Checks
Obviously, paper forms lack any type of validation checks or edits. Therefore, one might assume that any editing done in a Web form would automatically result in improved data quality, as well as save money by reducing the number of follow-up phone calls. On the other hand, a delicate balance exists between the survey designer’s need for the highest possible data quality and the burden imposed on a respondent when trying to respond to edits. If the scale tips too far, the overuse or improper use of edits could lead to frustration, increased burden, and possibly premature exits from the survey or refusals to report in the future. It is important to keep in mind that edits are critical to the overall design and should not be viewed as an afterthought to be dealt with as a last step in the design process.
Although the use of some edits may seem perfectly justified, another issue concerns their enforceability.
Surveys use both hard and soft edits to distinguish between required and recommended changes. If a hard edit is triggered, respondents must address the problem to continue. On the other hand, if a soft edit is triggered, respondents are notified that there may be a problem, but they are not required to make any changes. A related question regarding edits in Web surveys is when they should be used.
Possibilities include (1) immediately after an entry is made, (2) after a table (grid) of entries is completed, (3) after a complete screen of entries, or (4) at the very end of a survey, when the respondent submits the data. Each option imposes different demands on the respondent.
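The hard/soft distinction can be made concrete with a small sketch: a hard edit returns a failure the respondent must resolve, while a soft edit returns a warning the respondent may dismiss. The following Python sketch is illustrative only; the field names, the 50-percent threshold, and the messages are hypothetical, not actual survey rules.

```python
# Sketch of the hard/soft edit distinction. Field names, thresholds,
# and messages are hypothetical examples.

def run_edits(record):
    """Return (hard_failures, soft_warnings) for one submission."""
    hard, soft = [], []

    # Hard edit: employment must be a non-negative whole number,
    # or the respondent cannot continue.
    emp = record.get("employment")
    if not isinstance(emp, int) or emp < 0:
        hard.append("Employment must be a whole number of 0 or more.")

    # Soft edit: a large month-to-month swing is flagged, but the
    # respondent may confirm the value and move on.
    prev = record.get("previous_employment")
    if isinstance(emp, int) and isinstance(prev, int) and prev > 0:
        if abs(emp - prev) / prev > 0.5:
            soft.append("Employment changed by more than 50% since "
                        "last month. Please confirm or correct.")
    return hard, soft

hard, soft = run_edits({"employment": 120, "previous_employment": 40})
print(hard)  # no hard failures
print(soft)  # one warning about the large change
```

Note that such a check could be run at any of the four points listed above; the classification into hard and soft is independent of when the check fires.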
Edits can be implemented in several different ways. For example, the edit message could appear in a separate window (pop-up box), as text next to the entry field, or on a separate page. A common problem when edit messages are displayed on the same screen is that respondents may fail to see them, even when different color text is used. When this happens, respondents think they either failed to click a button properly or that the same screen has redisplayed in error, so they simply click Continue again. In general, it is usually better to let respondents know about problems or potential problems as soon as possible. However, some edits can only be run when respondents indicate that they are finished, such as checks for consistent data across multiple entries.
Because there is a lack of research that addresses the general issues of how and when to use survey edits, there is no ideal solution at this time. However, some general guidelines may be helpful. For example, to be useful, edits must be noticed, read, understood, and then acted upon. Moreover, they cannot be overly burdensome. With these common-sense goals in mind, we offer the following general design guidelines:
• Take steps to ensure that edit messages are noticed (e.g., through good screen design).
• Use plain English (avoid jargon), and keep the explanatory message as brief as possible.
• Give control to users. Allow them to either change the answer or leave it as is, and to move on when ready.
• Consider offering a comment box, so the respondent can explain the entry.
• Err on the side of introducing too few edits into the initial Web survey. Study the resulting data and then gradually introduce edits into future releases to see if data quality issues are addressed.
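As one way of applying these guidelines to a soft edit, the resolution step can leave control with the respondent: they may change the entry or keep it as is, and may attach an explanatory comment either way. The following Python sketch is hypothetical; the function and field names are illustrative only.

```python
# Sketch of resolving a soft edit under the guidelines above:
# the respondent stays in control, and a comment box lets them
# explain an unusual entry. Names are hypothetical examples.

def resolve_soft_edit(value, message, respondent_action,
                      new_value=None, comment=""):
    """respondent_action is 'change' or 'keep'."""
    if respondent_action == "change":
        return {"value": new_value, "comment": comment, "flag": None}
    # Keeping the value records the edit message and any explanation
    # for analysts, instead of blocking the respondent.
    return {"value": value, "comment": comment, "flag": message}

result = resolve_soft_edit(
    value=120,
    message="Employment changed by more than 50%.",
    respondent_action="keep",
    comment="We acquired another firm this month.",
)
print(result["value"])  # 120 -- the respondent kept the entry
```

Recording the dismissed warning and comment, rather than forcing a change, supports the last guideline above: analysts can study the flagged data later and decide whether a stricter edit is actually warranted.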
³ Personal communication with Richard Rosen, Program Manager for the Current Employment Statistics program.