Web surveys are widely used in marketing research. One feature of such surveys is "forced answering," which requires respondents to enter an "appropriate" response before they are allowed to proceed to the next survey question. However, some survey researchers warn against forced answering because it could increase nonresponse error. These researchers suggest one way around this is to provide a "prefer not to answer" (PNA) option, which allows respondents to continue without providing a substantive response to each question. This study examines the effects of using forced answering and PNA in Web surveys through a field experiment conducted with a general U.S. population. The topics studied are generally perceived to be relatively safe or innocuous. We find no evidence that forced answering lowers completion rates, whether or not PNA is offered. Findings suggest that offering PNA evokes a trade-off between the quantity and the quality of information collected.