Testing the Questionnaire

Just because your questionnaire compiles does not mean you are ready to start interviewing. There are several things you should do to ensure a good questionnaire.

Testing Checklist

Check your specs

Look back over your specs. Recheck your logic and look for typos and inconsistencies. It is easier to find problems on a printed listing than one screenful at a time on your terminal, and you can check off items as you verify them. If you are working in column-free mode, hardcode the data locations. If you are using TEXT questions, make sure you have defined a TEXT_START on your header statement, and that there is a buffer of blank columns between the last non-TEXT column used and the TEXT area.


If your spec doesn’t compile successfully, you have an obvious place to devote your efforts. But even if you have a good compile there are still things to check. Look at all the messages and warnings that PREPARE produces. If you don’t understand one, spend time on it until you do. Whenever PREPARE prints a warning, it is really saying “I hope you know what you’re doing” or “You really should have done this another way, but I’ll help you out”.

Screen check

Run Survent and look at the screens. Are they readable? Are they consistent? Are skips flowing correctly? Is there any text that doesn’t stay on the screen long enough for the interviewer to read it?

Data column usage

You also need to think about whether the data is correct. One consideration is whether you’ve used the same data location for more than one question. You can easily find this out by checking the CHK listing.

Random Data Generation

Another concern with the data is whether you’ve written any questions that never get asked because of bad skip patterns. It’s difficult to check every skip pattern yourself. See the next section, Random Data Generation, for a way to help you. You could run HOLE or FREQ on the data file generated and look for blank columns.

Final thoughts

As you can see, there is some overlap in the checking done by the items listed above. You do not have to do all of these steps, nor do they have to be done in the order listed, but you do need to do something that accomplishes the purpose of each item.

Once you are more experienced with the program, you will have a better idea of what steps you need to take. The most important thing to remember is to always test your questionnaire! For best results, have many people test the questionnaire – even after what seem to be small changes. It’s better to be safe than sorry.

Random Data Generation

This option is used by testers to generate responses to questions and make sure that all the necessary skip patterns are being followed, or to produce demonstration data sets or reports. This can be used on any questionnaire. Use the command RDG and its associated options to turn on random data generation.

The syntax for RDG is:

!RDG option,option

Here are the options you can choose from. They can be specified in any order and are separated by a comma or space(s). If no options are specified, you will be prompted for what you want, with the prompt showing the default in parentheses.

Option            Description
SHOWANSWERSONLY   Displays answers only, with no question text.
DEMO              Waits for a keystroke after each answer is shown.
NCASES=#          The number of cases to initiate.
PAUSE=#           The delay after each answer, specified in seconds.
SEED=#            Allows you to reproduce the same set of answers at another time.
BACKUP            Allows you to reproduce the same set of answers at another time.
SHOWQUESTIONS     Displays questions and answers on the screen.

NOTE1: You must have debug capability (a DBUG interviewer ID, or D in the EMPLOYEE.XXX file; see Employee Information File for more information) in order to use the RDG option. This prevents interviewers from using the option by mistake.

NOTE2: Specifying RDG HELP at the in-between interview prompt will bring up a small help screen.

Here is a sample procedure:

  • Enter SURVENT to load the program.
  • At the Return to interview prompt, enter RDG (plus any other options you want to set).
  • Press <Enter>.

The interviews will begin and the program will place you at the Survent interview prompt when the interviews are completed. You could then use the utilities to produce reports, and check the counts for proper values, before starting actual interviewing. If you run HOLE, look for blank fields; they may indicate questions that are never asked.

By setting a seed for random data generation you will be able to reproduce the same set of answers in another run.
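The seed behaves like any pseudo-random seed: fixing it fixes the whole sequence of generated answers. This hypothetical Python sketch (an illustration of the concept, not Survent code) shows the idea:

```python
import random

def generate_answers(seed, n=5):
    """Simulate an RDG-style run: pick a response code 1-5 for each of n questions."""
    rng = random.Random(seed)        # fixing the seed fixes the entire sequence
    return [rng.randint(1, 5) for _ in range(n)]

# Two runs with the same seed produce identical "answers"
assert generate_answers(42) == generate_answers(42)
```

This is why re-running with the seed reported after a blow error replays the exact case that failed.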

If a Survent blow error occurs, the data generation will stop, and a separate data file containing the error will be produced in a <study>.B_ subdirectory. A message will also print explaining the error and the question it occurred on. To debug the problem, you can use Survent’s View option on that data file to walk through the case up to the point of the problem, and then fix the problem in the PREPARE specification file. If a blow error occurs during random data generation, you will also see the seed of the current case; after changing your specs, you can test the blown case by starting up again and specifying the seed reported earlier by the program.

The number of interviews you actually get in your data file may be less than the number you asked for if your specs abort (!SPECIAL,ABORT_INTERVIEW) some interviews. Also see USING FUNCTIONS IN CONDITION STATEMENTS, Conditional Functions, for information on the DATAGEN function, which lets you exclude blocks of questions from random generation.

VARIABLE type questions will have the alphabet (and, if enough characters are needed, punctuation characters) entered in the data. If you see the alphabet showing up in your real data, you may have forgotten to purge (or rename) your data file before you started collecting real data. One line of randomly generated text is generated for each TEXT question that is asked; you will see this line in the data file when TEXT question locations are displayed with the DT command in CLEANER or the LIST utility program.

If you want to stop RDG while it is still running, press Ctrl-Y (Ctrl-BREAK in UNIX). You will be prompted as to what you want to do.

Your answer choices are:

  • RDG to continue
  • RDG # to do # more questions in the current interview, then stop and wait for you to tell it what to do next
  • Q to quit

Weighting Responses

This option allows you to weight each response in a FIELD question to control how often the random data generator will choose a particular response. Since a weight must be specified for each item in the response list, this option is recommended for questions with short lists. This option would most likely be used to get the random data generator through the screener portion of the interview.

The syntax for weighting responses is:

!RDG #,#,…,#


  • Each # must be separated by a comma.
  • Each weight must be a number from 0 to 99.
  • The number of weights must be equal to the number of responses.
    • Including any Rotate, Endrotate, Group, and Fix statements
    • Including any = comment lines
    • Give these a weight of 0 as a placeholder
  • Response weights must add up to 100.

NOTE: !RDG can come before or after any !IF condition statement, but must come after the question label line and before the question text.


!IF AGE>18
!RDG 90,9,1
Have you or any member of your family purchased any candy in the last month?
1 Yes, Have purchased
2 No, Have not purchased
9 Refused }

In this example we have weighted the responses so that the Yes response will be chosen approximately 90% of the time and the No response 9% of the time, while Refused will rarely be chosen. In this case a Yes response is needed to continue in the questionnaire.
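The weighting behaves like a standard weighted random draw. As a rough illustration (a Python sketch with the response codes from the candy example, not Survent syntax):

```python
import random

codes   = ["1", "2", "9"]    # Yes, No, Refused
weights = [90, 9, 1]         # one weight per response, summing to 100

rng = random.Random(0)
picks = [rng.choices(codes, weights=weights)[0] for _ in range(10_000)]
share_yes = picks.count("1") / len(picks)    # comes out near 0.90
```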

Skipping Questions

Should you want to skip certain questions when generating random data, you can use the DATAGEN function. Using this function on an IF condition attached to a GOTO question would let you skip around screener questions, quota checks, or other question types that you choose to treat differently. (See USING FUNCTIONS IN IF CONDITION STATEMENTS for more information on the DATAGEN function.)

Multiple Response Questions

The random data generator employs a formula to determine how often it will generate more than one response on a multiple-response FIELD question. You can weight the responses so that one or more are chosen more often when multiple responses are generated.


!RDG 70,10,10,9,1
Which of these credit cards do you own?
1 General Purpose Cards
2 Bank Cards
3 Retail Store Cards
4 Gas/Oil Company Cards
(-)9 None of the above }

In this example, GENERAL PURPOSE CARDS is weighted to be chosen approximately 70% of the time. This means it would account for about 70% of the total responses given to this question. Response NONE OF THE ABOVE would be a rare choice (1% of the time).
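One way to picture weighted multiple-response generation is drawing distinct codes with the weights applied at each draw. This Python sketch is only an illustration of that idea; it is not Survent’s actual formula:

```python
import random

def multi_response_fill(codes, weights, n_picks, rng):
    """Draw n_picks distinct response codes, honoring the relative weights."""
    codes, weights = list(codes), list(weights)
    picked = []
    for _ in range(n_picks):
        choice = rng.choices(codes, weights=weights)[0]
        i = codes.index(choice)
        picked.append(choice)
        del codes[i], weights[i]    # no duplicate responses within one question
    return picked

answers = multi_response_fill([1, 2, 3, 4, 9], [70, 10, 10, 9, 1], 2, random.Random(4))
assert len(set(answers)) == 2    # two distinct responses were generated
```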

StartRDG and RDGDelay

The commands “STARTRDG” and “RDGDELAY” can be used to start “background” interviewers in Random Data Generation mode at different speeds on different studies. This enables you to do full “load” testing on your system without having to start up individual stations.

To use these commands, first specify “RDGDELAY #” where # is the number of seconds between each question. The default is 20 seconds. Then specify “STARTRDG – ” to start interviewers; note that this uses the same syntax as the “start” command.

Smart Random Data Generation

SMART_RDG commands are an extension of the Survent RDG (Random Data Generation) function. They allow you to check answers across multiple questions and force them to have particular responses (if random filling of the questions does not return the value you need) so the RDG run can continue.

SMART_RDG is a compiler directive that can be used outside of a question. RDG mode previously allowed no control of responses across multiple questions, only limits for a particular question. For example, if you were trying to continue only when the sum of a set of questions was 100 (e.g., percentages that must add to 100), RDG could seldom generate data whose values added to 100. This can lead to an infinite loop during the RDG run and cause the run to fail.

SMART_RDG commands first attempt random data entry for the number of tries specified, then fills the value(s) based on your criteria.
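The try-then-force idea can be sketched as follows (hypothetical Python, not Survent internals): attempt random fills for the allotted tries, then construct values that satisfy the criterion.

```python
import random

def constant_sum_fill(n_questions, target=100, tries=10, seed=1):
    """Random attempts first; if none hits the target sum, force values that do."""
    rng = random.Random(seed)
    for _ in range(tries):
        values = [rng.randint(0, target) for _ in range(n_questions)]
        if sum(values) == target:
            return values                       # a random attempt succeeded
    # Force the criterion: cut the range [0, target] at random points
    cuts = sorted(rng.randint(0, target) for _ in range(n_questions - 1))
    bounds = [0] + cuts + [target]
    return [bounds[i + 1] - bounds[i] for i in range(n_questions)]

assert sum(constant_sum_fill(3)) == 100    # always satisfies the constant-sum check
```

Without the forcing step, a handful of random values in 0–100 would sum to exactly 100 only rarely, which is the infinite-loop risk described above.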

WebSurvent and WebCATI will not execute SMART_RDG statements. So, even if you are programming a web survey, you must use terminal mode Survent when testing using RDG and SMART_RDG. This can be done using “netsurv” to execute the interview (Netsurv is also covered in this chapter).

Programming Issues

SMART_RDG commands require RESET statements to be used for backend checks instead of GOTO statements, since GOTO statements do not clear previous data, a requirement for SMART_RDG. However, most programmers with web surveys use the GOTO statement for backend checks, as it is more user-friendly for respondents (previous answers are maintained).

To accommodate both SMART_RDG and a user-friendly program design, you can use a couple of techniques. The first involves using conditional DATAGEN() functions so that the RESET will be executed in RDG mode and the GOTO will be executed in standard interviewing mode.


''An RDG interview
{!IF DATAGEN() AND (X(Q1A)+X(Q1B)+X(Q1C)<>100)

''A live interview
{!IF NOT(DATAGEN()) AND (X(Q1A)+X(Q1B)+X(Q1C)<>100)

The second technique below uses an >IF_DEFINE statement (See META Commands, Define Meta Commands for more information) to choose between the GOTO and RESET at compile time:

{!IF (X(Q1A)+X(Q1B)+X(Q1C)<>100)

NOTE: In this case you need a command like >DEFINE @RDG_RUN at the top of the questionnaire whenever you compile a version using RDG, and would have to comment that statement out (''>DEFINE @RDG_RUN) and re-compile for live interviewing.

!SMART_RDG starts a SMART_RDG block and is in effect until another !SMART_RDG statement is encountered.

The syntax of a SMART_RDG statement is:

!SMART_RDG Keyword(<Value>,Question List) <TRIES=##>
!SMART_RDG Keyword(Question List)=Value <TRIES=##>


FREE

This disables or resets previous SMART_RDG statements. It is recommended that you use SMART_RDG FREE after each SMART_RDG function has finished.


Keyword

Each keyword is specific to a type of check commonly used in Survent. For example, a check that requires multiple numeric (!NUMERIC) questions to equal a specific number would require the CONSTANT_SUM() keyword.

Question List

This is a list of question labels which will be used with the assigned keyword in order to meet the necessary conditions. There is a 50 label limit, but asterisks (*) can be used as wildcards for questions with like-named labels.


{!SMART_RDG Keyword(QLBL1,QLBL2,QLBL3) }
{!SMART_RDG Keyword(QLBL*) }

The first example would use labels QLBL1, QLBL2, QLBL3. The second example would use all labels that begin with QLBL.


TRIES

This is the number of random attempts to make before the SMART_RDG statement sets the value as specified.



For example:

{!SMART_RDG CONSTANT_SUM(QL0*)=100 TRIES=10 }

In this example, SMART_RDG will enter random data in the QL0 series of questions ten times. After the tenth attempt, the CONSTANT_SUM keyword will be implemented, and the data generator will answer the QL0 series so that the questions in the QUESTION LIST add to 100.

If TRIES is not specified, SMART_RDG immediately generates the data to match the keyword criteria.

Keyword Functionality


CONSTANT_SUM

This keyword checks that a group of numeric (!NUMERIC) questions adds up to a certain value.

The syntax for the CONSTANT_SUM functionality is:

SMART_RDG CONSTANT_SUM(Question List)=<Label or ##> TRIES=##




After the specified number of TRIES using random filling of the questions, it will force the questions in the list to add to the value assigned. Notice that the value to assign can be either a specific number or the value of a previous label.


RANK

This keyword deals with checks that require a unique rank or code across all available FIELD questions; duplicate values are treated as an error in this case.

The syntax for the RANK functionality is:

SMART_RDG RANK(Question List) TRIES=##



This tries to get unique values randomly for the questions in the list. Then, after the specified number of TRIES, it supplies a unique set of values.
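The same try-then-force pattern applies to unique ranks. A hypothetical Python sketch (an illustration only, not Survent internals):

```python
import random

def rank_fill(n_questions, tries=10, seed=2):
    """Try random rank codes; if duplicates persist, force a unique permutation."""
    rng = random.Random(seed)
    codes = list(range(1, n_questions + 1))
    for _ in range(tries):
        attempt = [rng.choice(codes) for _ in range(n_questions)]
        if len(set(attempt)) == n_questions:
            return attempt                    # random attempt was already unique
    return rng.sample(codes, n_questions)     # forced unique set of values

assert sorted(rank_fill(3)) == [1, 2, 3]      # every question gets a distinct rank
```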


RANKTOP

This keyword deals with checks that require a unique rank or code for some, but not all, of the FIELD questions. It is useful for rank questions with a limited number of ranks out of a set (top 3, bottom 3, etc.).

The syntax for the RANKTOP functionality is:

SMART_RDG RANKTOP(<# to rank>,<Question List>) TRIES=##



This will randomly insert unique responses between 1 and the # TO RANK across the NUMERIC or FIELD questions in the list, leaving the others blank. Questions or grids must use the B subtype to allow for blanks in the data.
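Conceptually, RANKTOP ranks a random subset of the questions and leaves the rest blank. A hypothetical Python sketch (an illustration only, not Survent internals):

```python
import random

def ranktop_fill(n_to_rank, question_labels, seed=3):
    """Give ranks 1..n_to_rank to a random subset of questions; leave others blank."""
    rng = random.Random(seed)
    ranked = rng.sample(question_labels, n_to_rank)          # which questions get a rank
    order = rng.sample(range(1, n_to_rank + 1), n_to_rank)   # the ranks, shuffled
    answers = {q: None for q in question_labels}             # None = blank (B subtype)
    for q, r in zip(ranked, order):
        answers[q] = r
    return answers

answers = ranktop_fill(3, ["Q1", "Q2", "Q3", "Q4", "Q5"])
assert sorted(r for r in answers.values() if r is not None) == [1, 2, 3]
```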


RANK_WITH_DONT_KNOW

This keyword will force unique rankings for each question, but also allows for a non-unique exception code. This function assumes the last response item in the response table is the exception, unless Don’t Know Other Response (DKOR) is specified.

The syntax for the RANK_WITH_DONT_KNOW functionality is:





This supplies a unique rank for each question in the list, and allows one non-unique exception code for the last response item in the FIELD.


ONLY_ANSWER

This keyword will force RDG to enter a specific answer for all specified questions.

The syntax for the ONLY_ANSWER functionality is:

SMART_RDG ONLY_ANSWER(Answer,Question List) TRIES=##



The answer can be a number or a response code.


OTHER

This keyword will always meet the Other Specify criteria in a FIELD question. When used, this will always select the other response code as an answer choice, so ‘checktext’ backends are always met.

The syntax for the OTHER functionality is:




Since RDG will always fill a VARIABLE or TEXT question with data unless it has a condition, this function will also always select the other response code, so back-end checks that enforce the FIELD to VARIABLE/TEXT relationship are met.

Tracing Problems

Using the Tracing Dump Switches

To trace interview data or track test conditions, use “DUMP:n#” on the Survent command line, or “>DUMP n#” inside Survent if you are in Debug mode. This will enable you to use the following interview tracing features:

  • n1 – The label and number of the next question is shown and it tells you whether it will execute the next question based on the condition. This allows you to check skip patterns.
  • n2 – This halts the display so you can see dump information (usually unwanted during random data generation, when you are hunting bugs).
  • n7 – This prompts you for a set of columns to display, then shows you the data for those columns over and over until you turn it off or pick another set of columns to display.
  • n8 – This shows the data for each question including its data position and width before the question is asked.

Use >DUMP n to get all tracing features turned on, and >DUMP -n to turn them off.

Logging Interviewer Responses

Sometimes when tracing problems, it is useful to get an ASCII log of all the interviewer commands.

Use the LOG command at the <Return to Interview> prompt in Survent to do this.

In some cases you need to save the logged responses immediately after each question. This is particularly true if the program is blowing up and you don’t know why. In this case, you may use the LOGDEBUG command, which tells Survent to save the log file after every question.

When the questionnaire blows up, you can read the log file to see what may have caused the problem. The file is saved as LOG<intv_id> in the CFMC/INTVR_LOGS directory.

Note that when a questionnaire gets a BLOW error, it automatically logs that interview to the interviewer’s LOG file, even if logging was not turned on. This is one place you can always go to see what responses were entered and try to fix the problem.

Fixing Blow Errors and Viewing Blow Files

Sometimes your questionnaire design is incorrect in a way that causes certain responses to produce a BLOW error, which aborts the interview and saves the data in an alternate data file in the BLOW file directory. The program will note the ERROR #, a text message, the question where the error occurred, and the name of the file it saved.

To get more information on what an error means, see Blow File Errors. The most typical blow error is ERROR #108, which occurs when FIELD,USE_PREVIOUS_ANSWER subtypes are executed before their data has been properly filled in.

It is useful to review the responses to see what may have caused the BLOW error. You can do this by VIEWing the BLOW file. How to do this is slightly different on each system:

  • DOS – Specify the name of the data file to view on the configuration screen; you must be in the blow file directory for the study (usually \cfmc\data\<study>.b_).
  • UNIX – Specify <study>,BLOW=<filename> at the “Type questionnaire filename” prompt. The program will find the file in the blow directory for the study no matter where you are logged in if the CFMCDATA variable is set.

You will then be placed in VIEW mode on that file. Walk through the questionnaire to the place it blew up. If you’d like, you can turn on the >DUMP n switches to see the results of conditions leading to the BLOW to help trace the problem.