6

Many practice evaluation efforts rely on the collection of quantitative survey data. Surveys can be administered by telephone, on a computer, or on paper. I find that paper is most common in agency settings, followed by computer-based approaches. You can read more about computer-based approaches in Leslie, Holosko and Dunlop (2006). However, it is important to remember that not everyone has access to a computer or smartphone, and those choices can raise data security concerns.

In this era of publicly available tools such as the Survey Monkey website and the Google Forms program, among others, the biggest mistake my MSW students make is thinking they can just “whip off” a survey and start collecting data to evaluate their practice. Unfortunately, this quickfire approach often leads to the collection of faulty data, the wrong data, or data that cannot be used.

The process of survey research has seven steps:

  • Establish goals and objectives and their corresponding study aims, research questions, hypotheses, and measures
  • Determine who will be surveyed
  • Choose your survey methodology
  • Create your survey
  • Pre-test or pilot test the survey and fix mistakes
  • Gather survey data
  • Analyze and interpret data for your report

Step one: When planning a quantitative survey for the purpose of practice evaluation, start with clear goals and objectives for the intervention. This is as close to a recipe for evaluation as you are going to get, and putting it into a matrix format helps a lot. Goals give rise to study aims that guide the evaluation. Under each goal sit one or more measurable objectives. Each objective then becomes a research question under its study aim – the statement is simply recast as a question. Next come plans for the process and/or outcome measures to be collected within those study aims and research questions. Remember, process and outcome measures need to be valid and reliable; ideally, they will be standardized measures, or behavioral measures if necessary. Hypotheses are also stated at this phase. Here is an example:

Clinical goal:
  CG1: To lessen substance use in the study population

Measurable objectives:
  CO1a: To reduce clients’ substance use-related problems score
  CO1b: To reduce total days of binge drinking each month

Study aim:
  SA1: To lessen substance use in the study population

Research questions:
  RQ1a: Will the intervention reduce clients’ substance use-related problems scores?
  RQ1b: Will the intervention reduce total days of binge drinking each month?

Hypotheses:
  H1a: The intervention will reduce clients’ substance use-related problems scores.
  H1b: The intervention will reduce total days of binge drinking each month.
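For readers who keep their evaluation plan electronically, the goal-to-hypothesis chain above can be sketched as a simple data structure. This is only an illustration using the example labels above; the structure itself is my own, not a required format:

```python
# A sketch of the goal-to-hypothesis matrix as a Python dictionary.
# The labels (CG1, CO1a, ...) come from the example above; the structure
# itself is hypothetical - use whatever format suits your agency.
evaluation_plan = {
    "CG1": {
        "goal": "To lessen substance use in the study population",
        "objectives": {
            "CO1a": "To reduce clients' substance use-related problems score",
            "CO1b": "To reduce total days of binge drinking each month",
        },
        "study_aim": "SA1: To lessen substance use in the study population",
        "research_questions": {
            "RQ1a": "Will the intervention reduce clients' substance use-related problems scores?",
            "RQ1b": "Will the intervention reduce total days of binge drinking each month?",
        },
        "hypotheses": {
            "H1a": "The intervention will reduce clients' substance use-related problems scores.",
            "H1b": "The intervention will reduce total days of binge drinking each month.",
        },
    },
}

# Sanity check: every objective should pair with exactly one research
# question and one hypothesis.
plan = evaluation_plan["CG1"]
assert len(plan["objectives"]) == len(plan["research_questions"]) == len(plan["hypotheses"])
```

Writing the matrix out this way makes it easy to spot an objective that never acquired a research question or hypothesis.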

Using your research questions, identify your process and/or outcome measures. You may also need to collect data on other measures, such as demographic or administrative variables; note all of these. Throughout, focus on “staying on the highway” and resist adding “nice to know” questions. You want to limit the time people spend on a survey so that you get answers to the key questions you are seeking to answer.

In common parlance, one might say a survey designer should follow the K.I.S.S. principle – keep it simple, stupid! To KISS your survey, ask yourself, “What will I do with the answers to this question?” If you have no satisfactory answer, leave the question out. Resist the temptation to add a few more questions just because you can! One technique you might try at this stage is to sort your intended questions into three groups. First, determine which questions are must-know. Second, decide which are useful to know. Third, identify the questions that are merely nice to know – and then discard them!

Note: If you are interested in conducting a general client satisfaction survey for accountability purposes (satisfaction is not a measure of quality or outcomes), there is no need to re-create the wheel: a freely available option that follows best practices in client satisfaction survey design already exists. See, for example, the CSQ-8, which has both child and adult versions (Royse, Thyer & Padgett, 2016). Keep in mind that client satisfaction surveys are almost always positive, for a number of reasons (Royse, Thyer & Padgett, 2016):

  • Satisfaction is not a direct measure of whether needs were met
  • Evaluations of service can be influenced by the reputation of the agency
  • Self-selection may mean higher-functioning people participate
  • Clients may not know what quality is
  • Non-neutral settings bias responses
  • Clients give more negative answers to unrelated interviewers than to clinical staff or administrators
  • Clients who have invested a lot in the process may rate it more favorably despite outcomes
  • Low response rates reduce generalizability
  • Clients who return mailed surveys have higher levels of education

Step two: Determine whether all of your clients will be surveyed, or just a sample of the group.

Step three: Once the plan for what is to be collected is set, you should begin to think about the best method for collecting information from your client population. Some situations will be best-suited to a telephone interview, whereas others necessitate pen and paper, a handheld device or email.

Are you asking people about things they would not want to discuss in public? (Think about practice evaluation related to amount of substance use, criminal behavior or sex therapy, for example.) In those situations, email or a privately completed paper survey might be ideal. Literacy – computer or otherwise – is always an important consideration at this stage.

After thinking carefully about whom you will be surveying on your topic, your first choice is whether to administer the survey or have people fill it out themselves. An administered survey is one in which the evaluator asks the client the questions and writes down the client’s answers; this can be done by phone or in person. If an administered survey is not ideal, surveys can be conducted on paper, via handheld device, or by email.

Step four: Based on your choice of methodology, discussed in the last step, you will begin translating your process and outcome measures into survey questions. Sometimes you will have chosen standardized measures, so you will not need to write survey questions; sometimes you will have a mix of standardized and unstandardized questions. Remember, when using standardized questions, you cannot change the wording – doing so could alter the validity and reliability of the concept you are seeking to measure.

In addition to starting with an easy-to-answer question, you are going to want to vet each question for clarity.

Are you speaking in social worker speak, or regular person speak? Do your questions contain technical terms, professional jargon or acronyms? If so, will these be understood by all? Other question development tips include using neutral language and addressing the present before the past or the future. Avoiding “double-barreled” questions is also vital, so that you are only asking one question at a time. Go through your questions with these ideas in mind and fix them up as best you can. (Insert callout here: For more guidance on quantitative survey development, you can read Fanning’s 2005 article or visit Duke University’s guide: https://dism.ssri.duke.edu/survey-help/tipsheets/tipsheet-question-wording)

When we think about wording, we need to start by thinking about how we are telling our survey participants to fill out the survey. This means providing instructions for how to answer each question – be it answering by circling a word, writing words or filling in a box.

This also means you need to think about your answer options, or answer structures. There are four basic approaches to answer options in survey design: multiple choice, a rating or agreement scale, numeric open-ended, or text-based open-ended.
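To make the four answer structures concrete, here is a minimal sketch in Python. The function name and the specific rules (e.g., a 1-5 agreement scale) are my own illustrative assumptions, not part of any survey tool:

```python
# Hypothetical checker for the four basic answer structures:
# multiple choice, rating scale, numeric open-ended, text open-ended.
def validate(answer, structure, options=None):
    """Return True if the answer fits the given answer structure."""
    if structure == "multiple_choice":
        return answer in options                        # must be a listed choice
    if structure == "rating_scale":
        return answer in range(1, 6)                    # e.g. a 1-5 agreement scale
    if structure == "numeric_open":
        return isinstance(answer, (int, float)) and answer >= 0
    if structure == "text_open":
        return isinstance(answer, str) and answer.strip() != ""
    raise ValueError(f"Unknown answer structure: {structure}")

# One example per structure:
print(validate("Yes", "multiple_choice", options=["Yes", "No"]))  # True
print(validate(4, "rating_scale"))                                # True
print(validate(12, "numeric_open"))                               # e.g. days of binge drinking
print(validate("", "text_open"))                                  # False: blank text answer
```

A check like this is most useful when cleaning data keyed in from paper surveys, where out-of-range answers are common.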

After creating your survey, you will also need to think about layout. Layout, it turns out, is key to getting people to finish a survey and to avoiding missing data. If you are using a website such as Survey Monkey or Google Forms (Add callout about Google Forms owning your data), you will be navigating a fixed set of options, so getting familiar with the different question structures these websites offer will be important. If you are going with a web-based option, remember that what you prepare will likely look different on smartphones, tablets and laptops.

Regardless of whether you choose the web or paper, open space – or as graphic designers call it, ‘white space’ – is vital. This means it is not advisable to cram a lot in with a small font. Consider dividing long surveys into sections on the same topics. Avoid drop-down menus on web surveys. Make open-ended answer boxes smaller rather than larger. Be sure to enable the progress bar on web surveys, so people know how far along they are. And think carefully before requiring an answer to an online question, as this can turn survey participants off.

There is a whole science to survey design, with one research team even studying the impact of stamp placement on an envelope as it related to survey return – you can’t make this stuff up! (Insert callout here with bibliography on survey design research).

Step five: Once your survey masterpiece is complete, you will need to pilot or pre-test it with people from your target population (ideally not people who will participate in the actual practice evaluation). Inevitably, you are going to get feedback about unclear questions, odd question ordering, confusing graphics and the like. Take this information in stride; it will only help you perfect your survey for when you go live with your clients.

Step six: At this point, you are ready to gather your survey data, but this entails more than just handing out surveys. Remember those survey design researchers? They have determined that the best way to get responses involves notifying people that a survey is coming, then sending out the survey, and following up with a reminder that the survey was sent. This can be done verbally, by email or by letter.

Each bit of outreach to your clients should be personalized; beware the form letter. Think of this as another form of client engagement. Here, client engagement is a means to an end: obtaining the data you need to complete your practice evaluation.

Step seven: Soon, your data will start to come in. If you are using a website, your material will be stored online. When you are done collecting data, you will be able to download your data for analysis or analyze your data through the website. If you are using paper surveys, as your survey responses start to roll in, be sure to number each one with a unique identifier and enter your data into a spreadsheet.

Your spreadsheet will be structured as follows. Down the first column, you will list the unique identifier of each survey. Along the top row, starting with the second column, you will list each question you asked, one question per column. As you go through data entry, you will enter each client’s answers in the matching columns of that client’s row. Words should be converted to numeric codes; often, a ‘no’ answer is coded as 0 and a ‘yes’ answer is coded as 1. See figure 6.1 for an example of what such a spreadsheet looks like.

Client # | Question 1 answer | Question 2 answer | Question 3 answer | Question 4 answer
1        |                   |                   |                   |
2        |                   |                   |                   |
3        |                   |                   |                   |
4        |                   |                   |                   |

Figure 6.1: Example of data entry spreadsheet
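If you are comfortable with a little scripting, the coding step can even be automated. Here is a minimal sketch (with made-up client answers) that converts ‘no’/‘yes’ to 0/1 and writes rows in the Figure 6.1 layout:

```python
import csv
import io

# Hypothetical raw answers keyed by each survey's unique identifier.
responses = {
    1: ["yes", "no", "yes", "no"],
    2: ["no", "no", "yes", "yes"],
}

# Numeric coding: 'no' becomes 0, 'yes' becomes 1.
CODES = {"no": 0, "yes": 1}

# Write a spreadsheet-style layout: identifiers down the first column,
# one question per column, as in Figure 6.1.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["Client #", "Question 1", "Question 2", "Question 3", "Question 4"])
for client_id, answers in responses.items():
    writer.writerow([client_id] + [CODES[a] for a in answers])

print(buffer.getvalue())
```

In practice you would write to a real .csv file (or paste into your spreadsheet program), but the coding logic is the same.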

At this point, you will be ready to analyze your survey results and interpret your findings in order to write up your report. We will talk more about this part of the process later on in this primer.

Discussion questions for chapter 6

  • Name three best practices in quantitative survey design.
  • In creating your survey, what are the two major areas of concern?
  • Explain why outcome measures need to be valid and reliable.
  • What is the best way to approach data collection with a survey?

References for chapter 6

Fanning, E. (2005). Formatting a paper-based survey questionnaire: Best practices. Practical Assessment, Research & Evaluation, 10(12), 1-14.

Leslie, D., Holosko, M. & Dunlop, J. (2006). Using information technology in planning program evaluation. Journal of Evidence-Based Social Work, 3(3/4), 73-90.