Chapter 1

What constitutes social work practice differs greatly across the spectrum from micro to mezzo to macro. One thing, however, is the same across all of these settings: after conducting a careful assessment, all social workers should structure their intervention plans by co-creating strengths-based goals and measurable objectives with their clients.

Goals and objectives belong together, but they are not the same thing. Think of goals as a broad “wish” area, whereas objectives are trackable elements that make up the larger goal. For a mental health social worker, a common client goal might be “improvement of mental health symptoms,” whereas objectives might include level of anxiety or frequency of irritability. Taken together, information on the objectives builds up to represent the concept of the goal.

As part of this targeting or goal-setting process, we move from a vague idea to an operational, or working, definition that becomes a goal. From there, we break the goal down into measurable objectives. As we work through this process with our client, we want to lead with what the client presents as most important in the moment, but we also want to think about the area the client has the greatest likelihood of changing. Further, the goal should be relatively specific, with resources readily available that allow you to work on it together. You also want to think about the goals and objectives that have the greatest chance of producing negative consequences if left unaddressed.

Central to the process of evidence-based practice is understanding whether client goals and objectives are achieved over time. This is in no small part due to the productivity-oriented funding environment we practice in today. Increasingly, for example, social workers are required to document whether their clients’ goals and objectives are met in order to continue receiving payment for their services.

But beyond that market-driven need for data tracking, observing data over time in a structured and organized way can be extremely useful to a social worker in client-related decision making. Remember, unless you can measure a problem, you cannot effectively evaluate how well you are resolving it!

Now, many social workers have a hard time distinguishing goals from objectives, so let’s spend some time on that difference using a micro, clinical example. Goals can be thought of as a broad wish, such as the desire to reduce depression. In order to accomplish the goal of helping a client reduce their depressive symptoms, the goal must be broken down into measurable objectives that together make up the concept of depression.

Objectives are very tangible, measurable pieces of larger wishes. For example, an individual objective underneath the wish to reduce depression might be to increase involvement with friends. If we track involvement with friends alone, though, this does not capture the full concept of depression. Therefore, we would also want to collect information on a client’s level of anxiety, mood state or presentation of suicidality. In measuring these concepts as objectives, you would want to make sure that your measures were valid and reliable. To do so, you would search for existing, standardized (scientifically tested) measures wherever possible and, only as a last resort, use unstandardized approaches or behavioral measures. See Figure 1.1.

Figure 1.1  A goal and its objectives

In explaining goals and objectives, let us not rely only on a micro, or clinical, example involving one client; let’s also think about a larger client system. Let’s say your home city wants to improve citizens’ feelings of community pride. Community pride would be the city’s goal. Now, let’s say the city hired you to implement a plan to achieve this goal. You would start by working with all stakeholders related to the project in order to make sure everyone is on the same page with the goal and its measurable objectives.

Figure 1.2 A clinical goal and its objectives

As part of this process, you would want to break down the concept of community pride by establishing a set of measurable objectives that were acceptable to your stakeholders. For example, you might want to gather information on whether people intended to continue their residence in the community and whether they felt pride in the local park. Just as in the micro practice example, you would search for valid and reliable measures of those concepts.

For a measure to be valid, it must accurately measure the concept in question. Evaluators refer to this as construct validity, or the idea that a given measure is an accurate representation of the concept being measured. Yes, this sounds like awfully “researchy” language, but it is just a term we use to identify a particular type of problem with a measure.

For an example of a problem with construct validity, recall that the famous sociologist Max Weber defined socioeconomic status as a composite of information about education, occupation and income. If we proposed to measure socioeconomic status through a measure of income alone, we would have a threat to construct validity. Generally, using existing, standardized measures created by clinical evaluators and researchers gives you a better chance at achieving construct validity.

Additionally, for a measure to be valid, it must include all aspects of the concept in question. Evaluators refer to this as content validity, or the idea that a given measure, through its parts, accurately represents the concept being measured. For example, if we said we would measure socioeconomic status as income alone, without considering occupation and education data, that would represent a threat to content validity because not all the parts, or aspects, of the concept would be part of the measurement.

Using a standardized measure also allows you to better compare what you are doing with clients to what others have done with clients in different settings. Although many standardized measures are available through academic databases that social workers will not always have access to after graduate school, there is also a range of publicly available options.

Now, let’s not forget about reliability. For a measure to be reliable, it needs to be consistent across measurement attempts (ideally, it will also be easy to collect). The key word to remember about reliability is consistency. As long as reliability testing looks good, you are good to go. One standard indicator of reliability is the score from the statistical test known as Cronbach’s alpha. Don’t be afraid of the technical-sounding language. If the result of this statistical test falls within the range of 0.7 to 0.9, this indicates that a measure is reliable.
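If you are curious where that number comes from, the following is a minimal sketch in Python (assuming the numpy package and a hypothetical three-item anxiety scale, not any particular published measure) of how Cronbach’s alpha can be calculated from clients’ item-level scores. The statistic compares the variance of each individual item to the variance of clients’ total scores.

    import numpy as np

    def cronbach_alpha(item_scores):
        """Estimate Cronbach's alpha from a respondents-by-items score matrix."""
        scores = np.asarray(item_scores, dtype=float)     # rows = clients, columns = scale items
        k = scores.shape[1]                               # number of items on the scale
        item_variances = scores.var(axis=0, ddof=1)       # variance of each item across clients
        total_variance = scores.sum(axis=1).var(ddof=1)   # variance of clients' total scores
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical data: five clients' responses to a three-item anxiety scale
    responses = [
        [2, 3, 3],
        [4, 4, 5],
        [1, 2, 2],
        [3, 3, 4],
        [5, 4, 5],
    ]
    print(round(cronbach_alpha(responses), 2))  # prints the alpha estimate for this sample

In everyday practice, the developers of a standardized measure report this statistic for you, so you will rarely calculate it yourself; the point is simply that the number summarizes how consistently a measure’s items hang together.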

So, now that we’ve addressed the validity and reliability of measures used in evaluation, let’s get back to thinking about goals and objectives. We’ve gone through a micro and a macro example of goal and objective setting, but we also want you to know that there’s more to practice evaluation than a focus on the client or client system. Practice evaluation also involves consideration of how the social worker is doing at implementing (putting into place) the intervention in question. Implementation information is useful for professional growth and can be discussed in supervision in order to build skills.

In this scenario, the goal is the ideal delivery of an intervention technique, which will ideally produce an effective result for the client. The objectives concern whether the social worker uses each part of the intervention appropriately. Let’s think some more about this.

When learning a clinical intervention technique, a social worker might go to a lecture or read a book to learn the approach, right? But what happens when the social worker goes to implement the new technique in real life? Are they doing it correctly some of the time, all of the time or none of the time?

The goal here is referred to as treatment fidelity: whether the social worker’s practice has fidelity to the intervention’s approach, theory, method and procedures. Treatment fidelity is a critical component of successful program implementation. Two key strategies for fostering it are one-time and ongoing training, along with ongoing mentoring and supervision. With treatment fidelity come better client outcomes. Practice evaluation can help a practitioner engage in continuous reflectivity and reflexivity toward growth and learning in the ideal use of the intervention technique.

Interventions to influence the behavior of social work professionals in order to foster treatment fidelity:

  • Dissemination of educational materials
  • Educational outreach visits
  • Use of local opinion leaders
  • Audit and feedback techniques
  • Use of computers
  • Mass media campaigns
  • Continuous quality improvement (CQI) programs

(adapted from Gira, Kessler & Poertner, 2004)

Let’s say a child welfare social worker is involved in supporting a family’s efforts to better parent their children through the use of the clinical technique known as motivational interviewing. Motivational interviewing is a specific counseling technique that has a theory and specific parts. Our child welfare social worker needs to know if she is applying the theory and using the parts according to the original evidence-supported model. She invites her supervisor to her next home visit with the family in question and asks the supervisor to fill out a questionnaire about her practice during the visit. This treatment fidelity evaluation can be used to inform practice, namely to identify areas in need of improvement and to reinforce areas of strength in using motivational interviewing. The Behaviour Change Counseling Index (BECCI) is a tool that a social work clinician and/or their supervisor can use to determine how faithfully the technique is being delivered. The BECCI manual is located at: https://motivationalinterviewing.org/becci-manual

In summary, all social work practice requires the use of goals and objectives in order to adhere to our ethical responsibility to evaluate practice. We start with broadly defined goals and move to a set of specific, measurable objectives that together make up the spirit of the goal. Goals and objectives are used not only in micro and macro practice, but also in the practitioner’s own consideration of how they implement intervention techniques and whether they demonstrate treatment fidelity. Understanding goals and objectives is key to the conduct of practice evaluation.

Discussion questions for chapter 1:

  • Social workers need to conduct practice evaluation focused on two phases of intervention. What are they?
  • Learning science shows us that having students explain concepts to themselves in their own words helps them to grapple with difficult concepts. In your own words, how would you describe the difference between goals and objectives?
  • Imagine that you are a macro social worker tasked with doing community organizing to improve racial justice. What might your goals and objectives be in that effort?
  • You are two years out of your MSW program, and you’ve just been promoted to a supervisory (mezzo) position at your organization. Your executive director asks you to develop a practice evaluation approach focused on supporting new social workers’ use of cognitive behavioral therapy in client sessions. What are your goals and objectives?

References for chapter 1:

Gira, E., Kessler, M., & Poertner, J. (2004). Influencing social workers to use research evidence in practice: Lessons from medicine and the allied health professions. Research on Social Work Practice, 14(2), 68–79.
