Chapter 9

As professional social workers committed to practicing under our Code of Ethics, we must not only evaluate our work, but we must do so according to a set of guidelines for this practice. This starts with a consideration of whether review by what is known as an Institutional Review Board (IRB) or Human Subjects Committee is needed. An IRB is a committee of experts and community members who review research and evaluation proposals to identify any potential ethical challenges before the start of the project in question.

Technically, practice evaluation that is designed for internal consumption only (i.e., within a social worker’s organization) does not require review by such a board. A majority of practice evaluations will fall in that category. However, if you are interested in publishing your results for the public or presenting your results at a conference, review by an IRB is required. While many organizations do not have an IRB, they can partner with a local university in order to access one, or they can contract with a non-university-affiliated IRB for a short-term period.

Regardless of whether IRB review is required or not, social workers engaging in practice evaluation should follow ethical guidelines in their work. Here, I discuss four topics that you should address in preparing your work (Royse, Thyer & Padgett, 2016). There are many articles and books that provide a more in-depth discussion about research ethics, but this chapter is meant as a general overview of the key elements of ethical practice in evaluation. Additional readings on this topic are located in the reference section for this chapter.

First, evaluations must involve voluntary participation. Second, sufficient information about the nature of the evaluation must be shared with participants. Third, participants must understand that no harm will come as a result of being part of the evaluation. Fourth, sensitive information about participants must be protected.

Evaluation participants must be involved on a voluntary basis (this does not exclude the use of an honorarium). Nobody can be forced to participate in an evaluation and no harm can come as a result of participating in one. With respect to this guideline, it is important to consider who approaches the client about participation.

If a client’s therapist invites them to participate in the evaluation, this might feel like pressure or coercion to the client. The same would hold true if a parole officer asked this of a parolee, for example. Evaluation participants’ right to self-determination is of paramount importance. A participant’s choice to step out of an evaluation at any point in the process must be respected.

In order to participate in a practice evaluation, participants must be competent to understand the choice before them and must consent to taking part in the evaluation. Participants aged 18 or over can consent for themselves. If a participant is under 18, you must obtain parental/caregiver permission and still secure verbal assent from the young participant.

If you are planning to conduct practice evaluation with human participants, you will need to use either a disclosure form or a consent form. These documents are required in order to provide people with a full understanding of their rights, the risks and benefits of participation, and a general explanation of what will be asked of them.

A disclosure form is used for internet or telephone surveys, for example, where obtaining a signed consent form would be impossible. Otherwise, a consent form is used. For an example of a consent form, see page 57.

In practice evaluation projects, evaluators can promise clients that their answers to questions will be confidential, but not anonymous. Confidentiality and anonymity are not the same thing and should not be used interchangeably. If your project is confidential, it cannot, by definition, be anonymous. Likewise, if your project is anonymous, it cannot be confidential.

Providing anonymity for information collected from evaluation participants requires that either the project does not collect identifying information (e.g., name, address, email address, etc.), or the project cannot link individual answers with specific participants. Ideally, personal information should not be collected unless it is absolutely necessary to the project. Anonymity cannot be guaranteed if evaluators gather personally identifiable information.

Examples of Personally Identifiable Information (PII):

  • Name
  • Addresses
  • Employer’s name or address
  • Relatives’ names or addresses
  • Dates (e.g., birthdate, date of death, etc.)
  • Phone/fax numbers
  • E-mail addresses
  • Social Security numbers
  • Member/account numbers
  • Voiceprints
  • Fingerprints
  • Full-face photos and comparable images

Confidentiality of information collected from evaluation participants, on the other hand, means that only the evaluators can identify the responses of client participants. Either way, evaluators must make every effort to keep anyone outside of the evaluation team from making connections between individuals and their responses.

Approaches to increasing the level of confidentiality in an evaluation include, but are not limited to, the following six strategies. First, evaluators should use alphanumeric codes on evaluation documents (e.g., completed surveys, interview transcripts) instead of recording identifying information on those documents. This requires keeping a separate document that links each code to the participant’s identifying information, locked in a separate location with restricted access.

Second, evaluators can encrypt any identifiable data they collect. Third, when using quantitative surveys, evaluators can remove face or cover sheets containing identifiers (e.g., names and addresses) from surveys containing data after receiving them back from participants.

Fourth, evaluators can pay close attention to the need to properly dispose of, destroy, or delete study data and related documents in a timely manner. Fifth, by limiting access to identifiable information, evaluators can protect against a data breach. Sixth, evaluators can securely store data documents in locked locations and/or create security codes for computerized data. Informed consent documents should also be kept separately from clients’ evaluation data.
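For readers who manage evaluation data electronically, the first strategy above can be sketched in a short script. This is only a minimal illustration: the record fields, the `pseudonymize` helper, and the code format are all hypothetical, not part of any standard tool.

```python
import secrets

# Hypothetical survey records; "name" stands in for any identifying field.
surveys = [
    {"name": "A. Client", "score": 12},
    {"name": "B. Client", "score": 9},
]

def pseudonymize(records, id_field="name"):
    """Replace the identifying field with a random alphanumeric code.

    Returns (deidentified_records, linkage_table). The linkage table,
    which maps each code back to the identifier, should be stored
    separately under restricted access.
    """
    linkage = {}
    deidentified = []
    for record in records:
        code = "P-" + secrets.token_hex(4)  # random code, e.g. "P-3f9a1c2e"
        linkage[code] = record[id_field]
        # Copy every field except the identifier, then attach the code.
        stripped = {k: v for k, v in record.items() if k != id_field}
        stripped["code"] = code
        deidentified.append(stripped)
    return deidentified, linkage

data, linkage = pseudonymize(surveys)
```

In keeping with the strategies above, the linkage table would then be encrypted or kept in a locked location, apart from the de-identified data files.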

While not an ethical obligation, evaluators should always report back to evaluation participants on the findings of their evaluation. Not only does this allow clients to understand more about the efficacy of the intervention they have participated in, it also shows respect to people who have volunteered their time to help the organization in this way.

A note on respect for culture in evaluation

Culture is an important element, or set of elements, in practice evaluation. According to the Centers for Disease Control and Prevention,

“When we conduct an evaluation, everything we do reflects our own cultural values and perspectives—from the evaluation purpose, the questions we develop, and the methodologies we select to our interpretation of the findings and the recommendations we make based on those findings. Because culture is influenced by many characteristics (i.e., race, ethnicity, language, gender, age, religion, sexual orientation, education, and experience), it is important that we stop and reflect on our own culture before embarking on an evaluation. To conduct culturally competent evaluations, we must learn and appreciate each program’s cultural context and acknowledge that we may view and interpret the world differently from many evaluation stakeholders” (CDC, 2014, 3).

Social work evaluators should strive to be culturally competent, culturally humble and culturally responsive evaluators. It is important to note that these terms mean different things, even though people may use them interchangeably.

Guidance from the American Evaluation Association to evaluators suggests that they should draw “upon a wide range of evaluation theories and methods to design and carry out an evaluation that is optimally matched to the (cultural) context. In constructing a model or theory of how the evaluation operates, the evaluator” should reflect “the diverse values and perspectives of key stakeholder groups” (American Evaluation Association, 2011).

This last statement addresses cultural competence, which may be thought of as a knowledge state. This concept is defined by the Department of Health and Human Services (HHS) as “understanding the core needs of your target audience and designing services and materials to meet those needs strategically. It is important to regularly and honestly evaluate your organizational and operational practices to ensure all voices are heard and reflected” (HHS, 2019).

Being a culturally competent evaluator can be thought of as both seeking and retaining knowledge about the diversity of clients or staff in an agency in a manner that assists the evaluation process. One example of the need for cultural competency relates to how evaluation data are collected. Suppose an evaluator chose a validated data collection instrument focused on understanding eating patterns for a heart disease prevention intervention, to be administered before, during, and after the intervention. Some of the questions in the instrument address sensitive topics, such as cultural eating practices and cultural perceptions of body image (CDC, 2014). The evaluator would need to consider whether participants could be offended by questions they perceive as stereotyping, so as to support client engagement in the evaluation process.

The concept of cultural competence, however, is widely critiqued for two reasons. While cultural competence encourages respect for cultural variations, it may “emphasize similarities at the expense of individual differences” (Ortega and Faller, 2011, 31). Additionally, it may not push social workers to go far enough in holding themselves accountable for the power and privilege they hold in their roles and this is where cultural humility and cultural responsivity step in (Ortega and Faller, 2011).

Cultural humility may be thought of as a stance. In comparison to cultural competence, this concept emphasizes intersectionality, openness, transcendence and self-awareness about biases, privilege and power.

Intersectionality theory posits that people “simultaneously occupy multiple positions (such as gender, disability, race, etc.) within the socio-cultural-political and structural fabric of society” (Ortega and Faller, 2011, 31). These different positions result in different experiences. Recognizing intersectionality is vital to cultural humility.

Openness is required so that social workers can acknowledge “that the experiences of others, outside of themselves, require them to be open to their experiences, from their perspective” (Ortega and Faller, 2011, 32). In planning and conducting an evaluation, this means that social workers need to learn from the people they are helping: staff and clients.

Transcendence relates to “the fact that in efforts to know themselves personally and professionally…people must embrace the reality that the world is far more complex and dynamic than perhaps they can imagine” (Ortega and Faller, 2011, 32).

Self-awareness about biases, privilege and power is necessary for social workers planning and conducting the evaluation process. Social workers “must appreciate who they are from a cultural perspective and how this shapes the lens through which they view” their evaluation work (Ortega and Faller, 2011, 31). If social workers draw solely on their own view of the world, they will not be able to gather new insights or awareness, and will not be able to change their behavior as a result. Being self-aware “will ideally awaken the worker to the power imbalance of workers and clients that may influence their response to the services they provide” (Ortega and Faller, 2011, 31).

The following case example highlights the ways in which being self-aware and open are core competencies for practice evaluators. Imagine that your organization wanted to evaluate a social skills group for young women with disabilities. Let’s say you are working in an urban area with children from new immigrant families that are primarily low income. After asking around for evaluators in your area, you hire a man with experience doing school-based evaluations with the same age group as your clients in the suburbs outside of your city.

Holding a master’s degree in evaluation, “Jaleel” (not his real name) prepares for the evaluation work by reading up on your social skills curriculum and readies himself to collect data. During the focus groups, Jaleel notices that it is difficult to get the young women to open up. Later, you learn that the young women felt Jaleel had a different way of speaking and acting than they were used to. Because Jaleel is an upper-middle-class suburban evaluator, social class and possibly gender appeared to play more of a factor than you anticipated. Consideration of the social identities of data gatherers is of paramount importance in evaluation design.

Finally, cultural responsivity may be thought of as an action. In thinking about the term cultural responsivity, we are focused on the need to be continual learners (vis-à-vis our client population) who demonstrate new cultural knowledge back to clients in a respectful manner. Culturally responsive evaluating can be thought of as using cultural knowledge, prior experiences, frames of reference, and presentations of ethnically or otherwise diverse clients to make evaluation experiences more relevant and effective for them (Gay, 2010).

Culturally responsive evaluation, then, is the expression of knowledge, beliefs, and values that recognize the importance of racial, cultural and other aspects of diversity in the practice of evaluation. Engaging in the work of culturally responsive evaluation requires certain practice evaluation competencies.

The following competencies are informed by Teel and Obidah (2008). Seeing cultural differences among evaluation team members and the client community as assets is vital to the first steps in culturally responsive evaluation. Creating caring organizations where culturally different people are valued will inform the evaluation process. Demonstrating the use of cultural knowledge of diverse groups to inform evaluation will support client engagement in the process.

Being able to challenge stereotypes, prejudices, intolerance, injustice, and oppression at various points in the evaluation design and implementation process is necessary for culturally responsive evaluation. This includes the mediation of power imbalances in organizations based on varying social identities. Finally, evaluators need to accept that the practice of cultural responsivity is a required element of effective evaluation processes.

Exemplar questions that can be asked by a culturally responsive evaluator might focus on how evaluation results are delivered. For example: Are communication mechanisms culturally appropriate based on what we learned from stakeholders? “Does the reporting method meet stakeholder needs (both the message and the messenger)? Are the data presented in context, with efforts made to clarify issues and prevent misuse? Has the community benefited as anticipated? How? How has cultural responsiveness increased both the truthfulness and utility of the results? Do the action plans draw on community strengths and capacity?” (CDC, 2014, 20).

In summary, practice evaluators must abide by ethical rules while conducting evaluations. Clients must understand that their participation is voluntary. Informed consent about the evaluation process must be provided, along with information about client confidentiality (vs. anonymity). As social workers committed to practice under the Code of Ethics, practice evaluators will want to consider the ways in which they can embrace cultural competence, cultural humility and cultural responsivity at all points in the evaluation process.

Discussion questions for chapter 9

  • Explain the central elements of informed consent in the practice evaluation process.
  • Describe the difference between confidentiality and anonymity as it pertains to practice evaluation.
  • In thinking about evaluating your own field placement, how would cultural competence, cultural humility and cultural responsivity play into your evaluation design?

References for chapter 9

American Evaluation Association. (2011). Public statement on cultural competence in evaluation. Washington, DC: Author. Retrieved from https://www.eval.org/p/cm/ld/fid=51

Centers for Disease Control and Prevention. (2014). Practical strategies for culturally competent evaluation. Atlanta, GA: U.S. Department of Health and Human Services.

Department of Health and Human Services (HHS). (2019). Cultural competence. Office of Population Affairs, Baltimore, MD. Retrieved from: https://www.hhs.gov/ash/oah/resources-and-training/tpp-and-paf-resources/cultural-competence/index.html

Gay, G. (2010). Culturally responsive teaching: Theory, research, and practice (2nd ed.). New York, NY: Teachers College Press.

Ortega, R., & Faller, K. (2011). Training child welfare workers from an intersectional cultural perspective: A paradigm shift. Child Welfare, 90(5), 27-50.

Royse, D., Thyer, B. A., & Padgett, D. K. (2016). Program evaluation: An introduction to an evidence-based approach (6th ed.). Boston, MA: Cengage Learning.

Teel, K. M., & Obidah, J. E. (2008). Building racial and cultural competence in the classroom: Strategies from urban educators. New York, NY: Teachers College Press.