
Chapter 11:
Measurement Tools and Strategies

© 2016 Cengage Learning. All rights reserved. May not be copied, scanned, or duplicated, in whole or in part, except for use as permitted in a license distributed with a certain product or service or otherwise on a password-protected website for classroom use.

Why is Good Measurement So Important to Evaluation?
Good measurement provides evaluators with a degree of precision in gauging the magnitude of client problems and in determining any consequent change in those problems.
Important to select instruments that are (a) good indicators of what programs are attempting to accomplish and (b) psychometrically strong.
Not always necessary to use a paper-and-pencil instrument to measure program outcomes.

What Should We Measure?
Ask: “How could the program’s success be demonstrated?”
What is a program trying to accomplish?
If it fails, how would that failure be noted?
Programs can be evaluated with routinely collected data.
Can be evaluated in terms of behavioral outcomes without interviewing, observing, or distributing questionnaires to program recipients.

What Should We Measure?
Behavioral data can include specific physiological measurements.
Can be obtained through the use of client self-monitoring or taping client interactions.
Behavioral outcomes not always available to evaluator.
Sometimes prevention programs measure whether program recipients have increased their knowledge of a given problem.
Main goal may be to change the participants’ attitudes about some behavior or practice.

What Should We Measure?
Often easier to measure attitudes and knowledge than behavior.
Disadvantage is that attitudes and knowledge may not be directly related to behavior.
Connection between attitudes, knowledge and behavior is tenuous at best.

Reliability
Instrument is reliable when it consistently and dependably measures concepts with accuracy.
Administering it to similar groups yields similar results.
Provides a consistent frame of reference.
Several ways to demonstrate reliability (each sketched in code after this list):
Internal consistency: Each item examined for how well it correlates to the scale as a whole.
Split-half technique: Involves dividing scale in half and examining how well they correlate with each other.
Test-retest: Demonstrated when scale holds up well when administered to the same individuals on repeated occasions.
As a rule, adding items to a scale increases reliability.
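The three approaches above can be illustrated with a few lines of Python. This is a minimal sketch, not part of the chapter: the item scores, the retest scores, and the even/odd item split are all hypothetical, and NumPy/SciPy are used only to show the arithmetic.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical data: 6 respondents x 4 items, each scored 1-5.
items = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])

def cronbach_alpha(x):
    """Internal consistency: alpha = k/(k-1) * (1 - sum(item variances)/variance of total)."""
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)
    total_var = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Split-half: correlate totals from the two halves (here, even vs. odd items),
# then apply the Spearman-Brown correction to estimate full-length reliability.
half1 = items[:, ::2].sum(axis=1)
half2 = items[:, 1::2].sum(axis=1)
r_half, _ = pearsonr(half1, half2)
split_half = 2 * r_half / (1 + r_half)

# Test-retest: correlate total scores from two administrations to the same
# people (the second administration's scores are hypothetical).
time1 = items.sum(axis=1)
time2 = time1 + np.array([0, 1, -1, 0, 1, 0])
r_retest, _ = pearsonr(time1, time2)

print(f"Cronbach's alpha:            {cronbach_alpha(items):.2f}")
print(f"Split-half (Spearman-Brown): {split_half:.2f}")
print(f"Test-retest r:               {r_retest:.2f}")
```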

The Reliability of Procedures
Reliability a concern if you revise or devise an instrument to use in evaluation or rely on secondary data.
Concern not with the items used to create scales, but with the data-gathering and reporting procedures.
Most social indicators vastly underestimate the true incidence of social problems in our country.
If the data involve judgments, take steps to ensure high inter-rater reliability.
If raters cannot agree at least 70% of the time, or their scores do not correlate at least at .70, inter-rater reliability is not adequate (see the sketch below).
Can be improved through training and role-playing.
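A minimal sketch of that rule of thumb, assuming two raters have scored the same ten cases (the ratings below are hypothetical):

```python
import numpy as np
from scipy.stats import pearsonr

rater_a = np.array([3, 4, 2, 5, 4, 3, 2, 4, 5, 3])
rater_b = np.array([3, 4, 3, 5, 4, 3, 2, 3, 5, 3])

percent_agreement = np.mean(rater_a == rater_b)   # proportion of exact agreement
r, _ = pearsonr(rater_a, rater_b)                 # correlation between raters' scores

print(f"Percent agreement: {percent_agreement:.0%}")
print(f"Pearson r: {r:.2f}")
if percent_agreement >= 0.70 or r >= 0.70:
    print("Meets the chapter's minimum benchmark for inter-rater reliability.")
else:
    print("Below benchmark: more rater training and role-playing may be needed.")
```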

Validity
Instrument is valid when it closely corresponds to the concept it was designed to measure.
Content validity: Established by asking experts to review whether the sample of items selected for the scale represents the entire range of the concept.
Face validity: Used when colleagues or other knowledgeable persons agree that an instrument appears to measure the concept.
Neither content nor face validity is sufficient for establishing that the scale has “true” validity.
Developer must amass evidence that the scale really does measure what was intended.

Validity
Criterion validity: Instrument can be validated by an external criterion.
Best external criterion may not always be easy to select.
Generally categorized as either predictive (of future behavior or performance) or concurrent (ability to predict current status).
Concurrent validity involves administering the new scale alongside a scale from previous studies that has been shown to be valid (see the sketch below).
Construct validity: Concerned with the theoretical relationship of the scale to other variables.
Involves testing presumed relationships and hypotheses.
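A concurrent-validity check often boils down to a single correlation. The sketch below assumes the same respondents completed both the new scale and an established, previously validated scale; all scores are hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr

new_scale = np.array([12, 18, 25, 9, 22, 15, 30, 11, 20, 27])
validated_scale = np.array([14, 20, 24, 10, 21, 17, 29, 13, 19, 28])

# A strong positive correlation with the established measure is taken as
# evidence of concurrent (criterion) validity.
r, p = pearsonr(new_scale, validated_scale)
print(f"Concurrent validity coefficient: r = {r:.2f} (p = {p:.3f})")
```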

Validity
With the known-groups technique, the investigator administers the instrument to two very different groups, expecting major differences in responses.
A statistically significant difference between the two groups provides evidence of construct validity (sketched below).
If an instrument can be empirically demonstrated to have validity, then it can generally be assumed to be reliable.
Both should be demonstrated as evidence an instrument is psychometrically strong.
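A minimal known-groups sketch, assuming scores from a clinical group and a community sample (both hypothetical) and using an independent-samples t test:

```python
import numpy as np
from scipy.stats import ttest_ind

clinical_group = np.array([42, 38, 45, 40, 44, 39, 41, 43])
community_group = np.array([22, 25, 19, 24, 21, 26, 20, 23])

# If the instrument measures what it claims to, the two known groups should
# differ significantly in their scores.
t, p = ttest_ind(clinical_group, community_group)
print(f"t = {t:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Statistically significant difference: evidence of construct validity.")
else:
    print("No significant difference: little known-groups evidence of validity.")
```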

How to Find an Evaluation Instrument
Measures for Clinical Practice and Research (Fischer & Corcoran, 2013):
Contains scales and descriptive information about 500 rapid assessment instruments.
The Handbook of Psychiatric Measures (Rush, First & Blacker, 2007):
Provides information on a variety of diagnostic measures.
Outcomes Assessment in Clinical Practice (Sederer & Dickey, 1996):
Contains information on 18 instruments.

How to Find an Evaluation Instrument
Walmyr Publishing Company (www.walmyr.com):
Offers a wide variety of useful scales for purchase.
Test Reviews Online (http://buros.unl.edu/buros/jsp/search.jsp):
Has a database of 4,000 commercially available tests for purchase.
Still looking? Try a literature search.
Ask faculty members who are researchers.
If you still cannot find a relevant scale, may be time to consider developing your own instrument.

Constructing “Good” Instruments for Evaluation
First step: Consider exactly what is needed to evaluate the program.
Choices generally involve the dimensions of knowledge; behavior and symptoms; and attitudes, beliefs, and opinions.
The easiest data collection instruments have a single focus.
Not uncommon for instruments to tap many dimensions.
Second step: Determine how to administer the data collection instrument.
Questionnaires can be self-administered or the respondent can be interviewed or observed.
Telephone surveys, electronic surveys and emailed questionnaires are also options.

Constructing “Good” Instruments for Evaluation
Third step: Create a pool of items for potential use.
Length should be servant to purpose and psychometrics.
Determine the response set (e.g., closed-ended choices or a five-point Likert scale); a scoring sketch follows this list.
Multiple response choices help reduce measurement error and bias and make it easier to analyze data.
Good to include one or two open-ended questions to possibly learn things not known or suspected.
Fourth step: Refine the data collection instrument.
Consider question sequencing, difficulty, personal information, memory, length and appearance, and cultural sensitivity.
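As an illustration of a five-point Likert response set, the sketch below scores a short hypothetical instrument, including one reverse-scored (negatively worded) item to keep the response choices balanced. The item names, labels, and answers are all made up.

```python
# Five-point Likert scoring with one reverse-scored item (all hypothetical).
LIKERT = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
          "Agree": 4, "Strongly agree": 5}
REVERSE_SCORED = {"item_3"}   # negatively worded item, flipped before summing

responses = {
    "item_1": "Agree",
    "item_2": "Strongly agree",
    "item_3": "Disagree",      # reverse-scored: a 2 counts as a 4
    "item_4": "Neutral",
}

def score(item, answer):
    value = LIKERT[answer]
    return 6 - value if item in REVERSE_SCORED else value

total = sum(score(item, answer) for item, answer in responses.items())
print(f"Scale total: {total} (possible range {len(responses)}-{len(responses) * 5})")
```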

Common Errors Made in Developing Questionnaires
Wrong, misspelled or omitted words or incorrect grammar.
Proofread and make sure questions are not more complex than they need to be.
Unbalanced response choices.
Balance positive and negative responses to avoid bias in one direction or the other.
Vague terms.
Avoid undefined words like “often” or “regular”.
Questions that have choices that are not mutually exclusive.

Common Errors Made in Developing Questionnaires
Double-barreled questions.
Do not ask two things in one sentence.
Asking for information the respondent cannot be reasonably expected to have.
Asking leading questions.
Using stigmatizing terms.
Once satisfied with questions, conduct a pilot study to identify problems respondents have with the instrument.

Levels of Measurement
For most purposes not necessary to make distinctions between ratio and interval levels of measurement when planning how data are to be analyzed.
For simple descriptions, variables measured at the nominal level work fine.
Ordinal variables are appropriately used for describing samples.
Both nominal and ordinal variables can be used to test whether the proportions in groups are similar (see the chi-square sketch below).
Data recorded at the nominal or ordinal level cannot usually be transformed into interval level data.
Interval/ratio level of measurement is often desired for dependent variables.
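One common way to test whether group proportions are similar with nominal data is a chi-square test of independence. The sketch below uses a made-up crosstab of completion status in a hypothetical program group and comparison group.

```python
import numpy as np
from scipy.stats import chi2_contingency

#                    completed  dropped out
crosstab = np.array([[40, 10],    # program group
                     [28, 22]])   # comparison group

chi2, p, dof, expected = chi2_contingency(crosstab)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```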
