PROGRAM EVALUATION SLIDES

DR. HAROLD K. “HARRY” MCGINNIS

HELMS SCHOOL OF GOVERNMENT

2018

Introduction to Evaluation
What is Evaluation?
What is not Evaluation?
Is Evaluation Research?

What is Evaluation?
Judging the worth or merit of the evaluation object (Scriven, 1967).
Identification, clarification, and application of defensible criteria to determine an evaluation object’s value (worth or merit), quality, utility, effectiveness, or significance in relation to those criteria (Worthen, Sanders, & Fitzpatrick, 1997).

The Definition
The practice of evaluation involves the systematic collection of information about the activities, characteristics, and outcomes of programs, personnel, and products for use by specific people to reduce uncertainties, improve effectiveness, and make decisions with regard to what those programs, personnel, or products are doing and affecting (Patton).

Evaluation is not…

Measurement

Grading
Assessment

Accountability

Appraisal

Uses of Evaluation
Empowering or disempowering individuals.
Liberating or oppressing individuals.
Entertaining political agendas.
Supporting social activism and reform.
Maintaining status quo.
Evaluation has been put to as many uses as there are human motives and agendas.

How do you intend to use evaluation?

Is Evaluation Research?
How do you distinguish between Evaluation and Research?

1. Evaluation is making a judgment about an object.
2. Research is drawing conclusions about an object based on evidence gathered from rigorous social science methods.

Are they different?

Evaluation and Research

Focus of Inquiry
Researchers: Conclusions.
Evaluators: Decisions.

Evaluation and Research

Generalizability of Results
Researchers: the nature of relationships among variables (high).
Evaluators: focused on a particular public program (low).

Evaluation and Research

The Role of Valuing in the Inquiry
Researchers: scientific proof without estimates of worth or value (truth).
Evaluators: quality appraisal (worth).

Research vs. Evaluation: Characteristics (1 of 2)
Purpose - Research: new knowledge, truth. Evaluation: mission achievement, product delivery.
Outcome - Research: generalizable conclusions. Evaluation: specific decisions.
Value - Research: explanatory and predictive power. Evaluation: determining worth and social utility.
Impetus - Research: curiosity and ignorance. Evaluation: needs and goals.
Conceptual Basis - Research: cause-and-effect relationships. Evaluation: means-ends processes.
Important Event - Research: hypothesis testing. Evaluation: assessing attainment of an objective.
Classic Paradigm - Research: 1. the experimental method (experimental [E] and control [C] groups compared on pretest and posttest measures: T1 X T2). Evaluation: 1. the systems approach (input -> processing -> output).

Research vs. Evaluation: Characteristics (2 of 2)
Classic Paradigm (continued) - Research: 2. the correlational method, with control and manipulation of variance. Evaluation: 2. the objectives approach (means -> objectives -> output).
Discipline - Research: pure and applied. Evaluation: program planning and management.
Criteria - Research: internal and external validity. Evaluation: fit between the expected and the obtained; credibility.
Functional Types - Research: true experimental, quasi-experimental. Evaluation: formative-summative, process-product.

Alternative Approaches to Program Evaluation
FORMATIVE
SUMMATIVE
LOGIC MODELS
CIPP
NATURALISTIC

Informal Evaluation:
An Everyday Event
A basic survival trait for all higher-thinking individuals.

Give an example of how we make evaluative decisions throughout our day.

Formal Evaluation:
A Systematic Event
Formal, structured, and public in nature.

Choices are based on systematic efforts to define explicit criteria and obtain accurate information about alternatives.

Purpose of Formal Evaluation
To render judgments about the value of the evaluand that assist decision-makers in determining policy and serve a political function.

What other purposes are there?

Two General Types of Evaluation
Formative: Ongoing to provide feedback to program staff for program improvement.

Summative: Conducted after program is well established to provide decision makers with judgments about the program’s worth.

Formative vs. Summative
Which is more useful?
Which will you be conducting?
Which do you believe is the better use of evaluation?

Formative Evaluation
needs assessment
evaluability assessment
structured conceptualization
implementation evaluation
process evaluation
Formative evaluation includes several evaluation types:
Needs assessment determines who needs the program, how great the need is, and what might work to meet the need.
Evaluability assessment determines whether an evaluation is feasible and how stakeholders can help shape its usefulness.
Structured conceptualization helps stakeholders define the program or technology, the target population, and the possible outcomes.
Implementation evaluation monitors the fidelity of the program or technology delivery.
Process evaluation investigates the process of delivering the program or technology, including alternative delivery procedures.

Formative Questions
What is the question (definition and scope)?
Where is the problem?
How serious is it?
How should the program be delivered to address the problem?
How well is the program delivered?
In formative research the major questions and methodologies are:
What is the definition and scope of the problem or issue, or what’s the question?
Formulating and conceptualizing methods might be used, including brainstorming, focus groups, nominal group techniques, Delphi methods, brainwriting, stakeholder analysis, synectics, lateral thinking, input-output analysis, and concept mapping.
Where is the problem and how big or serious is it?
The most common method used here is “needs assessment,” which can include analysis of existing data sources and the use of sample surveys, interviews of constituent populations, qualitative research, expert testimony, and focus groups.
How should the program or technology be delivered to address the problem?
Some of the methods already listed apply here, as do detailing methodologies like simulation techniques, or multivariate methods like multiattribute utility theory or exploratory causal modeling; decision-making methods; and project planning and implementation methods like flow charting, PERT/CPM, and project scheduling.
How well is the program or technology delivered?
Qualitative and quantitative monitoring techniques, the use of management information systems, and implementation assessment would be appropriate methodologies here.

Formative Evaluation
Process evaluation is often equated with formative evaluation because they both provide information useful in service development and improvement.
The primary objective of formative evaluation is to improve and refine service operations and procedures on an ongoing basis, rather than waiting to make retrospective decisions about whether the treatment service is effective.
The evaluator becomes involved in creating a more successful treatment service by providing input at the early stages of treatment planning and development.

Formative Evaluation
Formative evaluation activities change the notion of the evaluator as a neutral, detached observer.
Instead, the evaluator works closely with the personnel to influence service planning, development, and implementation.
Evaluators must become familiar with multiple aspects of the service, providing personnel with information and insights to assist in improving the service.
As a result, revisions may be made in service organization, management, staffing, and activities.

Summative Evaluation
outcome evaluation
impact evaluation
cost-effectiveness and cost-benefit
secondary analysis
meta-analysis
Summative evaluation can also be subdivided:
Outcome evaluations investigate whether the program or technology caused demonstrable effects on specifically defined target outcomes.
Impact evaluation is broader and assesses the overall or net effects (intended or unintended) of the program or technology as a whole.
Cost-effectiveness and cost-benefit analysis address questions of efficiency by standardizing outcomes in terms of their dollar costs and values.
Secondary analysis reexamines existing data to address new questions or use methods not previously employed.
Meta-analysis integrates the outcome estimates from multiple studies to arrive at an overall or summary judgment on an evaluation question.
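To make the meta-analysis entry above concrete, here is a minimal sketch in Python of fixed-effect, inverse-variance pooling, one common way to integrate outcome estimates from multiple studies. The effect sizes, standard errors, and the choice of a fixed-effect model are illustrative assumptions, not material from the slides.

```python
import math

# Hypothetical outcome estimates from three studies of the same program:
# each entry is (standardized effect size, standard error).
studies = [(0.30, 0.10), (0.45, 0.15), (0.20, 0.08)]

# Fixed-effect (inverse-variance) pooling: weight each study by 1 / SE^2.
weights = [1.0 / se ** 2 for _, se in studies]
pooled_effect = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"Pooled effect: {pooled_effect:.3f} (SE {pooled_se:.3f})")
```

In practice an evaluator would rely on a dedicated meta-analysis package and consider a random-effects model; the point here is only how separate estimates combine into one summary judgment.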

Summative Questions
What type of evaluation is feasible?
What is the effectiveness of the program?
What is the net impact of the program?
The questions and methods addressed under summative evaluation include:
What type of evaluation is feasible?
Evaluability assessment can be used here, as well as standard approaches for selecting an appropriate evaluation design.
What was the effectiveness of the program or technology?
One would choose from observational and correlational methods for demonstrating whether desired effects occurred, and quasi-experimental and experimental designs for determining whether observed effects can reasonably be attributed to the intervention and not to other sources.
What is the net impact of the program?
Econometric methods for assessing cost effectiveness and cost/benefits would apply here, along with qualitative methods that enable us to summarize the full range of intended and unintended impacts.
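As a rough illustration of the effectiveness and net-impact questions above, the sketch below (Python, with entirely hypothetical outcome and cost figures) estimates a simple treatment-versus-comparison difference in means and then expresses it as cost-effectiveness and benefit-cost figures. A real econometric evaluation would account for selection, covariates, and statistical uncertainty; this only shows the arithmetic.

```python
# Hypothetical posttest scores for program participants and a comparison group.
treatment = [72, 68, 75, 80, 77, 70]
comparison = [65, 66, 60, 71, 68, 64]

effect = sum(treatment) / len(treatment) - sum(comparison) / len(comparison)

# Hypothetical program cost and an assumed dollar value per unit of outcome gain.
total_cost = 120_000.0     # cost of serving the treatment group
participants = len(treatment)
value_per_point = 3_000.0  # assumed monetized value of a one-point gain

cost_per_participant = total_cost / participants
cost_effectiveness = cost_per_participant / effect  # dollars per point gained
benefit_cost_ratio = (effect * value_per_point) / cost_per_participant

print(f"Estimated effect: {effect:.1f} points")
print(f"Cost-effectiveness: ${cost_effectiveness:,.0f} per point gained per participant")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
```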

Alternative Approaches and House's (1983) Continuum
Utilitarian evaluation (assess overall impacts): objectives-oriented, management-oriented, and consumer-oriented approaches.
Intuitionist-pluralist evaluation (assess value to each person): expertise-oriented, adversary-oriented, and naturalistic & participant-oriented approaches.

OBJECTIVES-ORIENTED

Tylerian Models
Logic Models

LOGIC MODELS

Goal: What is the overarching goal you seek to achieve? Why are you thinking about intervening?
Problem: Why does the problem exist? Why do you think it is amenable to intervention? Do clients need or want intervention?
Context: What is the environment in which your program operates? This can include state and local issues such as funding, leadership, or organizational capacity.
Inputs: What resources are being used (or are needed) to operate the program? For example, funding, staff, time, materials, and space.
Activities: What are you doing (or do you plan to do), for whom, with the funding you request?
Outputs: What will the program produce immediately with respect to participation and service delivery?
Outcomes: What are you trying to change with the intervention? What are the expected results? How will participants benefit? (See the data-structure sketch after the Bickman logic model on the next slide.)

27

A Logic Model
(BICKMAN, 1987)
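Because Bickman's diagram did not survive extraction, here is a minimal sketch of a logic model expressed as a plain data structure (Python; the tutoring program and every entry in it are invented examples, not content from the slides). It traces the chain that the questions on the previous slide walk through: goal, problem, context, inputs, activities, outputs, and outcomes.

```python
# A hypothetical after-school tutoring program expressed as a simple logic model.
logic_model = {
    "goal": "Improve reading proficiency among 3rd graders in District X",
    "problem": "40% of 3rd graders read below grade level",
    "context": ["state literacy mandate", "limited district funding"],
    "inputs": ["2 FTE tutors", "$150,000 grant", "classroom space", "curriculum"],
    "activities": ["90-minute tutoring sessions, 3x per week", "parent workshops"],
    "outputs": ["120 students served", "5,000 tutoring hours delivered"],
    "outcomes": {
        "short_term": ["improved decoding and fluency scores"],
        "long_term": ["grade-level reading proficiency", "reduced retention rates"],
    },
}

# Formative questions mostly probe the left side of the chain (inputs, activities,
# outputs); summative questions probe the right side (outcomes and impact).
for component, content in logic_model.items():
    print(f"{component:>10}: {content}")
```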

CIPP Model (Stufflebeam et al., 1971)
Context evaluation serves planning decisions.
Input evaluation serves structuring decisions.
Process evaluation serves implementing decisions.
Product evaluation serves recycling decisions, to judge and react to program attainments.

CIPP MODEL
Context evaluation assesses needs, assets, and problems within a defined environment.
Input evaluation assesses competing strategies and the work plans and budgets of the selected approach.
Process evaluation monitors, documents, and assesses program activities.
Impact evaluation assesses a program’s reach to the target audience.

Context Evaluation
Context evaluation provides a rationale for determining educational objectives. Its objectives are to:
assess the object’s overall status;
identify its deficiencies and the strengths available to correct the weaknesses; and
diagnose problems that are limiting the object’s well-being.

Input Evaluation
Search out and critically examine potentially appropriate approaches intended to bring about change.
Identify and rate relevant approaches, and thoroughly analyze the one selected for implementation.

Process Evaluation
The focus is the implementation of a program or strategy.
The purpose is to provide feedback about needed modifications if implementation is inadequate.
Observing and documenting program activities is important.

Product Evaluation
Measures, interprets, and judges the attainments of a program.
Determines the extent to which identified needs were met as well as broad program effects.

Types of Evaluation
Context, input, process, and product evaluation can each be applied in two ways:
Proactive (formative) evaluation, to serve decision-making.
Retroactive (summative) evaluation, to serve accountability.

UTILIZATION-FOCUSED EVALUATION

Utilization-Focused Evaluation: evaluations should be judged by their utility and actual use; evaluators should therefore facilitate the evaluation process and design the evaluation, from beginning to end, with careful consideration of how everything that is done will affect use.
Use concerns how real people in the real world apply evaluation findings and experience the evaluation process.
It answers the question of whose values will frame the evaluation by working with clearly identified, primary intended users who have responsibility to apply evaluation findings and implement recommendations.
Utilization-focused evaluation is highly personal and situational. The evaluator develops a working relationship with intended users to help them determine what kind of evaluation they need. This requires negotiation over a menu of possibilities within the framework of established evaluation standards and principles.

Utilization-focused evaluation does not advocate any particular evaluation content, model, method, theory, or use. It is a process for helping primary intended users select the most appropriate content, model, methods, theory, and uses for their situation.
A utilization-focused evaluation can include any evaluative purpose (formative or summative), any kind of data (quantitative, qualitative, mixed), any kind of design (e.g., CIPP, systems, logic model), and any kind of focus (processes, outcomes, impacts, costs, and cost-benefit, among many possibilities).
Intended users are more likely to use evaluations if they understand and feel ownership of the evaluation process and findings, and they are more likely to understand and feel ownership if they have been actively involved. The evaluator is thus training users in use and preparing the groundwork for use.

(Michael Quinn Patton, 2002)

Types of Evaluators
Internal evaluator: A representative of the program or organization conducts the evaluation.
External evaluator: An outside person is hired to conduct the evaluation.

Roles of Evaluation
Formative (typically internal):
Appraisals of quality of programs that are still capable of modification.
Inform program developers how to correct deficiencies.
Summative (typically external):
Focus on completed programs.
Determine worth to inform adoption decisions.

Evaluation Questions
The purpose of the evaluation is to answer some question that the sponsors, clients, or stakeholders have about their program.
Questions provide direction and a foundation for the study.

WHERE DO EVALUATION QUESTIONS ORIGINATE?
POLITICAL MANDATES
PLANS
RESEARCH
JOURNALISM

Who Determines Evaluation Questions, Criteria, & Standards?
The evaluator?
Stakeholders?
Other experts?
The literature?
Other programs?

Determining Evaluation Questions
Divergent Phase: The evaluator collects as many questions as possible from as many stakeholders as possible.
Goal: to understand the context of the evaluand and to gather concerns, issues, and consequences (CCI) of the program stakeholders.

REFINING QUESTIONS AND ANSWERING THEM IS DIVERGENT AND CONVERGENT
A seven-step process spanning the divergent and convergent phases:
(1) Determine the theme.
(2) Observe the current situation.
(3) Analyze root causes.
(4) Develop improvements.
(5) Verify the results.
(6) Standardize improvements.
(7) Conclude the project.

Possible Sources of Divergent Questions
CCI of stakeholders (Concerns, Issues, Consequences of the program).
Evaluation approaches.
Models from the literature (theory-based evaluation).
Professional standards and checklists.
Expert views.
The evaluator’s professional judgment.

CCI of Stakeholders
Most important source for identifying evaluation questions.
Identify stakeholders to include:
Policy Makers;
Administrators;
Practitioners;
Primary Consumers; and
Secondary Consumers.

CCI of Stakeholders
Conduct informal interviews with stakeholders to ask general questions about current thoughts on the program.
Free association works best to uncover CCI.
Don’t focus interviews at this early stage.
Simply listen and learn.

Models from the Literature
(Theory-based Evaluation)
Do a literature search for the constructs.
What does the literature say?
Find out if other evaluations have been conducted on programs similar to yours.
Use the literature to guide question formation by understanding the causes of the problems your program is designed to solve.

When Are You Done Collecting Questions?

When you have reached data saturation.
No new questions are emerging in your search.
Can you organize your questions into categories such as Context, Process, and Outcomes?
If so, you have enough questions to develop the evaluation plan.
Time to stop searching for new questions and move toward the convergent phase.
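One way to picture the stopping rule described above is the small sketch below (Python, with invented interview data): track which candidate questions each successive stakeholder interview adds, and stop searching once recent interviews contribute nothing new. The two-interview rule and the example questions are assumptions for illustration only.

```python
# Candidate evaluation questions gathered across successive stakeholder interviews.
interviews = [
    ["Is staffing adequate?", "Are sessions delivered as planned?"],
    ["Do participants improve reading scores?", "Is staffing adequate?"],
    ["Are sessions delivered as planned?"],        # nothing new
    ["Do participants improve reading scores?"],   # nothing new
]

seen = set()
new_per_interview = []
for questions in interviews:
    fresh = [q for q in questions if q not in seen]
    seen.update(fresh)
    new_per_interview.append(len(fresh))

# A simple saturation rule: stop when the last two interviews yielded no new questions.
saturated = len(new_per_interview) >= 2 and sum(new_per_interview[-2:]) == 0
print(f"New questions per interview: {new_per_interview}")
print(f"Reached saturation: {saturated}")
```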

Convergent Phase
The need to focus the study into a manageable project that will yield important and useful results.
Narrow down the laundry list of questions to the most critical.

Cronbach’s Seven Criteria
Ask yourself these questions as you decide which questions on the list to keep or drop:
1. Who needs the information?
2. Would an answer to this question reduce uncertainty about the program?
3. Would an answer to this question yield important information?

Cronbach’s Seven Criteria (cont.)
4. Does this question address an issue of critical, continuing interest to the program?
5. Does this question increase the scope of the evaluation?
6. Will answering this question impact programming?
7. Can this question be answered?
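As one way to picture the convergent phase, the sketch below (Python, with invented candidate questions and ratings) records a yes/no judgment on each of the seven criteria for every question and keeps only those that satisfy most of them. The retention cut-off and all of the example ratings are assumptions for illustration, not part of Cronbach's criteria themselves.

```python
CRITERIA = [
    "an identified audience needs the information",
    "an answer would reduce uncertainty about the program",
    "an answer would yield important information",
    "the issue is of critical, continuing interest to the program",
    "the question appropriately extends the scope of the evaluation",
    "answering it would impact programming",
    "the question can be answered",
]

# Hypothetical stakeholder questions rated against the seven criteria (True/False).
candidates = {
    "Are tutoring sessions delivered as designed?":  [True, True, True, True, False, True, True],
    "Do participants like the program logo?":        [False, False, False, False, False, False, True],
    "Do reading scores improve after one semester?": [True, True, True, True, True, True, True],
}

KEEP_THRESHOLD = 5  # keep questions meeting at least five of the seven criteria
retained = [q for q, ratings in candidates.items() if sum(ratings) >= KEEP_THRESHOLD]
print("Questions retained for the evaluation plan:", retained)
```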

Developing Criteria and Standards
Each question needs a matching standard.
Setting the bar for success: Where is it?
What criteria will you use to determine the extent to which the evaluand is addressing stakeholders’ concerns and issues?
What level of performance is acceptable to determine program success or failure?

Standards
Standards reflect the degree of difference between a treatment and non-treatment group that would be considered sufficiently meaningful to adopt a new program.
Absolute vs. Relative standards: which applies to your program?
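To make the absolute-versus-relative distinction concrete, here is a minimal sketch in Python with entirely hypothetical scores and thresholds: an absolute standard checks the treatment group against a fixed benchmark, while a relative standard checks whether the treatment-versus-comparison difference is large enough (here, a standardized effect size) to be considered meaningful.

```python
import statistics

# Hypothetical posttest scores.
treatment = [74, 70, 77, 81, 79, 72]
comparison = [66, 68, 61, 72, 69, 65]

# Absolute standard: at least 75% of treated participants score 70 or higher.
ABSOLUTE_CUTOFF, ABSOLUTE_RATE = 70, 0.75
rate = sum(score >= ABSOLUTE_CUTOFF for score in treatment) / len(treatment)
meets_absolute = rate >= ABSOLUTE_RATE

# Relative standard: treatment exceeds comparison by at least 0.25 pooled standard deviations.
MIN_EFFECT_SIZE = 0.25
n1, n2 = len(treatment), len(comparison)
s1, s2 = statistics.stdev(treatment), statistics.stdev(comparison)
pooled_sd = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
effect_size = (statistics.mean(treatment) - statistics.mean(comparison)) / pooled_sd
meets_relative = effect_size >= MIN_EFFECT_SIZE

print(f"Absolute standard met: {meets_absolute} ({rate:.0%} at or above {ABSOLUTE_CUTOFF})")
print(f"Relative standard met: {meets_relative} (effect size {effect_size:.2f})")
```

Either kind of standard should be negotiated with stakeholders before data collection, as the next slide notes.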

Standards
Each program has unique standards and criteria.
Need to negotiate each criterion and standard with stakeholders prior to conducting the evaluation.
Need to be realistic and consistent with the literature to have a credible evaluation study.

Final Thoughts
Should evaluation questions change during the evaluation?
Absolutely.
Should you stick to the budget originally outlined?
Not necessarily.
What if conditions change during the study?
Renegotiate with your sponsors.
Remain flexible at all times.

Guiding Principles for Evaluation: American Evaluation Association
Systematic inquiry: Evaluators conduct systematic, data-based inquiries about whatever is being evaluated.
Competence: Evaluators provide competent performance to stakeholders.
Integrity/Honesty: Evaluators ensure the honesty and integrity of the entire evaluation process.
Respect for People: Evaluators respect the security, dignity, and self-worth of the respondents, program participants, clients, and other stakeholders with whom they interact.
Responsibilities for General and Public Welfare: Evaluators articulate and take into account the diversity of interests and values that may be related to the general and public welfare.