Health Program Planning and Evaluation: A Practical, Systematic Approach for Community Health

L. Michele Issel, PhD, RN
Professor, PhD Program

University of North Carolina at Charlotte College of Health and Human Services

Charlotte, North Carolina

Rebecca Wells, PhD, MHSA
Professor

The University of Texas School of Public Health

Houston, Texas

Health Program Planning and Evaluation: A Practical, Systematic Approach for Community Health

FOURTH EDITION

 

 

World Headquarters
Jones & Bartlett Learning
5 Wall Street
Burlington, MA 01803
978-443-5000
info@jblearning.com
www.jblearning.com

Jones & Bartlett Learning books and products are available through most bookstores and online booksellers. To contact Jones & Bartlett Learning directly, call 800-832-0034, fax 978-443-8000, or visit our website, www.jblearning.com.

Substantial discounts on bulk quantities of Jones & Bartlett Learning publications are available to corporations, professional associations, and other qualified organizations. For details and specific discount information, contact the special sales department at Jones & Bartlett Learning via the above contact information or send an email to specialsales@jblearning.com.

Copyright © 2018 by Jones & Bartlett Learning, LLC, an Ascend Learning Company

All rights reserved. No part of the material protected by this copyright may be reproduced or utilized in any form, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without written permission from the copyright owner.

The content, statements, views, and opinions herein are the sole expression of the respective authors and not that of Jones & Bartlett Learning, LLC. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not constitute or imply its endorsement or recommendation by Jones & Bartlett Learning, LLC and such reference shall not be used for advertising or product endorsement purposes. All trademarks displayed are the trademarks of the parties noted herein. Health Program Planning and Evaluation: A Practical, Systematic Approach for Community Health, Fourth Edition is an independent publication and has not been authorized, sponsored, or otherwise approved by the owners of the trademarks or service marks referenced in this product.

There may be images in this book that feature models; these models do not necessarily endorse, represent, or participate in the activities represented in the images. Any screenshots in this product are for educational and instructive purposes only. Any individuals and scenarios featured in the case studies throughout this product may be real or fictitious, but are used for instructional purposes only.

This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold with the understanding that the publisher is not engaged in rendering legal, accounting, or other professional service. If legal advice or other expert assistance is required, the service of a competent professional person should be sought.


Production Credits
VP, Executive Publisher: David D. Cella
Publisher: Michael Brown
Associate Editor: Danielle Bessette
Vendor Manager: Nora Menzi
Senior Marketing Manager: Sophie Fleck Teague
Manufacturing and Inventory Control Supervisor: Amy Bacus
Composition and Project Management: S4Carlisle Publishing Services
Cover Design: Scott Moden
Director of Rights & Media: Joanna Gallant
Rights & Media Specialist: Merideth Tumasz
Media Development Editor: Shannon Sheehan
Cover Image: © Lynne Nicholson/Shutterstock
Printing and Binding: Edwards Brothers Malloy
Cover Printing: Edwards Brothers Malloy

Library of Congress Cataloging-in-Publication Data
Names: Issel, L. Michele, author. | Wells, Rebecca, 1966- author.
Title: Health program planning and evaluation: a practical, systematic approach for community health / L. Michele Issel and Rebecca Wells.
Description: Fourth edition. | Burlington, MA: Jones & Bartlett Learning, [2018] | Includes bibliographical references and index.
Identifiers: LCCN 2017010386 | ISBN 9781284112115 (pbk.)
Subjects: | MESH: Community Health Services—organization & administration | Program Development—methods | Health Planning—methods | Program Evaluation—methods | United States
Classification: LCC RA394.9 | NLM WA 546 AA1 | DDC 362.12068—dc23
LC record available at https://lccn.loc.gov/2017010386


Printed in the United States of America 21 20 19 18 17 10 9 8 7 6 5 4 3 2 1

 

 


Contents

List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi

List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii

List of Exhibits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii

Preface to the Fourth Edition . . . . . . . . . . . . . . . . . . xix

Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv

List of Acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxvii

SECTION I The Context of Health Program Development 1

Chapter 1 Context of Health Program Development and Evaluation . . . . . . . . . . . . . . . 3

History and Context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

Concept of Health . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

Health Programs, Projects, and Services . . . . . . 4

History of Health Program Planning and Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Evaluation as a Profession . . . . . . . . . . . . . . . . . . . . . . . . . 8

Who Does Planning and Evaluations? . . . . . . .10

Roles of Evaluators . . . . . . . . . . . . . . . . . . . . . . . . . .10

Planning and Evaluation Cycle . . . . . . . . . . . . . . . . . . .11

Interdependent and Cyclic Nature of Planning and Evaluation . . . . . . . . . . . . . . .11

Using Evaluation Results as the Cyclical Link . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .13

Program Life Cycle . . . . . . . . . . . . . . . . . . . . . . . . . .13

The Fuzzy Aspects of Planning . . . . . . . . . . . . . . . . . . .14

Paradoxes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .14

Assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16

Uncertainty, Ambiguity, Risk, and Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .17

Introduction to the Types of Evaluation . . . . . . . . . .19

Mandated and Voluntary Evaluations . . . . . . .20

When Not to Evaluate . . . . . . . . . . . . . . . . . . . . . .21

The Public Health Pyramid . . . . . . . . . . . . . . . . . . . . . . .21

Use of the Public Health Pyramid in Program Planning and Evaluation . . . . . . . .23

The Public Health Pyramid as an Ecological Model . . . . . . . . . . . . . . . . . . . . .23

The Town of Layetteville in Bowe County . . . . . . . . .25

Across the Pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .25

Discussion Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . .27

Internet Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .27

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .27

Chapter 2 Relevance of Diversity and Disparities to Health Programs . . . . . . . . . . . . . . . . . . 29

Health Disparities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .30

Diversity and Health Disparities . . . . . . . . . . . . .32

Diversity and Health Programs . . . . . . . . . . . . . .33

Measurement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .33

Interventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .38

Influences of Sociocultural Diversity on Interventions . . . . . . . . . . . . . . . . . . . . . . . . .38

Influences of Biological Diversity on Interventions . . . . . . . . . . . . . . . . . . . . . . . . .39

Approaches to Developing Programs . . . . . . .39

Profession and Provider Diversity . . . . . . . . . . . .40

The Three Health Provider Sectors . . . . . . . . . .43

Diversity Within Healthcare Organizations and Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .43

Organizational Culture . . . . . . . . . . . . . . . . . . . . . .44

Cultural Competency Continuum . . . . . . . . . . .44

Enhancing Cultural Competency . . . . . . . . . . .48

 

 


Stakeholders and Coalitions . . . . . . . . . . . . . . . . . . . . .50

Across the Pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .51

Discussion Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . .53

Internet Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .53

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .54

SECTION II Defining the Health Problem 57

Chapter 3 Community Health Assessment for Program Planning . . . . . . . . 59

Defining Community . . . . . . . . . . . . . . . . . . . . . . . . . . . .59

Community as Context and Intended Recipient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .60

Defining Terms: Based, Focused, and Driven . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .61

Types of Needs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .62

Types of Strengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .63

Approaches to Planning . . . . . . . . . . . . . . . . . . . . . . . . .64

Incremental Approach . . . . . . . . . . . . . . . . . . . . . .64

Apolitical Approach . . . . . . . . . . . . . . . . . . . . . . . .66

Advocacy Approach . . . . . . . . . . . . . . . . . . . . . . . .66

Communicative Action Approach . . . . . . . . . .67

Comprehensive Rational Approach . . . . . . . . .67

Strategic Planning Approach . . . . . . . . . . . . . . .68

Summary of Approaches . . . . . . . . . . . . . . . . . . .69

Models for Planning Public Health Programs . . . . .69

Mobilizing for Action through Planning and Partnership (MAPP) . . . . . . . . . . . . . . . . . .70

Community Health Improvement Process (CHIP) . . . . . . . . . . . . . . . . . . . . . . . . . . . .70

Protocol for Assessing Community Excellence in Environmental Health (PACE-EH) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .70

In Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .70

Perspectives on Assessment . . . . . . . . . . . . . . . . . . . . .71

Epidemiological Perspective . . . . . . . . . . . . . . . .72

Public Health Perspective . . . . . . . . . . . . . . . . . . .74

Social Perspective . . . . . . . . . . . . . . . . . . . . . . . . . .74

Asset Perspective . . . . . . . . . . . . . . . . . . . . . . . . . . .74

Rapid Perspective . . . . . . . . . . . . . . . . . . . . . . . . . .75

Types of Assessments . . . . . . . . . . . . . . . . . . . . . . . . . . . .75

Organizational Assessment . . . . . . . . . . . . . . . . .75

Marketing Assessment . . . . . . . . . . . . . . . . . . . . . .76

Needs Assessment . . . . . . . . . . . . . . . . . . . . . . . . .76

Community Health Assessment . . . . . . . . . . . . .77

Workforce Assessment . . . . . . . . . . . . . . . . . . . . . .77

Steps in Planning and Conducting the Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .77

Form and Develop the Team . . . . . . . . . . . . . . . .78

Create a Vision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .79

Involve Community Members . . . . . . . . . . . . . .79

Define the Population . . . . . . . . . . . . . . . . . . . . . .80

Define the Problem to Be Assessed . . . . . . . . .81

Investigate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .81

Prioritize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .82

Make a Decision . . . . . . . . . . . . . . . . . . . . . . . . . . . .82

Implement and Continue . . . . . . . . . . . . . . . . . . .83

Anticipate Data-Related and Methodological Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .83

Across the Pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .85

Discussion Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . .85

Internet Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .86

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .87

Chapter 4 Characterizing and Defining the Health Problem . . . . . . . . . . . . . . . . . . . 91

Collecting Data From Multiple Sources . . . . . . . . . . .91

Public Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .91

Primary Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .92

Observational Data . . . . . . . . . . . . . . . . . . . . . . . . .92

Archival Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .93

Proprietary Data . . . . . . . . . . . . . . . . . . . . . . . . . . . .93

Published Literature . . . . . . . . . . . . . . . . . . . . . . . .93

Data Beyond the Street Lamp . . . . . . . . . . . . . . .93

Collecting Descriptive Data . . . . . . . . . . . . . . . . . . . . . .94

Magnitude of the Problem . . . . . . . . . . . . . . . . . .94

Dynamics Leading to the Problem . . . . . . . . . .94

Population Characteristics . . . . . . . . . . . . . . . . . .96

Attitudes and Behaviors . . . . . . . . . . . . . . . . . . . .96

Years of Life and Quality of Life . . . . . . . . . . . . . .96

 

 


Statistics for Describing Health Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .99

Descriptive Statistics . . . . . . . . . . . . . . . . . . . . . . 100

Geographic Information Systems: Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101

Small Numbers and Small Areas . . . . . . . . . . 101

Epidemiology Rates . . . . . . . . . . . . . . . . . . . . . . 102

Stating the Health Problem . . . . . . . . . . . . . . . . . . . . 102

Diagramming the Health Problem . . . . . . . . 102

Writing a Causal Theory of the Health Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108

Prioritizing Health Problems . . . . . . . . . . . . . . . . . . . 110

Nominal Group Technique . . . . . . . . . . . . . . . . 111

Basic Priority Rating System . . . . . . . . . . . . . . . 111

Propriety, Economics, Acceptability, Resources, and Legality (PEARL) Component . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

Prioritizing Based on Importance and Changeability . . . . . . . . . . . . . . . . . . . . . 114

Across the Pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

Discussion Questions and Activities . . . . . . . . . . . . 117

Internet Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

SECTION III Health Program Development and Planning 121

Chapter 5 Program Theory and Interventions Revealed . . . . . . . . . . . . . . . . . . 123

Program Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124

Process Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . 125

Effect Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125

Interventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

Finding and Identifying Interventions . . . . . 126

Types of Interventions . . . . . . . . . . . . . . . . . . . . 127

Specifying Intervention Administration and Dosage . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

Interventions and Program Components . . . . 130

Characteristics of Good Interventions . . . . . 131

Path to Program Outcomes and Impacts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134

Components of the Effect Theory . . . . . . . . . 135

Matching Levels: Audience, Cause, Intervention, and Effects . . . . . . . . . . . . . . . 137

Generating the Effect Theory . . . . . . . . . . . . . . . . . . 138

Involve Key Stakeholders . . . . . . . . . . . . . . . . . 138

Draw Upon the Scientific Literature . . . . . . . 138

Diagram the Causal Chain of Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140

Check Against Assumptions . . . . . . . . . . . . . . 141

Functions of Program Theory . . . . . . . . . . . . . . . . . . 141

Provide Guidance . . . . . . . . . . . . . . . . . . . . . . . . . 141

Enable Explanations . . . . . . . . . . . . . . . . . . . . . . 142

Form a Basis for Communication . . . . . . . . . . 142

Make a Scientific Contribution . . . . . . . . . . . . 143

Across the Pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

Discussion Questions and Activities . . . . . . . . . . . . 144

Internet Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145

Chapter 6 Program Objectives and Setting Targets . . . . . . . . 147

Program Goals and Objectives . . . . . . . . . . . . . . . . . 147

Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147

Foci of Objectives . . . . . . . . . . . . . . . . . . . . . . . . . 148

Objectives and Indicators . . . . . . . . . . . . . . . . . 151

Good Goals and Objectives . . . . . . . . . . . . . . . 154

Using Data to Set Target Values . . . . . . . . . . . . . . . . 156

Decisional Framework for Setting Target Values . . . . . . . . . . . . . . . . . . . . . . . . . . . 156

Stratification and Objective Target Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159

Use of Logic Statements to Develop Targets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160

Options for Calculating Target Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160

Caveats to the Goal-Oriented Approach . . . . . . . 170

Across the Pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171

Discussion Questions and Activities . . . . . . . . . . . . 171

Internet Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172

 

 


SECTION IV Implementing and Monitoring the Health Program 173

Chapter 7 Process Theory for Program Implementation . . . . . . . . . . . 175

Organizational Plan Inputs . . . . . . . . . . . . . . . . . . . . . 175

Human Resources . . . . . . . . . . . . . . . . . . . . . . . . 177

Physical Resources . . . . . . . . . . . . . . . . . . . . . . . . 179

Transportation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180

Informational Resources . . . . . . . . . . . . . . . . . . 180

Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180

Managerial Resources . . . . . . . . . . . . . . . . . . . . 180

Fiscal Resources . . . . . . . . . . . . . . . . . . . . . . . . . . 182

Organizational Plan Outputs . . . . . . . . . . . . . . . . . . . 182

Time Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182

Operations Manual . . . . . . . . . . . . . . . . . . . . . . . 182

Organizational Chart . . . . . . . . . . . . . . . . . . . . . . 184

Information System . . . . . . . . . . . . . . . . . . . . . . . 185

Inputs to Services Utilization Plan . . . . . . . . . . . . . . 185

Social Marketing . . . . . . . . . . . . . . . . . . . . . . . . . . 185

Eligibility Screening . . . . . . . . . . . . . . . . . . . . . . . 185

Queuing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189

Intervention Delivery . . . . . . . . . . . . . . . . . . . . . 189

Services Utilization Plan Outputs . . . . . . . . . . . . . . . 191

Summary: Elements of Organizational and Services Utilization Plans . . . . . . . . . . . 192

Alternative Plan Formats . . . . . . . . . . . . . . . . . . . . . . . 192

Logic Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193

Business Plans . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195

Across the Pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195

Discussion Questions and Activities . . . . . . . . . . . . 197

Internet Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198

Chapter 8 Monitoring Implementation Through Budgets and Information Systems . . . . . . . 201

Budgets and Budgeting . . . . . . . . . . . . . . . . . . . . . . . 201

Budgeting Terminology . . . . . . . . . . . . . . . . . . . 202

Budgeting as Part of Planning . . . . . . . . . . . . . . . . . . 204

Monetize and Compute Program Costs . . . . . 204

Budget for Start-Up and Evaluation Costs . . . 205

Break-Even Analysis . . . . . . . . . . . . . . . . . . . . . . . 205

Budget Justification . . . . . . . . . . . . . . . . . . . . . . 207

Budget as a Monitoring Tool . . . . . . . . . . . . . . . . . . . 209

Budget Variance . . . . . . . . . . . . . . . . . . . . . . . . . . 209

Types of Cost Analyses . . . . . . . . . . . . . . . . . . . . 209

Information Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 213

Health Informatics Terminology . . . . . . . . . . . 214

Information Systems Considerations . . . . . . 214

Across the Pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216

Discussion Questions and Activities . . . . . . . . . . . . 217

Internet Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218

Chapter 9 Implementation Evaluation: Measuring Inputs and Outputs . . . . . . . . . . . . . . . 219

Assessing the Implementation . . . . . . . . . . . . . . . . . 219

Implementation Documentation . . . . . . . . . 220

Implementation Assessment . . . . . . . . . . . . . 221

Implementation Evaluation . . . . . . . . . . . . . . . 221

Efficacy, Effectiveness, and Efficiency . . . . . . . . . . . 222

Data Collection Methods . . . . . . . . . . . . . . . . . . . . . . 223

Quantifying Inputs to the Organizational Plan . . . . . . . . . . . . . . . . . . . 223

Human Resources . . . . . . . . . . . . . . . . . . . . . . . . 228

Physical Resources . . . . . . . . . . . . . . . . . . . . . . . . 229

Quantifying Outputs of the Organizational Plan . . . . . . . . . . . . . . . . . . . 230

Information Systems . . . . . . . . . . . . . . . . . . . . . . 230

Monetary Resources . . . . . . . . . . . . . . . . . . . . . . 230

Quantifying Inputs to the Services Utilization Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230

Participants and Recipients . . . . . . . . . . . . . . . 230

Intervention Delivery and Fidelity . . . . . . . . . 231

Quantifying Outputs of the Services Utilization Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234

Coverage as Program Reach . . . . . . . . . . . . . . 234

Participant-Related Issues . . . . . . . . . . . . . . . . . 238

Program Logistics . . . . . . . . . . . . . . . . . . . . . . . . . 240

 

 


Across the Pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241

Discussion Questions and Activities . . . . . . . . . . . . 242

Internet Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243

Chapter 10 Program Quality and Fidelity: Managerial and Contextual Considerations . . . . . . . . . . . . 245

The Accountability Context . . . . . . . . . . . . . . . . . . . . 246

Program Accountability . . . . . . . . . . . . . . . . . . . 246

Professional Accountability . . . . . . . . . . . . . . . 246

Performance and Quality: Navigating the Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247

Quality Improvement Approaches . . . . . . . . 248

Quality Improvement Tools . . . . . . . . . . . . . . . 248

Relevance to Health Programs . . . . . . . . . . . . 251

Performance Measurement . . . . . . . . . . . . . . . 252

Informatics and Information Technology . . . .253

Creating Change for Quality and Fidelity . . . . . . . 255

Interpreting Implementation Data . . . . . . . . 255

Maintaining Program Process Quality and Fidelity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257

Managing Group Processes for Quality and Fidelity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258

When and What Not to Change . . . . . . . . . . . 259

Formative Evaluations . . . . . . . . . . . . . . . . . . . . . . . . . 259

Across the Pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259

Discussion Questions . . . . . . . . . . . . . . . . . . . . . . . . . . 260

Internet Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261

SECTION V Outcome and Impact Evaluation of Health Programs 263

Chapter 11 Planning the Intervention Effect Evaluations . . . . . . . . . 265

Developing the Evaluation Questions . . . . . . . . . . 266

Characteristics of the Right Question . . . . . 267

Outcome Documentation, Outcome Assessment, and Outcome Evaluation . . . 268

Evaluation and Research . . . . . . . . . . . . . . . . . . 268

Rigor in Evaluation . . . . . . . . . . . . . . . . . . . . . . . . 270

Variables from the Program Effect Theory . . . . . . 271

Outcome and Impact Dependent Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271

Causal Factors as Independent Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273

Antecedent, Moderating, and Mediating Factors as Variables . . . . . . . . . . 273

Measurement Considerations . . . . . . . . . . . . . . . . . . 275

Units of Observation . . . . . . . . . . . . . . . . . . . . . . 275

Types of Variables (Levels of Measurement) . . . . . . . . . . . . . . . . . . . . . . . 275

Timing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278

Sensitivity of Measures . . . . . . . . . . . . . . . . . . . . 278

Threats to Data Quality . . . . . . . . . . . . . . . . . . . . . . . . 279

Missing Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279

Reliability Concerns . . . . . . . . . . . . . . . . . . . . . . . 280

Validity of Measures . . . . . . . . . . . . . . . . . . . . . . 281

Contextual Considerations in Planning the Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281

Evaluation Standards . . . . . . . . . . . . . . . . . . . . . 281

Ethics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282

Stakeholders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282

Across the Pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284

Discussion Questions and Activities . . . . . . . . . . . . 284

Internet Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285

Chapter 12 Choosing Designs for Effect Evaluations . . . . . . . . . 287

Evaluation Design Caveats . . . . . . . . . . . . . . . . . . . . . 288

Considerations in Choosing a Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289

Using Designs Derived from Multiple Paradigms: An Example . . . . . . . . . . . . . . . . 294

Choosing the Evaluation Design . . . . . . . . . . . . . . . 294

Identifying Design Options . . . . . . . . . . . . . . . 294

Overview of the Decision Tree . . . . . . . . . . . . 295

Designs for Outcome Documentation . . . . 298

Designs for Outcome Assessment: Establishing Association . . . . . . . . . . . . . . . . 301

 

 


Designs for Outcome Evaluation: Establishing Causation . . . . . . . . . . . . . . . . . 307

Practical Issues with Experimental Designs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307

Designs and Failures . . . . . . . . . . . . . . . . . . . . . . . . . . . 309

Across the Pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311

Discussion Questions . . . . . . . . . . . . . . . . . . . . . . . . . . 312

Internet Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312

Chapter 13 Sampling Designs and Data Sources for Effect Evaluations . . . . . . . . . 315

Sampling Realities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315

Sample Construction . . . . . . . . . . . . . . . . . . . . . . . . . . 317

Hard-to-Reach Populations . . . . . . . . . . . . . . . 318

Sample Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318

Calculating Response Rates . . . . . . . . . . . . . . . 319

Sampling for Effect Evaluations . . . . . . . . . . . . . . . . 322

Sampling for Outcome Assessment . . . . . . . 322

Sampling for Outcome Evaluation . . . . . . . . 324

Data Collection Methods . . . . . . . . . . . . . . . . . . . . . . 324

Surveys and Questionnaires . . . . . . . . . . . . . . 325

Secondary Data . . . . . . . . . . . . . . . . . . . . . . . . . . 328

Big Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329

Physical Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330

Across the Pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330

Discussion Questions and Activities . . . . . . . . . . . . 330

Internet Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331

Chapter 14 Quantitative Data Analysis and Interpretation . . . . . . . . . . . . 335

Data Entry and Management . . . . . . . . . . . . . . . . . . 335

Outliers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337

Linked Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337

Sample Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338

Thinking About Change . . . . . . . . . . . . . . . . . . . . . . . 339

Change as a Difference Score . . . . . . . . . . . . . 339

Issues with Quantifying Change from the Program . . . . . . . . . . . . . . . . . . . . . . 339

Relationship of Change to Intervention Effort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342

Clinical and Statistical Significance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343

Across Levels of Analysis . . . . . . . . . . . . . . . . . . . . . . . 343

Statistical Answers to the Questions . . . . . . . . . . . 345

Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346

Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348

Association . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349

Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350

Interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353

Four Fallacies of Interpretation . . . . . . . . . . . . 353

Ecological Fallacy . . . . . . . . . . . . . . . . . . . . . . . . . 354

Across the Pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356

Discussion Questions and Activities . . . . . . . . . . . . 356

Internet Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357

Chapter 15 Qualitative Methods for Planning and Evaluation . . . . . . . . . . . . . . . 359

Qualitative Methods Throughout the Planning and Evaluation Cycle . . . . . . . . . . . . . . 359

Qualitative Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . 360

Individual In-Depth Interview . . . . . . . . . . . . 361

Written Open-Ended Questions . . . . . . . . . . . 362

Focus Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363

Observation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364

Data Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . 364

Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365

Innovative Methods . . . . . . . . . . . . . . . . . . . . . . 366

Scientific Rigor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368

Sampling for Qualitative Methods . . . . . . . . . . . . . 369

Analysis of Qualitative Data . . . . . . . . . . . . . . . . . . . . 372

Overview of Analytic Process . . . . . . . . . . . . . 372

Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374

Issues to Consider . . . . . . . . . . . . . . . . . . . . . . . . 374

Presentation of Findings . . . . . . . . . . . . . . . . . . . . . . . 375

Across the Pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376

 

 

ix Contents

Reporting Responsibly . . . . . . . . . . . . . . . . . . . . . . . . . 392

Report Writing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392

Making Recommendations . . . . . . . . . . . . . . . 394

Misuse of Evaluations . . . . . . . . . . . . . . . . . . . . . 397

Responsible Contracts . . . . . . . . . . . . . . . . . . . . . . . . . 398

Organization–Evaluator Relationship . . . . . . 398

Health Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399

Responsible for Evaluation Quality . . . . . . . . . . . . . 400

Responsible for Dissemination . . . . . . . . . . . . . . . . . 401

Responsible for Current Practice . . . . . . . . . . . . . . . 402

Across the Pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404

Discussion Questions and Activities . . . . . . . . . . . . 405

Internet Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .409

Discussion Questions and Activities . . . . . . . . . . . . 377

Internet Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377

SECTION VI Additional Considerations for Evaluators 381

Chapter 16 Program Evaluators’ Responsibilities . . . . . . . . . . . 383

Ethical Responsibilities . . . . . . . . . . . . . . . . . . . . . . . . . 383

Ethics and Planning . . . . . . . . . . . . . . . . . . . . . . . 383

Institutional Review Board Approval and Informed Consent . . . . . . . . . . . . . . . . . 385

Ethics and Evaluation . . . . . . . . . . . . . . . . . . . . . 387

HIPAA and Evaluations . . . . . . . . . . . . . . . . . . . . 388

Responsible Spin of Data and Information . . . . . 389

Persuasion and Information . . . . . . . . . . . . . . . 389

Information and Sensemaking . . . . . . . . . . . . 391

 

 

 

xi

© Lynne Nicholson/Shutterstock

List of Figures

Figure 1-1 The Planning and Evaluation Cycle

Figure 1-2 The Public Health Pyramid

Figure 1-3 The Pyramid as an Ecological Model

Figure 2-1 Effects of Diversity Throughout the Planning and Evaluation Cycle

Figure 3-1 Connections Among Program, Agency, and Community

Figure 3-2 Venn Diagram of Community-Based, Community-Focused, and Community-Driven

Figure 3-3 The Planning and Evaluation Cycle

Figure 4-1 Generic Model of a Theory of Causes

Figure 4-2 Diagram of Theory of Causes/Determinants of Receiving Immunizations, as Contributing to Adult Immunization Rates, Using the Layetteville Example

Figure 4-3 Diagram of Theory of Causes/Determinants for Deaths from Gunshot Wounds, as Contributing to Adolescent Death Rates, Using the Layetteville Example

Figure 4-4 Diagram of Theory of Causes/Determinants for Neural Tube Defects, as Contributing to Rates of Congenital Anomalies, Using the Bowe County Example

Figure 4-5 Theory of Causes/Determinants with Elements of the BPRS Score: Size, Seriousness, and Interventions

Figure 5-1 Model of Program Theory

Figure 5-2 The Effect Theory Showing the Causal Theory Using Community Diagnosis Elements

Figure 5-3 Effect Theory Example: Effect Theory for Reducing the Rate of Congenital Anomalies

Figure 5-4 Two Roots of Program Failure

Figure 6-1 Using Elements of Program Theory as the Basis for Writing Program Objectives

Figure 6-2 Diagram Showing Relationship of Effect Theory Elements to Process and Outcome Objectives

Figure 6-3 Calculations of Options 1 Through 4 Using a Spreadsheet

Figure 6-4 Calculations of Options 5 Through 8 Using a Spreadsheet

Figure 6-5 Calculations of Options 9 and 10 Using a Spreadsheet

Figure 7-1 Amount of Effort Across the Life of a Health Program

Figure 7-2 Diagram of the Process Theory Elements Showing the Components of the Organizational Plan and Services Utilization Plan

Figure 7-3 Process Theory for Neural Tube Defects and Congenital Anomalies Health Problem

Figure 7-4 Effect and Process Theory for Neural Tube Defect Prevention Program

Figure 8-1 Relevance of Process Theory to Economic Evaluations

Figure 8-2 Information System Processes Throughout the Program Planning Cycle

Figure 9-1 Elements of the Process Theory Included in a Process Evaluation

Figure 9-2 Roots of Program Failure

Figure 9-3 Examples of Organizational Plan Inputs and Outputs That Can Be Measured

Figure 9-4 Examples of Services Utilization Inputs and Outputs That Can Be Measured

Figure 10-1 List of Quality Improvement Tools with Graphic Examples

Figure 11-1 Planning and Evaluation Cycle, with Effect Evaluation Highlights

Figure 11-2 Diagram of Net Effects to Which Measures Need to Be Sensitive

Figure 11-3 Using the Effect Theory to Identify Effect Evaluation Variables

Figure 11-4 Effect Theory of Reducing Congenital Anomalies Showing Variables

Figure 12-1 Relationship Between the Ability to Show Causality and the Costs and Complexity of the Design

Figure 12-2 Decision Tree for Choosing an Evaluation Design, Based on the Design’s Typical Use

Figure 12-3 Three Sources of Program Failure

Figure 13-1 Probability and Nonprobability Samples and Their Usage

Figure 14-1 Contributing Factors to the Total Amount of Change

Figure 14-2 Summary of the Three Decisions for Choosing an Analytic Approach

Figure 14-3 Five Ways That the Rate of Change Can Be Altered

Figure 16-1 Making Recommendations Related to the Organizational and Services Utilization Plans

Figure 16-2 Making Recommendations Related to the Program Theory

Figure 16-3 The Planning and Evaluation Cycle with Potential Points for Recommendations

List of Tables

Table 1-1 Comparison of Outcome-Focused, Utilization-Focused, and Participatory-Focused Evaluations

Table 1-2 Evaluation Standards Established by the Joint Committee on Standards for Educational Evaluation

Table 1-3 Fuzzy Aspects Throughout the Planning and Evaluation Cycle

Table 1-4 A Summary of the Healthy People 2020 Priority Areas

Table 2-1 Examples of Cultural Tailoring Throughout the Program Planning and Evaluation Cycle

Table 2-2 Indicators Used to Measure Race in Different Surveys

Table 2-3 Professional Diversity Among Health Professions

Table 2-4 Cultural Continuum with Examples of the Distinguishing Features of Each Stage

Table 3-1 Three Elements of Community, with Their Characteristics

Table 3-2 Summary of the Six Approaches to Planning, with Public Health Examples

Table 3-3 Comparison of Models Developed for Public Health Planning

Table 3-4 A Comparison of the Five Perspectives on Community Health and Needs Assessment

Table 4-1 Haddon’s Typology for Analyzing an Event, Modified for Use in Developing Health Promotion and Prevention Programs

Table 4-2 Quality-of-Life Acronyms and Definitions

Table 4-3 Global Leading Causes of Disability-Adjusted Life-Years (DALYs) and Years of Life Lost (YLL)

Table 4-4 Numerators and Denominators for Selected Epidemiological Rates Commonly Used in Community Health Assessments

Table 4-5 Existing Factors, Moderating Factors, Key Causal Factors, Mediating Factors, and Health Outcome and Impact for Five Health Problems in Layetteville and Bowe County

Table 4-6 Relationship of Problem Definition to Program Design and Evaluation

Table 4-7 Criteria for Rating Problems According to the BPRS

Table 4-8 Program Prioritization Based on the Importance and Changeability of the Health Problem

Table 4-9 Examples of Sources of Data for Prioritizing Health Problems at Each Level of the Public Health Pyramid

Table 4-10 Examples of Required Existing, Causal, and Moderating Factors Across the Pyramid

Table 5-1 Examples of Interventions by Type and Level of the Public Health Pyramid

Table 5-2 Comparison of Effect Theory, Espoused Theory, and Theory-in-Use

Table 5-3 Examples of Types of Theories Relevant to Developing Theory of Causative/Determinant Factors or Theory of Intervention Mechanisms by Four Health Domains

Table 5-4 Examples of Types of Theories Relevant to Developing the Organizational Plan and Services Utilization Plan Components of the Process Theory

Table 6-1 Aspects of Process Objectives as Related to Components of the Process Theory, Showing the TAAPS Elements

Table 6-2 Domains of Individual or Family Health Outcomes with Examples of Corresponding Indicators and Standardized Measures

Table 6-3 Bowe County Health Problems with Indicators, Health Outcomes, and Health Goals

Table 6-4 Effect Objectives Related to the Theory of Causal/Determinant Factors, Theory of the Intervention Mechanisms, and Theory of Outcome to Impact, Using Congenital Anomalies as an Example, Showing the TREW Elements

Table 6-5 Effect Objectives Related to the Theory of Causal/Determinant Factors, Theory of the Intervention Mechanisms, and Theory of Outcome to Impact, Using Adolescent Pregnancy as an Example, Showing the TREW Elements

Table 6-6 Matrix of Decision Options Based on Current Indicator Value, Population Trend of the Health Indicator, and Value of Long-Term Objective or Standard

Table 6-7 Framework for Target Setting: Interaction of Data Source Availability and Consistency of Information

Table 6-8 Summary of When to Use Each Option

Table 6-9 Range of Target Values Derived from Options 1 Through 10, Based on the Data from Figures 6-3 Through 6-5

Table 7-1 List of Health Professionals with a Summary of Typical Legal and Regulatory Considerations

Table 7-2 Relationship of Test Sensitivity and Specificity to Overinclusion and Underinclusion

Table 7-3 Examples of Partial- and Full-Coverage Programs by Level of the Public Health Pyramid

Table 7-4 Template for Tracking Services Utilization Outputs Using Example Interventions and Hypothetical Activities

Table 7-5 Hypothetical Logic Model of a Program for Reducing Congenital Anomalies

Table 7-6 Generic Elements of a Business Plan, with Their Purpose and Corresponding Element of the Process Theory and Logic Model

Table 8-1 Formulas Applied for Options A and B

Table 9-1 Methods of Collecting Process Evaluation Data

Table 9-2 Example of Measures of Inputs and Outputs of the Organizational Plan

Table 9-3 Examples of Measures of Inputs and Outputs of the Services Utilization Plan

Table 9-4 Matrix of Undercoverage, Ideal Coverage, and Overcoverage

Table 9-5 Examples of Process Evaluation Measures Across the Public Health Pyramid

Table 10-1 Types of Program Accountability, with Definitions and Examples of Process Evaluation Indicators

Table 10-2 Comparison of Improvement Methodologies and Program Process Evaluation

Table 10-3 Definitions of Terms Used in Performance Measurement

Table 10-4 Partial List of Existing Performance Measurement Systems Used by Healthcare Organizations, with Their Websites

Table 11-1 Three Levels of Intervention Effect Evaluations

Table 11-2 Differences Between Evaluation and Research

Table 11-3 Advantages and Disadvantages of Using Each Type of Variable

Table 11-4 Examples of Nominal, Ordinal, and Continuous Variables for Different Health Domains

Table 11-5 Example Time Line Showing the Sequence of Intervention and Evaluation Activities

Table 11-6 Summary of Evaluation Elements

Table 12-1 Contribution of Various Disciplines to Health Program Evaluation

Table 12-2 Summary of Main Designs and Their Use for Individual or Population-Level Evaluations

Table 12-3 Approaches to Minimizing Each of the Three Types of Program Failure

Table 13-1 Probability and Nonprobability Samples and Their Usage

Table 13-2 Comparison of Main Types of Samples with Regard to Implementation Ease, Degree of Representativeness, and Complexity of Sampling Frame

Table 13-3 Example of Data Sources for Each Health and Well-Being Domain

Table 13-4 Interaction of Response Bias and Variable Error

Table 14-1 Calculation of Effectiveness and Adequacy Indices: An Example

Table 14-2 Intervention Efficiency as a Relation of Effect Size and Causal Size

Table 14-3 Factors That Affect the Choice of a Statistical Test: Questions to Be Answered

Table 14-4 Analysis Procedures by Level of Intervention and Level of Analysis

Table 14-5 Commonly Used Parametric and Nonparametric Statistical Tests for Comparison, Association, and Prediction

Table 14-6 Main Types of Comparison Analyses Used by Level of Analysis and Assuming That the Variables Are at the Same Level of Measurement

Table 14-7 Main Types of Association Analyses Used by Level of Analysis, Assuming That Variables Are at the Same Level of Measurement

Table 14-8 Example of Statistical Tests for Strength of Association by Level of Measurement, Using Layetteville Adolescent Antiviolence Program

Table 14-9 Examples of Statistical Tests by Evaluation Design and Level of Measurement, with Examples of Variables

Table 14-10 Main Types of Prediction Analyses Used by Level of Analysis, Assuming That Variables Are at the Same Level of Measurement

Table 15-1 Comparison of Qualitative Perspectives with Regard to the Basic Question Addressed and the Relevance to Health Program Planning and Evaluation

Table 15-2 Comparison of Major Qualitative Perspectives with Regard to the Method Used

Table 15-3 Summary of Key Benefits and Challenges to Using Qualitative Methods in Planning and Evaluation

Table 15-4 Sampling Considerations for Each of the Qualitative Methods Discussed

Table 15-5 Summary of Types of Sampling Strategies Used with Qualitative Designs

Table 15-6 Example of Interview Text with Final Coding

Table 15-7 Suggested Qualitative Methods by Pyramid Level and Planning Cycle

Table 16-1 Ethical Frameworks and Principles for Planning Health Programs

Table 16-2 Comparison of Types of IRB Reviews

Table 16-3 Eight Elements of Informed Consent, as Required in 45 CFR 46

Table 16-4 Effect of Rigor and Importance of Claims on Decision Making

Table 16-5 List of Ways to Make Graphs More Interpretable

Table 16-6 Examples of Dissemination Modes, Audiences, and Purposes

List of Exhibits

Exhibit 2-1 Checklist to Facilitate Development of Cultural and Linguistic Competence Within Healthcare Organizations

Exhibit 2-2 Checklist to Facilitate Cultural Competence in Community Engagement

Exhibit 7-1 Example of an Abbreviated Time Line for a Short-Term Health Program

Exhibit 7-2 Chapter Text Paragraph Rewritten at an Eighth-Grade Reading Level

Exhibit 8-1 Example of a Scenario Needing a Break-Even Analysis

Exhibit 8-2 Example of a Budget Used for a Break-Even Analysis for Bright Light on an Excel Spreadsheet

Exhibit 8-3 Break-Even Table Shows Number of Paying Students Needed to Break Even

Exhibit 8-4 Example of a Budget Showing Year-to-Date Variance

Exhibit 8-5 Types of Cost Analyses

Exhibit 9-1 Formulas for Measures of Coverage

Exhibit 9-2 Example of Narrative Background about Coverage and Dosage Measures

Exhibit 9-3 Examples of Coverage Measures Using an Excel Spreadsheet

Exhibit 9-4 Examples of Calculating Dosage for the Congenital Anomalies Prevention Program Using Excel


Preface to the Fourth Edition

The fourth edition of Health Program Planning and Evaluation has stayed true to the purpose and intent of the previous editions. This advanced-level text is written to address the needs of professionals from diverse health disciplines who find themselves responsible for developing, implementing, or evaluating health programs. The aim of the text is to assist health professionals to become not only competent health program planners and evaluators but also savvy consumers of evaluation reports and prudent users of evaluation consultants. To that end, the text includes a variety of practical tools and concepts necessary to develop and evaluate health programs, presenting them in language understandable to both the practicing and novice health program planner and evaluator.

Health programs are conceptualized as encompassing a broad range of programmatic interventions that span the social-ecological range, from individual-level to population-level programs. Examples of programs cited throughout the text are specific yet broadly related to improving health and reflect the breadth of public health programs. The examples have been updated once again to reflect current best practices. Maintaining a public health focus provides an opportunity to demonstrate how health programs can target different levels of a population, different determinants of a health problem, and different strategies and interventions to address a health problem. In addition, examples of health programs and references are selected to pique the interests of the diverse students and practicing professionals who constitute multidisciplinary program teams. Thus, the content and examples presented throughout the text are relevant to health administrators, medical social workers, nurses, nutritionists, pharmacists, public health professionals, physical and occupational therapists, and physicians.

This textbook grew from teaching experiences with both nurses and public health students and their need for direct application of the program planning and evaluation course content to their work and to their clients and communities. Today, programs are increasingly provided through community-based healthcare settings to address broad public health issues, expanding the focus from the individual to the population. Distinguishing between individual patient health and population health is a prerequisite for students from clinical backgrounds to think and plan in terms of aggregates and full populations.

In most graduate health professions programs, students take a research methods course and a statistics course. Therefore, this evaluation text avoids duplicating content related to research methods and statistics while extending that content into health program development, implementation, and evaluation. In addition, because total quality management and related methodologies are widely used in healthcare organizations, areas of overlap between quality improvement methodologies and traditional program evaluation approaches are discussed, including ways that quality improvement methodologies complement program evaluations. Sometimes evaluations are appropriate; sometimes they are not. Enthusiasm for providing health programs and performing evaluations is tempered with thoughtful notes of caution, in the hope that students will avoid potentially serious and costly program and evaluation mistakes.

 

 


▸ Unique Features

The Fourth Edition has retained the three unique features that distinguish this text from other program planning and evaluation textbooks: use of the public health pyramid, consistent use of a model of the program theory throughout the text, and role modeling of evidence-based practice.

The public health pyramid explains how health programs can be developed for individuals, aggregates, populations, and service delivery systems. Use of the pyramid is also intended as a practical application of the social-ecological perspective that acknowledges a multilevel approach to addressing health problems. The public health pyramid contains four levels: direct services to individuals; enabling services to aggregates; services provided to entire populations; and, at the base, infrastructure. In this textbook, the pyramid is used as an organizing structure to summarize the content of each chapter in the “Across the Pyramid” sections. In these sections, specific attention is paid to how key concepts in a given chapter might vary across each pyramid level. Summarizing the chapter content in this manner reinforces the perspective that enhancing health and well-being requires integrated efforts across the levels of the public health pyramid. Health program development and evaluation is relevant for programs intended for individuals, aggregates, populations, and service delivery systems, which reinforces the need to tailor program plans and evaluation designs to the level at which the program is conceptualized. Using the pyramid also helps health professionals begin to value their own and others’ contributions within and across the levels and to transcend disciplinary boundaries.

The second unique feature of this text is that one conceptual model of program planning and evaluation is used throughout: the program theory. The program theory is like a curricular strand, connecting content across the chapters and activities throughout the planning and evaluation cycle. The program theory, as a conceptual model, is composed of elements.

Articulating each of the component elements of the program theory sharpens the student’s awareness of what must be addressed to create an effective health program. One element of the program theory is the effect theory, which focuses on how the intervention results in the program effects. The effect theory had its genesis in the concepts of action and intervention hypotheses described by Rossi and Freeman; those concepts were dropped from later editions of their text. We believe these authors were onto something with their effort to elucidate the various pathways leading from a problem to an effect of the program. Rossi and colleagues’ ideas have been updated with the language of moderating and mediating factors and an emphasis on the intervention mechanisms.

Throughout the current edition of this textbook, emphasis is given to the effect theory portion of the program theory. The effect theory describes relationships among health antecedents, causes of health problems, program interventions, and health effects. The hypotheses that comprise the effect theory need to be understood and explicated to plan a successful health program and to evaluate the “right” elements of the program. The usefulness of the effect theory throughout the planning and evaluation cycle is highlighted throughout this text; for example, the model is used as a means of linking program theory to evaluation designs and data collection. The model becomes an educational tool by serving as an example of how the program theory is manifested throughout the stages of planning and evaluation, and by reinforcing the value of carefully articulating the causes of health problems and the consequences of programmatic interventions. Students and novice program planners may have an intuitive sense of the connection between their actions and outcomes, but they may not know how to articulate those connections in ways that program stakeholders can readily grasp. The effect theory and the process theory—the other main element of the program theory—provide a basis from which to identify and describe these connections.

 

 


The third unique feature of this text is the intentional role modeling of evidence-based practice. Use of published, empirical evidence as the basis for practice—whether clinical practice or program planning practice—is the professional standard. Each chapter of this book contains substantive examples drawn from the published scientific health and health-related literature. Relying on the literature for examples of programs, evaluations, and issues is consistent with the espoused preference for using scientific evidence as the basis for making programmatic decisions. Each chapter offers multiple examples from the health sciences literature that substantiate the information presented in the chapter.

▸ Organization of the Book

The book is organized into six sections, each covering a major phase in the planning and evaluation cycle. Chapter 1 introduces the fictitious city of Layetteville and the equally fictitious Bowe County. In subsequent chapters, chapter content is applied to the health problems of Layetteville and Bowe County so that students can learn how to use the material on an ongoing basis. In several chapters, the case study is used in the “Discussion Questions and Activities” section to provide students with an opportunity to practice applying the chapter content. In recognition of the availability of parts of the text in digital format, each use of the Layetteville case stands on its own in reference to the chapter’s content.

Section I explores the context in which health programs and evaluations occur. Chapter 1 begins with an overview of definitions of health, followed by a historical context. The public health pyramid is introduced and presented as an ecological framework for thinking of health programs. An overview of community is provided and discussed as both the target and the context of health programs. The role of community members in health programs and evaluations is introduced, and emphasis is given to community as a context and to strategies for community participation throughout the program development and evaluation process. Chapter 2 focuses on the role of diversity in the planning and evaluation cycle and its effects on the delivery and evaluation of health programs. Although a discussion of diversity-related issues could have been added to each chapter, the sensitive nature of this topic and its importance in ensuring a successful health program warranted covering it early in the text and as a separate chapter. Cultural competence is discussed, particularly with regard to the organization providing the health program and with regard to the program staff members.

Section II contains two chapters that focus on the task of defining the health problem. Chapter 3 covers planning perspectives and the history of health program planning. Effective health program developers understand that approaches to planning are based on assumptions. These assumptions are exemplified in six perspectives that provide points of reference for understanding diverse preferences for prioritizing health needs and expenditures, and therefore for tailoring planning actions to best fit the situation. Chapter 3 also reviews perspectives on conducting a community needs assessment as foundational to decision making about the future health program. Essential steps involved in conducting a community health and needs assessment are outlined as well.

Chapter 4 expands on key elements of a community needs assessment, beginning with a review of the data collection methods appropriate for a community needs assessment. This discussion is followed by a brief overview of key epidemiological statistics. Using those statistics and the data, the reader is guided through the process of developing a causal statement of the health problem. This causal statement, which includes the notion of moderating and mediating factors in the pathway from causes to problem, serves as the basis for the effect theory of the program. Once the causal statement has been developed, prioritization of the problem is needed; four systems for prioritizing in a rational manner are reviewed in Chapter 4.

Following prioritization comes planning, beginning with the decision of how to address the health problem. In many ways, the two chapters in Section III form the heart of planning a successful health program. Unfortunately, students generally undervalue the importance of theory for selecting an effective intervention and of establishing target values for objectives. Chapter 5 explains what theory is and how it provides a cornerstone for programs and evaluations. More important, the concept of intervention is discussed in detail, with attention given to characteristics that make an intervention ideal, including attention to intervention dosage. Program theory is introduced in Chapter 5 as the basis for organizing ideas related to the selection and delivery of the interventions. The effect theory element of the program theory is introduced, and the components of the effect theory are explained. Because the effect theory is so central to having an effective program intervention and the subsequent program evaluation, it is discussed in conjunction with several examples from the Layetteville and Bowe County case.

Chapter 6 goes into detail about developing goals and objectives for the program, with particular attention devoted to articulating the interventions provided by the program. A step-by-step procedure is presented for deriving numerical targets for the objectives from existing data, which makes the numerical targets more defensible and programmatically realistic. We focus on distinguishing between process objectives and outcome objectives through the introduction of two mnemonics: TAAPS (Time frame, Amount of what Activities done by which Participants/program Staff) and TREW (Time frame, what portion of Recipients experience what Extent of Which type of change).

Section IV deals with the task of implementing a health program. Chapter 7 provides an in-depth review of key elements that constitute the process theory element of the program theory—specifically, the organizational plan and services utilization plan. The distinction between inputs and outputs of the process theory is highlighted through examples and a comprehensive review of possible inputs and outputs. Budgeting for program operations is covered in this chapter as well. Chapter 8 is devoted entirely to fiscal data systems, including key aspects of budgeting, and informatics. Chapter 9 details how to evaluate the outputs of the organizational plan and the services utilization plan. The practical application of measures of coverage is described, along with the need to connect the results of the process evaluation to programmatic changes. Program management for assuring a high-quality program that delivers the planned intervention is the focus of Chapter 10.

Section V contains chapters that are specific to conducting the effect evaluations. These chapters present both basic and advanced research methods from the perspective of a program effect evaluation. Here, students’ prior knowledge about research methods and statistics is brought together in the context of health program and services evaluation. Chapter 11 highlights the importance of refining the evaluation question and provides information on how to clarify the question with stakeholders. Earlier discussions about program theory are brought to bear on the development of the evaluation question. Key issues, such as data integrity and survey construction, are addressed with regard to the practicality of program evaluation. Chapter 12 takes a fresh approach to evaluation design by organizing the traditional experimental, quasi-experimental, and epidemiological designs into three levels of program evaluation design based on the design complexity and purpose of the evaluation. The discussion of sampling in Chapter 13 retains the emphasis on practicality for program evaluation rather than taking a pure research approach. However, sample size and power are discussed because these factors have profound relevance to program evaluation. Chapter 14 reviews statistical analysis of data, with special attention to variables from the effect theory and their level of measurement. The data analysis is linked to interpretation, and students are warned about potential flaws in how numbers are understood. Chapter 15 provides a review of qualitative designs and methods, especially their use in health program development and evaluation.

The final section, Section VI, includes just one chapter. Chapter 16 discusses the use of evaluation results when making decisions about existing and future health programs. Practical and conceptual aspects of the ethical issues that program evaluators face are addressed. This chapter also reviews ways to assess the quality of evaluations and the professional responsibilities of evaluators.

Each chapter in the book concludes with a “Discussion Questions and Activities” section. The questions posed are intended to be provocative and to generate critical thinking. At the graduate level, students need to be encouraged to engage in independent thinking and to foster their ability to provide rationales for decisions. The discussion questions are developed from this point of view. In the “Internet Resources” section, links are provided to websites that support the content of the chapter. These websites have been carefully chosen as stable and reliable sources.

▸ Additions to and Revisions in the Fourth Edition

The fourth edition of Health Program Planning and Evaluation represents continuous improvement, with corrections and updated references. Classical references and references that remain state of the art have been retained.

The Fourth Edition has retained the original intent—namely, to provide students with the ability to describe a working theory of how the intervention acts upon the causes of the health problem and leads to the desired health results. Some content has been condensed in order to allow enough room to describe current evaluation approaches adequately for both new and experienced practitioners. For instance, Chapter 1 now includes participatory evaluations in addition to outcome- and utilization-focused evaluations. In addition to disciplines traditionally recognized in Western medical care, Chapter 2 now includes acupuncture and massage therapy as examples of health professional diversity. Discussion of the nuances of cultural competency has been refined, in light of the continuing importance and challenges of this area. Community strengths have been given more attention in Chapter 3 in recognition of the powerful potential of shifting from a “deficit-based” to an “asset-based” perspective on health planning. Chapter 4 now devotes greater attention to the health evaluation potential of data from social media such as Facebook and Twitter, as well as geospatial data, including attendant concerns about privacy, and also notes implications of the increasingly prevalent public rankings of community health status. Examples of infrastructure-level interventions within the public health pyramid have been added in Chapter 5. Discussion of financial modeling options in Chapter 8 now includes simulation modeling, an exciting if also resource-intensive alternative to conducting real-world experiments, which are, of course, inevitably expensive themselves. Chapters 9 and 15 include emerging data collection techniques such as participant self-reports, video, photos, and audio recordings that may make public health evaluation more inclusive of the people such interventions seek to serve. Chapter 13 includes updates on surveying, reflecting the decreased numbers of people with land-line phones, long a mainstay of health evaluations. Options for online surveying have been updated in Chapter 14; given the rapid evolution of big data such as those available from social media, billing, and medical records, discussion of this topic has been updated in Chapter 13 as well. Finally, Chapter 16 now includes bioethics—the application of ethical and philosophical principles to medical decision making—as an increasingly salient component of responsible health evaluation.

In sum, we have worked hard to sustain this book’s conceptual and empirical rigor and currency in the Fourth Edition while maintaining accessibility for a range of health evaluators. Above all, we hope this book is useful to our readers’ vitally important efforts to improve health.

 

 

 


© Lynne Nicholson/Shutterstock

Acknowledgments

We are indebted to the many people who supported and aided us in preparing this fourth edition of Health Program Planning and Evaluation: A Practical, Systematic Approach for Community Health. We remain grateful to the numerous students over the years who asked questions that revealed the typical sticking points in acquiring and understanding the concepts and content, as well as where new explanations were needed. Through their eyes we have learned there is no one way to explain a complex notion or process. Their interest in and enthusiasm for planning and evaluating health programs were a great motivator for writing this book.

Several additional colleagues helped fine-tune this text. We are especially indebted to Arden Handler at the School of Public Health, University of Illinois at Chicago, for taking time to contribute to this textbook. Her devotion to quality and clarity has added much to the richness of otherwise dry material. We remain deeply indebted to Deborah Rosenberg, also at the School of Public Health, University of Illinois at Chicago, for sharing her innovative and quintessentially useful work on developing targets for program objectives. Special thanks as well to Joseph Chen, at the University of Texas School of Public Health, for his many contributions to updating the literature cited across many chapters and for his contribution on big data. Last, but not least, thanks to Mike Brown, publisher at Jones & Bartlett Learning, for his encouragement and patience over the years.

 

 

 


List of Acronyms

ABCD Asset-based community development
ACA Affordable Care Act
AEA American Evaluation Association
AHRQ Agency for Healthcare Research and Quality
ANOVA Analysis of variance
APHA American Public Health Association
BPRS Basic priority rating system
BRFSS Behavioral Risk Factor Surveillance System
BSC Balanced Score Card
CAHPS Consumer Assessment of Health Plans
CARF Commission on Accreditation of Rehabilitation Facilities
CAST-5 Capacity Assessment of Title V
CBA Cost–benefit analysis
CBPR Community-based participatory research
CDC Centers for Disease Control and Prevention
CEA Cost-effectiveness analysis
CER Cost-effectiveness ratio
CFIR Consolidated Framework for Implementation Research
CFR Code of Federal Regulations
CHIP Community Health Improvement Process
CI Confidence interval
CPT Current Procedural Terminology
CQI Continuous quality improvement
CUA Cost–utility analysis
DALY Disability-adjusted life-year
DHHS U.S. Department of Health and Human Services
DSM-5 Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition
EBM Evidence-based medicine
EBP Evidence-based practice
EHR Electronic health record
EMR Electronic medical record
FTE Full-time equivalent
GAO U.S. Government Accountability Office
GNP Gross national product
GPRA Government Performance and Results Act
HEDIS Healthcare Effectiveness Data and Information Set
HIPAA Health Insurance Portability and Accountability Act
HIT Health information technology
HMOs Health maintenance organizations
HRQOL Health-related quality of life
HRSA Health Resources and Services Administration (part of DHHS)
i-APP Innovation–Adolescent Preventing Pregnancy (Program)
ICC Intraclass correlation
IRB Institutional review board
JCAHO Joint Commission on the Accreditation of Healthcare Organizations
MAPP Mobilizing for Action through Planning and Partnership
MBO Management by objectives
MCHB Maternal and Child Health Bureau (part of HRSA)
NACCHO National Association of County and City Health Officials
NAMI National Alliance on Mental Illness
NCHS National Center for Health Statistics
NCQA National Committee for Quality Assurance
NFPS National Family Planning Survey
NHANES National Health and Nutrition Examination Survey
NHIS National Health Interview Survey
NIH National Institutes of Health
NPHPS National Public Health Performance Standards
OHRP Office for Human Research Protections
OMB Office of Management and Budget
OR Odds ratio
PACE-EH Protocol for Assessing Community Excellence in Environmental Health
PAHO Pan American Health Organization
PDCA Plan-Do-Check-Act
PEARL Propriety, economic, acceptability, resource, legality system
PERT Program Evaluation and Review Technique
PPIP Putting Prevention into Practice
PRECEDE Predisposing, Reinforcing, and Enabling Factors in Community Education Development and Evaluation (model)
PSA Public service announcement
QALY Quality-adjusted life-year
RAR Rapid assessment and response
RARE Rapid assessment, response, and evaluation
RE-AIM Reach, Effectiveness, Adoption, Implementation, and Maintenance model
RR Relative risk
SAMHSA Substance Abuse and Mental Health Services Administration
SCHIP State Children's Health Insurance Program
SES Socioeconomic status
SMART Specific, measurable, achievable, realistic, and time-framed (objective)
TAAPS Time frame, Amount of what Activities done by which Participants/program Staff
TQM Total quality management
TREW Time frame, what portion of Recipients experience what Extent of Which type of change
UOS Units of service
WHO World Health Organization
WIC Special Supplemental Nutrition Program for Women, Infants, and Children
YHL Years of healthy life
YLL Years of life lost
YPLL Years of potential life lost

 

 

SECTION I

The Context of Health Program Development


 

 

 


CHAPTER 1

Context of Health Program Development and Evaluation

Health is not a state of being that can easily be achieved through isolated, uninformed, individualistic actions. Health of individuals, of families, and of populations is a state in which physical, mental, and social well-being are integrated to enable optimal functioning. From this perspective, achieving and maintaining health across a life span is a complex, complicated, intricate affair. For some, health is present irrespective of any special efforts or intention. For most of us, health requires, at a minimum, some level of attention and specific information. It is through health programs that attention is given focus and information is provided or made available, but that does not guarantee that the attention and information are translated into actions or behaviors needed to achieve health. Thus, those providing health programs, however large or small, need to understand both the processes whereby those in need of attention and health information can receive what is needed, and also the processes by which to learn from the experience of providing the health program.

The processes and effects of health program planning and evaluation are the subjects of this text. The discussion begins here with a brief overview of the historical context. This background sets the stage for appreciating the considerable number of publications on the topic of health program planning and evaluation, and for acknowledging the professionalization of evaluators. The use of the term processes to describe the actions involved in health program planning and evaluation is intended to denote action, cycles, and open-endedness. This chapter introduces the planning and evaluation cycle, and the interactions and iterative nature of this cycle are stressed throughout the text. Because health is an individual, aggregate, and population phenomenon, health programs need to be conceptualized across those levels. The public health pyramid, introduced in this chapter, is used throughout the text as a tool for conceptualizing and actualizing health programs for individuals, aggregates, and populations.

 

 


▸ History and Context

An appropriate starting point for this text is reflecting on and understanding what “health” is, along with having a basic appreciation for the genesis of the fields of health program planning and evaluation. A foundation in these elements is key to becoming an evaluation professional.

Concept of Health

Beginning the health program planning and evaluation cycle requires first reflecting on the meaning of health. Both explicit and implicit meanings of health can dramatically influence what is considered the health problem and the subsequent direction of a program. The most widely accepted definition of health is that put forth by the World Health Organization (WHO), which for the first time defined health as more than the absence of illness and as the presence of well-being (WHO, 1947).

Since the publication of the WHO definition, health has come to be viewed across the health professions as a holistic concept that encompasses the presence of physical, mental, developmental, social, and financial capabilities, assets, and balance. This idea does not preclude each health profession from having a particular aspect of health to which it primarily contributes. For example, a dentist contributes primarily to a patient’s oral health, knowing that the state of the patient’s teeth and gums has a direct relationship to his or her physical and social health. Thus the dentist might say that the health problem is caries. The term health problem is used, rather than illness, diagnosis, or pathology, in keeping with the holistic view that there can be problems, deficits, and pathologies in one component of health while the other components remain “healthy.” Using the term health problem also makes it easier to think about and plan health programs for aggregates of individuals. A community, a family, and a school can each have a health problem that is the focus of a health program intervention. The extent to which the health program planners have a shared definition of health and have defined the scope of that definition influences the nature of the health program.

Health is a matter of concern for more than just health professionals. For many Americans, the concept of health is perceived as a right, along with civil rights and liberties. The right to health is often translated by the public and politicians into the perceived right to have or to access health care. This political aspect of health is the genesis of health policy at the local, federal, and international levels. The extent to which the political nature of health underlies the health problem of concern being programmatically addressed also influences the final nature of the health program.

Health Programs, Projects, and Services

What distinguishes a program from a project or from a service can be difficult to explain, given the fluidity of language and terms. The term program is fairly generic but generally connotes a structured effort to provide a specific set of services or interventions. In contrast, a project often refers to a time-limited or experimental effort to provide a specific set of services or interventions through an organizational structure. In the abstract, a service can be difficult to define but generally includes interaction between provider and client, an intangibility aspect to what is provided, and a nonpermanence or transitory nature to what is provided. Using this definition of service, it is easy to see that what is provided in a health program qualifies as a service, although it may not be a health service.

A health program is the totality of an organized structure designed for the provision of a fairly discrete health-focused intervention, where that intervention is designed for a specific target audience. By comparison, health services are the organizational structures through which providers interact with clients or patients to meet the needs or address the health problems of the clients or patients. Health programs, particularly in public health, tend to provide educational services, have a prevention focus, and deliver services that are aggregate or population focused. In contrast, health services exist exclusively as direct services. Recognizing the distinction between health programs and health services is important for understanding the corresponding unique planning and evaluation needs of each.

History of Health Program Planning and Evaluation

The history of planning health programs has a different lineage than that of program evaluation. Only relatively recently, in historical terms, have these lineages begun to overlap, with resulting synergies. Planning for health programs has the older history, if public health is considered. Rosen (1993) argued that public health planning began approximately 4,000 years ago with planned cities in the Indus Valley that had covered sewers. Particularly since the Industrial Revolution, planning for the health of populations has progressed, and it is now considered a key characteristic of the discipline of public health.

Blum (1981) related planning to efforts undertaken on behalf of the public well-being to achieve deliberate or intended social change, as well as to provide a sense of direction and alternative modes of proceeding to influence social attitudes and actions. Others (Dever, 1980; Rohrer, 1996; Turnock, 2004) have similarly defined planning as an intentional effort to create something that has not occurred previously for the betterment of others and for the purpose of meeting desired goals. The purpose of planning is to ensure that a program has the best possible likelihood of being successful, defined in terms of being effective with the least possible resources. Planning encompasses a variety of activities undertaken to meet this purpose.

The quintessential example of planning is the development and use of the Healthy People goals. In 1979, Healthy People (U.S. Department of Health, Education, and Welfare [DHEW], 1979) was published as an outgrowth of the need to establish an illness prevention agenda for the United States. The companion publication, Promoting Health/Preventing Disease (U.S. Department of Health and Human Services [DHHS], 1980), marked the first time that goals and objectives regarding specific areas of the nation’s health were made explicit, with the expectation that these goals would be met by the year 1990. Healthy People became the framework for the development of state and local health promotion and disease prevention agendas. Since its initial publication, the U.S. goals for national health have been revised and published as Healthy People 2000 (DHHS, 1991), Healthy Communities 2000 (American Public Health Association [APHA], 1991), Healthy People 2010 (DHHS, 2000), and Healthy People 2020 (DHHS, 2011), with development of Healthy People 2030 underway. Other nations also set health status goals, and international organizations, such as the World Health Organization (WHO) and the Pan American Health Organization (PAHO), develop health goals applicable across nations.

The evolution of the Healthy People goals also reflects the accelerating emphasis on nationwide coordination of health promotion and disease prevention efforts and a reliance on systematic planning to achieve this coordination. The development of the Healthy People publications also reflects the underlying assumption that planning is a rational activity that can lead to results. However, at the end of each 10-year cycle, many of the U.S. health objectives were not achieved, reflecting the potential for planning to fail. Given this failure potential, this text emphasizes techniques to help future planners of health programs to be more realistic in setting goals and less dependent upon a linear, rational approach to planning.

The Healthy People 1990 objectives were developed by academics and clinician experts in illness prevention and health promotion. In contrast, development of the goals and health problems listed in Healthy People 2010 and Healthy People 2020 incorporated ideas generated at public forums and through Internet commen- tary; these ideas later were revised and refined by expert panels before final publication of the

 

 

6 Chapter 1 Context of Health Program Development and Evaluation

as the basis for evaluation. Second-generation evaluations were predominantly descriptive. With the introduction in the 1960s of broad innovation and initiation of federal social service programs, including Medicare, Medicaid, and Head Start, the focus of evaluations shifted to establishing the merit and value of the programs. Because of the political issues surrounding these and similar federal programs, determining whether the social policies were having any effect on people become a priority. Programs needed to be judged on their merits and effectiveness. The U.S. General Accounting Office (GAO; now called the Government Accountability Office) had been established in 1921 for the purpose of studying the utilization of public finances, assist- ing Congress in decision making with regard to policy and funding, and evaluating government programs. The second-generation evaluation emphasis on quantifying effects was spurred, in part, by reports from the GAO that were based on the evaluations of federal programs.

Typically, the results of evaluations were not used in the “early” days of evaluating education and social programs. That is, federal health policy was not driven by whether evaluations showed the programs to be successful. Although the scientific rigor of evaluations improved, their usefulness remained minimal. Beginning in the 1980s, however, the third generation of evaluations—termed “the negotiation generation” or “the responsiveness generation”—began. During this generation, evaluators began to acknowledge that they were not autonomous and that their work needed to respond to the needs of those being evaluated. As a result of this awareness, several lineages have emerged. These lineages within the responsiveness generation account for the current diversity in types, emphases, and philosophies related to program evaluation.

One lineage is utilization-focused evaluation (Patton, 2012), in which the evaluator’s primary concern is with developing an evaluation that will be used by the stakeholders. Utilization-focused evaluations are built on the following premises (Patton, 1987): Concern for use of the evaluation pervades the evaluation from beginning to end;

objectives. Greater participation of the public during the planning stage of health programs has become the norm. In keeping with the emphasis on participation, the role and involvement of stakeholders are stressed at each stage of the planning and evaluation cycle.

The history of evaluation, from which the evaluation of health programs grew, is far shorter than the history of planning, beginning roughly in the early 1900s, but it is equally rich in important lessons for future health program evaluators. The first evaluations were done in the field of education, particularly as student assessment and evaluation of teaching strategies gained interest (Patton, 2008). Assessment of student scholastic achievement is a comparatively circumscribed outcome of an educational intervention. For this reason, early program evaluators came from the discipline of education, and it was from the fields of education and educational psychology that many methodological advances were made and statistics developed.

Guba and Lincoln (1987) summarized the history of evaluations by proposing generational milestones or characteristics that typify distinct generations. Later, Swenson (1991) built on their concept of generations by acknowledging that subsequent generations of evaluations will occur. Each generation incorporates the knowledge of early evaluations and extends that knowledge based on current broad cultural and political trends.

Guba and Lincoln (1987) called the first generation of evaluations in the early 1900s “the technical generation.” During this time, nascent scientific management, statistics, and research methodologies were used to test interventions. Currently, evaluations continue to incorporate the rationality of this generation by using activities that are systematic, science based, logical, and sequential. Rational approaches to evaluations focus on identifying the best-known interven- tion or strategy given the current knowledge, measuring quantifiable outcomes experienced by program participants, and deducing the degree of effect from the program.

The second generation, which lasted until the 1960s, focused on using goals and objectives

 

 

History and Context 7

evaluations done across similar programs. This trend in program evaluation parallels the trend in social science toward using meta-analysis of existing studies to better understand theorized relationships and the trend across the health professions toward establishing evidence-based practice guidelines. This new generation be- came possible because of a pervasive culture of evaluation in the health services and because of the availability of huge data sets for use in the meta-evaluations. An early example of the evaluation culture was the mandate from United Way, a major funder of community-based health programs, for their grantees to conduct outcome evaluations. To help grantees meet this mandate, United Way published a user-friendly manual (United Way of America, 1996) that could be used by nonprofessionals in the development of basic program evaluations. More broadly, the culture of evaluation can be seen in the explicit requirement of federal agencies that fund community-based health programs that

evaluations are aimed at the interests and needs of the users; users of the evaluation must be in- vested in the decisions regarding the evaluation; and a variety of community, organizational, political, resource, and scientific factors affect the utilization of evaluations. Utilization-focused evaluation differs from evaluations that are focused exclusively on outcomes

Another lineage is participatory evaluation (Whitmore, 1998), in which the evaluation is merely guided by the expert and is actually gen- erated by and conducted by those invested in the health problem. A participatory or empowerment approach invites a wide range of stakeholders into the activity of planning and evaluation, providing those participants with the skills and knowledge to contribute substantively to the activities and fostering their sense of ownership of the product (TABLE 1-1).

The fourth generation of evaluation, which emerged in the mid-1990s, seems to be meta-evaluation, that is, the evaluation of

TABLE 1-1 Comparison of Outcome-Focused, Utilization-Focused, and Participatory Focused Evaluations

Outcome-Focused Evaluations

Utilization-Focused Evaluations

Participatory Focused

Evaluations

Purpose Show program effect Get stakeholders to use evaluation-findings for decisions regarding program improvements and future program development

Involve the stakeholders in designing programs and evaluations, and utilizing findings

Audience Funders, researchers, other external audience

Program people (internal audience), funders

Those directly concerned with the health problem and program

Method Research methods, external evaluators (usually)

Research methods, participatory

Research methods as implemented by the stakeholders

 

 

8 Chapter 1 Context of Health Program Development and Evaluation

serves evaluators primarily in the United States. Several counterparts to the AEA exist, such as the Society for Evaluation in the United King- dom and the Australian Evaluation Society. The establishment of these professional orga- nizations, whose members are evaluators, and the presence of health-related sections within these organizations demonstrate the existence of a field of expertise and of specialized knowl- edge regarding the evaluation of health-related programs.

As the field of evaluation has evolved, so have the number and diversity of approaches that can guide the development of evaluations. Currently, 23 different approaches to evaluation have been identified, falling into 3 major groups (Stufflebeam  & Coryn, 2014). One group of evaluations is oriented toward questions and methods such as objectives-based studies and experimental evaluations. The second group of evaluations is oriented toward improvements and accountability and includes consumer-oriented and accreditation approaches. The third group of evaluations includes those that have a social agenda or advocacy approach, such as respon- sive evaluations, democratic evaluations, and utilization-focused evaluation. They also acknowl- edge pseudo-evaluations and quasi-evaluations as distinct groups, reflecting the continuing evolution of the field of evaluation.

Several concepts are common across the types of evaluations—namely, pluralism of values, stakeholder constructions, fairness and equity regarding stakeholders, the merit and worth of the evaluation, a negotiated process and outcomes, and full collaboration. These concepts have been formalized into the standards for evaluations that were established by the Joint Commission on Standards for Educational Evaluation in 1975 (American Evaluation Association, 2011). Currently, this Joint Commission includes many organizations in its membership, such as the American Evaluation Association and the American Educational Research Association.

The five standards of evaluation established by the American Evaluation Association are utility, feasibility, propriety, accuracy, and evaluation

such programs include evaluations conducted by local evaluators.

Most people have an intuitive sense of what evaluation is. The purpose of evaluation can be to measure the effects of a program against the goals set for it and thus to contribute to subsequent decision making about the program (Weiss, 1972). Alternatively, evaluation can be defined as “the use of social research methods to systematically investigate the effectiveness of social intervention programs in ways that are adapted to their political and organizational environments and are designed to inform social action to improve social conditions” (Rossi, Lipsey, & Freeman, 2004 , p. 16). Others (Herman, Morris, & Fitz-Gibbon, 1987) have defined evaluation as judging how well policies and procedures are working or as assessing the quality of a program. These definitions of evaluation all remain relevant.

Inherently these definitions of evaluation have an element of being judged against some criteria. This implicit understanding of evaluation leads those involved with the health program to feel as though they will be judged or found not to meet those criteria and will subsequently experience some form of repercussions. They may fear that they as individuals or as a program will be labeled a failure, unsuccessful, or inadequate. Such feel- ings must be acknowledged and addressed early in the planning cycle. Throughout the planning and evaluation cycle, program planners have numerous opportunities to engage and involve program staff and stakeholders in the evaluation process. Taking advantage of these opportuni- ties goes a long way in alleviating the concerns of program staff and stakeholders about the judgmental quality of the program evaluation.

▸ Evaluation as a Profession

A major development in the field of evaluation has been the professionalization of evaluators. The American Evaluation Association (AEA) has endorsed standards for evaluation in five areas: utility, feasibility, propriety, accuracy, and evaluation accountability (TABLE 1-2; American Evaluation Association, 2011).

The utility standard specifies that an evaluation must be useful to those who requested the evaluation. A useful evaluation shows ways to make improvements to the intervention, increase the efficiency of the program, or enhance the possibility of garnering financial support for the program. The feasibility standard denotes that the ideal may not be practical. Evaluations that are highly complex or costly will not be done by small programs with limited capabilities and resources. Propriety is the ethical and politically correct component of the standards. Evaluations can invade privacy or be harmful to either program participants or program staff members. The propriety standard also holds evaluators accountable for upholding all of the other standards. Accuracy is essential and is achieved through the elements that constitute scientific rigor. These established and accepted standards for evaluations reflect current norms and values held by professional evaluators and deserve attention in health program evaluations. The existence and acceptance of standards truly indicate the professionalism of evaluators.

Achieving these standards requires that those involved in the program planning and evaluation have experience in at least one aspect of planning or evaluation, whether that is experience with the health problem; experience with epidemiological, social, or behavioral science research methods; or skill in facilitating processes that involve diverse constituents, capabilities, and interests. Program planning and evaluation can be done in innumerable ways, with no single "right way." This degree of freedom and flexibility can feel uncomfortable for some people. As with any skill or activity, until they have experience, program planners and evaluators may feel intimidated by the size of the task or by the experience of others involved. To become a professional evaluator, therefore, requires a degree of willingness to learn, to grow, and to be flexible.

TABLE 1-2 Evaluation Standards Established by the Joint Committee on Standards for Educational Evaluation

Standard: Description

Utility: To increase the extent to which program stakeholders find evaluation processes and products valuable in meeting their needs.

Feasibility: To increase evaluation effectiveness and efficiency.

Propriety: To support what is proper, fair, legal, right, and just in evaluations.

Accuracy: To increase the dependability and truthfulness of evaluation representations, propositions, and findings, especially those that support interpretations and judgments about quality.

Evaluation accountability: To encourage adequate documentation of evaluations and a meta-evaluative perspective focused on improvement and accountability for evaluation processes and products.

Data from American Evaluation Association (2012).
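Evaluators sometimes operationalize standards like these as a review checklist when drafting an evaluation plan. The short sketch below, in Python, shows one way such a checklist might be structured; the standard names come from Table 1-2, but the checklist questions, function, and data structure are illustrative assumptions, not part of the Joint Committee standards themselves.

```python
# Illustrative sketch: the five evaluation standards as a review checklist.
# Only the standard names are from Table 1-2; everything else is hypothetical.

STANDARDS = {
    "utility": "Will stakeholders find the processes and products valuable?",
    "feasibility": "Is the evaluation effective and efficient given its resources?",
    "propriety": "Is the evaluation proper, fair, legal, right, and just?",
    "accuracy": "Are representations, propositions, and findings dependable?",
    "evaluation accountability": "Is the evaluation documented and open to meta-evaluation?",
}

def review_plan(answers):
    """Return the standards whose checklist question was not answered 'yes'.

    `answers` maps a standard name to True (met) or False (not met).
    Standards missing from `answers` are treated as unreviewed gaps.
    """
    return [name for name in STANDARDS if not answers.get(name, False)]

# Example: a plan that meets every standard except feasibility.
plan = {name: True for name in STANDARDS}
plan["feasibility"] = False  # e.g., the design is too costly for a small program
print(review_plan(plan))  # -> ['feasibility']
```

A dictionary keyed by standard name keeps the checklist easy to extend if an organization adds its own review criteria alongside the five standards.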


10 Chapter 1 Context of Health Program Development and Evaluation

Who Does Planning and Evaluations?

Many different types of health professionals and social scientists can be involved in health program planning and evaluation. At the outset of program planning and evaluation, some trepidation revolves around who ought to be the planners and evaluators. In a sense, almost anyone with an interest and a willingness to be an active participant in the planning or evaluation process could be involved, including health professionals, businesspersons, paraprofessionals, and advocates or activists.

Planners and evaluators may be employees of the organization about to undertake the activity, or they may be external consultants hired to assist in all phases or just a specific phase of the planning and evaluation cycle. Internal and external planners and evaluators each have their advantages and disadvantages. Regardless of whether an internal or external evaluator is used, professional stakes and allegiances ought to be acknowledged and understood as factors that can affect decision making.

Planners and evaluators from within the organization are susceptible to biases, consciously or not, in favor of the program or some aspect of the program, particularly if their involvement can positively affect their work. On the positive side, internal planners and evaluators are more likely to have insider knowledge of organizational factors that can be utilized or may have a positive effect on the delivery and success of the health program. Internal evaluators may, however, experience divided loyalties, such as between the program and their job, between the program staff members and other staff, or between the proposed program or evaluation and their view of what would be better.

One source of internal evaluators is members of quality improvement teams, particularly if they have received training in program development or evaluation as they relate to quality improvement. The use of total quality management (TQM), continuous quality improvement (CQI), and other quality improvement methodologies by healthcare organizations and public health agencies can be integral to achieving well-functioning programs.

External evaluators can bring a fresh perspective and a way of thinking that generates alternatives not currently in the agencies' repertoire of approaches to the health problem and program evaluation. Compared to internal evaluators, external evaluators are less likely to be biased in favor of one approach, unless, of course, they were chosen for their expertise in a particular area, which would naturally bias their perspective to some extent. External program planners and evaluators, however, can be expensive consultants. Organizations that specialize in health program evaluations serve as one category of external evaluator. These research firms receive contracts to evaluate health program initiatives and conduct national evaluations that require sophisticated methodology and considerable resources.

The question of who does evaluations also can be answered by looking at who funds health program evaluations. From this perspective, organizations that do evaluations as a component of their business are the answer to the question, Who does evaluations? Although most funding agencies prefer to fund health programs rather than stand-alone program evaluations, some exceptions exist. For example, the Agency for Healthcare Research and Quality (AHRQ) funds health services research about the quality of medical care, which is essentially effect evaluation research. Other federal agencies, such as the National Institutes of Health and the bureaus within the Department of Health and Human Services, fund evaluation research on pilot health programs. However, the funding priorities of these federal agencies change to be consistent with federal health policy. This is a reminder that the organizations funding and conducting health program evaluations evolve over time.

Roles of Evaluators

Evaluators may be required to take on various roles, given that they are professionals involved in a process that very likely involves others. For example, as the evaluation takes on a sociopolitical character, the evaluators become mediators and change agents. If the evaluation is a learning–teaching process, evaluators become both teacher and student of the stakeholders. To the extent that the evaluation is a process that creates a new reality for stakeholders, program staff members, and program participants, evaluators are reality shapers. Sometimes the evaluation may have an unpredictable outcome; at such times, evaluators are human instruments that gauge what is occurring and analyze events. Ideally, evaluations are a collaborative process, and evaluators act as collaborators with the stakeholders, program staff members, and program participants. If the evaluation takes the form of a case study, the evaluators may become illustrators, historians, and storytellers.

These are but a few examples of how the roles of the professional program evaluator evolve and emerge from the situation at hand. The individual's role in the planning and evaluation activities may not be clear at the time the project is started. Roles will develop and evolve as the planning and evaluation activities progress.

▸ Planning and Evaluation Cycle

Although planning and evaluation are commonly described in a linear, sequential manner, they actually constitute a cyclical process. In this section, the cycle is described, with an emphasis on factors that enhance and detract from the effectiveness of that process.

Interdependent and Cyclic Nature of Planning and Evaluation

A major premise running through current thinking about programs and evaluation is that the activities constituting program planning and program evaluation are cyclical and interdependent (FIGURE 1-1) and that the activities occur more or less in stages, or sets of activities. The stages are cyclical to the extent that the end of one program or stage flows almost seamlessly into the next program or planning activity. The activities are interdependent to the extent that the learning, insights, and ideas that result at one stage are likely to influence the available information and thus the decision making and actions of another stage. Interdependence of activities and stages ideally results from information and data feedback loops that connect the stages.

[FIGURE 1-1 The Planning and Evaluation Cycle. Stages shown: statement of the health problems; assessment of community needs and assets; priorities established; program and evaluation planning; process theory and effect theory delineated; evaluation design and methodology; program implementation of process and effect theories; process evaluation; effects evaluation; intervention effect; participant/recipient health outcome and impact; health status changes; findings from the evaluation; health program planning.]

Naturally, not all of the possible interactions among program planning, implementation, and evaluation are shown in Figure 1-1. In reality, the cyclical or interactive nature of health program planning and evaluation exists in varying degrees. In the ideal, interactions, feedback loops, and reiterations of process would be reflected throughout this text. For the sake of clarity, however, the cycle is presented in a linear fashion in the text, with steps and sequences covered in an orderly fashion across the progression of chapters. This pedagogical approach belies the true messiness of health program planning and program evaluation. Because the planning and evaluation cycle is susceptible to and affected by external influences, success as a program planner or evaluator requires a substantial degree of flexibility and creativity in recovering from those influences.

The cycle begins with a trigger event, such as awareness of a health problem; a periodic strategic planning effort; a process required by a stakeholder, such as a 5-year strategic planning process or a grant renewal; or newly available funds for a health program. An indirect trigger for planning could be information generated from an evaluation that reveals the failure of a health program, the extraordinary success of the program, or the need for additional programs. The trigger might also be a news media exposé or legal action. For those seeking to initiate the planning process, getting the attention of influential individuals requires having access to them, packaging the message about the need for planning in ways that are immediately attractive, and demonstrating the salience of the issue. Thus, to get a specific health problem or issue "on the table," activists can use salient events to get the attention of influential individuals. The importance of having a salient trigger event serves as a reminder that key individuals mentally sort through and choose among competing attention getters. The trigger event or situation leads to the collection of data about the health problem, the characteristics of the people affected, and their perceptions of the health problem. These data, along with additional data on available resources, constitute a community needs and assets assessment.

Based on the data from the needs assessment, program development begins. Problems and their solutions are prioritized. The planning phase includes developing the program theory, which explicates the connection between what is done and the intended effects of the program. Another component of the planning phase is assessment of organizational and infrastructure resources for implementing the program, such as garnering resources to implement and sustain the program. Yet another major component of program planning is setting goals and objectives that are derived from the program theory.

After the resources necessary to implement the program have been secured and the activities that make up the program intervention have been explicated, the program can be implemented. The logistics of implementation include marketing the program to the target audience, training and managing program personnel, and delivering or providing the intervention as planned. During implementation of the program, it is critical to evaluate the extent to which the program is provided as planned; this is the process evaluation. The data and findings from the process evaluation are key feedback items in the planning and evaluation cycle, and they can and ought to lead to revisions in the program delivery.

Ultimately, the health program ought to have an effect on the health of the individual program participants or on the recipients of the program intervention if it is provided to a community or population. The evaluation can be an outcome evaluation of immediate and closely causally linked programmatic effects or an impact evaluation of more temporally and causally distal programmatic effects. Both types of evaluations provide information to the health program planners for use in subsequent program planning. Evaluation of the effects of the program provides data and information that can be used to alter the program intervention. These findings can also be used in subsequent assessments of the need for future or other health programs.

The model used throughout this text as a framework (Figure 1-1) generically represents the steps and processes. It is one of many possible ways to characterize the planning and evaluation cycle. As a generic representation, the planning and evaluation cycle model used in this text includes the essential elements, but it cannot provide detailed instructions on the "whens" and "hows" because each situation will be slightly different.

Using Evaluation Results as the Cyclical Link

Before embarking on either a process or an effect evaluation, it is important to consider who will use the results because, in being used, evaluation results perpetuate the program planning and evaluation cycle. The usefulness of an evaluation depends on the extent to which the questions that need to be answered are, in fact, answered. Naturally, the different stakeholder groups that are likely to use evaluation findings will be concerned with different questions.

Funding organizations, whether federal agencies or private foundations, constitute one stakeholder group. Funders may use process evaluations for program accountability and effect evaluations for determining the success of broad initiatives and the effectiveness of individual programs. Project directors and managers, another stakeholder group, use both process and effect evaluation findings as a basis for seeking further funding as well as for making improvements to the health program. Program staff members, another stakeholder group, are likely to use both the process and the effect evaluation as validation of their efforts and as justification for their feelings about their success with program participants or recipients. Scholars and health professionals constitute another stakeholder group, one that accesses the findings of effect evaluations through the professional literature. Members of this group are likely to use effect evaluations as the basis for generating new theories about what is effective in addressing a particular health problem and why it is effective.

Policy makers are yet another stakeholder group; they use both published literature and final program reports regarding process and effect evaluation findings when formulating health policy and making decisions about program resource allocation. Community action groups, community members, and program participants and recipients form another group of stakeholders. This stakeholder group is most likely to advocate for a community health assessment and to use process evaluation results as a basis for seeking additional resources or for holding the program accountable.

Program Life Cycle

Feedback loops contribute to the overall development and evolution of a health program, giving it a life cycle. In the early stages of an idea for a health program, the program may begin as a pilot. At this stage, program development occurs and involves use of literature and needs assessment data (Scheirer, 2012). The program may not rely on any existing format or theory, so simple trial and error is used to determine whether it is feasible as a program. It is likely to be small and somewhat experimental because a similar type of program has not been developed or previously attempted. As the program matures, it may evolve into a model program. A model program has interventions that are formalized, or explicit, with protocols that standardize the intervention, and the program is delivered under conditions that are controlled by the program staff members and developers. Model programs can be difficult to sustain over time because of the need to follow the protocols. Evaluations of programs at this stage focus on identifying and documenting the effects and efficacy of the program (Scheirer, 2014). Successful model programs become institutionalized within the organization as an ongoing part of the services provided. Successful programs can also be institutionalized across a number of organizations in a community and gain wide acceptance as standard practice, with the establishment of an expectation that a "good" agency will provide the program. At this stage, the health program has become institutionalized within health services. Evaluations then tend to focus on quality and performance improvements, as well as sustainability. The last life cycle stage is the dissemination and replication of programs shown to be effective.

Regardless of the stage in a program's life cycle, the major planning and evaluation stages of community assessment and evaluation are carried out. The precise nature and purpose of each activity vary slightly as the program matures. Being aware of the stage of the program being implemented can help tailor the community assessment and evaluation.

This life cycle of a health program is reflected in the evolution of hospice care. Hospice (care for the dying in a home and family setting) began in London in 1967 as a grassroots service that entailed trial and error about how to manage care for dying patients (Kaur, 2000). As its advocates saw the need for reimbursement for the service, they began systematically to control what was done and who was "admitted" to hospice. Once evaluations of these hospice programs began to yield findings that demonstrated their positive benefits, they became the model for more widespread programs that were implemented in local agencies or by new hospice organizations. As hospice programs became accepted as a standard of care for the dying, they became standard, institutionalized services within organizations. Today the availability and use of hospice services for terminally ill patients are accepted as standard practice, and most larger healthcare organizations or systems have established a hospice program. The evolution of hospice is but one example of how an idea for a "better" or "needed" program can gradually become widely available as routine care.

▸ The Fuzzy Aspects of Planning

We like to think of planning as a rational, linear process, with few ambiguities and only the rare dispute. Unfortunately, this is not the reality of health program planning. Many paradoxes exist in planning, as well as implicit assumptions, ambiguities, and the potential for conflict. In addition, it is important to be familiar with the key ethical principles that underlie the decision making that is part of planning.

Paradoxes

Several paradoxes pervade health planning (Porter, 2011), and they may or may not be resolvable. Those involved can hold assumptions about planning that complicate the act of planning, whether for health systems or programs. Being aware of the paradoxes and assumptions can, however, help program planners understand possible sources of frustration.


and communitywide mandates, does not take into account cultural trends or preferences.

Another paradox is that those in need ideally, but rarely, trigger the planning of health programs; rather, health professionals initiate the process. This paradox addresses the issue of who knows best and who has the best ideas for how to resolve the “real” problem. The perspective held by health professionals often does not reflect broader, more common health social values (Reinke & Hall, 1988), including the values possessed by those individuals with the “problem.” Because persons in need of health programs are most likely to know what will work for them, community and stakeholder participation becomes not just crucial but, in many instances, is actually mandated by funding agencies. This paradox also calls into question the role of health professionals in developing health programs. Their normative perspective and scientific knowledge need to be weighed against individuals’ choices that may have caused the health problem.

A corollary to the paradox dealing with the sources of the best ideas is the notion that poli- ticians tend to prefer immediate and permanent cures, whereas health planners prefer long-term, strategic, and less visible interventions (Reinke & Hall, 1988). Generally, people want to be cured of existing problems rather than to think probabi- listically about preventing problems that may or may not occur in the future. As a consequence, the prevention and long-term solutions that seem obvious to public health practitioners can conflict with the solutions identified by those with the “problem.”

One reason that the best solutions might come from those with the problem is that health professionals can be perceived as blaming those with the health problem for their problem. Blum (1981), for example, identified the practice of “blaming the victim” as a threat to effective planning. When a woman who experiences domestic violence is said to be “asking for it,” the victim is being blamed. During the planning process, blaming the victim can be implicitly and rather subtly manifested in group settings

One paradox is that planning is shaped by the same forces that created the problems that planning is supposed to correct. Put simply, the healthcare, sociopolitical, and cultural factors that contributed to the health problem or condition are very likely to be same factors that affect the health planning process. The interwoven relationship of health and other aspects of life affects health planning. For example, housing, employment, and social justice affect many health conditions that stimulate planning. This paradox implies that health planning itself is also affected by housing, employment, and social justice.

Another paradox is that the “good” of indi- viduals and society experiencing the prosperity associated with health and well-being is “bad” to the extent that this prosperity also produces ill health. Prosperity in our modern world has its own associated health risks, such as higher cholesterol levels, increased stress, increased risk of cardiovascular disease, and increased levels of environmental pollutants. Also, as one group prospers, other groups often become dispropor- tionately worse off. So, to the extent that health program planning promotes the prosperity of a society or a group of individuals, health issues for others will arise that require health program planning.

A third paradox is that what may be eas- ier and more effective may be less acceptable. A good example of this paradox stems from decisions about active and passive protective interventions. Active protection and passive protection are both approaches to risk reduc- tion and health promotion. Active protection requires that individuals actively participate in reducing their risks—for example, through diet changes or the use of motorcycle helmets. Passive protection occurs when individuals are protected by virtue of some factor other than their behavior—for example, water fluoridation and mandates for smoke-free workplaces. For many health programs, passive protection in the form of health policy or health regulations may be more effective and efficient. However, ethical and political issues can arise when the emphasis on passive protection, through laws

 

 

16 Chapter 1 Context of Health Program Development and Evaluation

health problem. The assumption of possibilities further presumes that the resources available, whether human or otherwise, are sufficient for the task and are suitable to address the health problem. The assumption of adequate capacity and knowledge is actually tested through the process of planning.

A companion assumption is that planning leads to the allocation of resources needed to address the health problem. This assumption is challenged by the reality that four groups of stakeholders have interests in the decision making regarding health resources (Sloan & Conover, 1996) and each group exists in all pro- gram planning. Those with the health problem and who are members of the target audience for the health program are one group. Another group of stakeholders is health payers, such as insurance companies and local, federal, and philanthropic funding agencies. The third group is individual healthcare providers and healthcare organizations and networks. Last, the general public is a stakeholder group because it is affected by how resources are allocated for health programs. This list of stakeholder groups highlights the variety of motives each group has for being involved in health program planning, such as personal gain, visibility for an organization, or acquisition of resources associated with the program.

Another assumption about those involved is that they share similar views on how to plan health programs. During the planning process, their points of view and cultural perspectives will likely come into contrast. Hoch (1994) suggested that planners need to know what is relevant and important for the problem at hand. Planners can believe in one set of community purposes and values yet still recognize the validity and merit of competing purposes. He argues that effective planning requires tolerance, freedom, and fairness and that technical and political values are two bases from which to give planning advice. In other words, stakeholders involved in the planning process need to be guided into appreciating and perhaps applying a variety of perspectives about planning.

through interpretation of data about needs, thereby affecting decisions related to those needs. Having the attitude that “the victim is to blame” can also create conflict and tension among those involved in the planning process, especially if the “victims” are included as stakeholders. The activities for which the victim is being blamed need to be reframed in terms of the causes of those activities or behaviors.

Yet another paradox is the fact that planning is intended to be successful; no one plans to fail. Because of the bias throughout the program planning cycle in favor of succeeding, unantic- ipated consequences may not be investigated or recognized. The unanticipated consequences of one action can lead to the need for other health decisions that were in themselves unintended (Patrick & Erickson, 1993). To overcome this paradox, brainstorming and thinking creatively at key points in the planning process ought to be fostered and appreciated.

A final paradox of planning, not included on Reinke and Hall’s (1988) list, is that most planning is for making changes, not for creating stability. Yet once a change has been achieved, whether in an individual’s health status or a community’s rates of health problems, the achievement needs to be maintained. Many health programs and health improvement initiatives are designed to be accomplished within a limited time frame, with little or no attention to what happens af- ter the program is completed. To address this paradox requires that planning anticipate the conclusion of a health program and include a plan for sustaining the gains achieved.

Assumptions Assumptions also influence the effectiveness of planning. The first and primary assumption underlying all planning processes is that a solu- tion, remedy, or appropriate intervention can be identified or developed and provided. Without this assumption, planning would be pointless. It is fundamentally an optimistic assumption about the capacity of the planners, the stakehold- ers, and the state of the science to address the

 

 

The Fuzzy Aspects of Planning 17

Uncertainty is the unknown likelihood of a possible outcome. Rice, O’Connor, and Pierantozzi (2008) have identified four types of uncertainty: types and amount of resources, technological, market receptivity to the product, and organizational. Each of these uncertainties is present in planning health programs. Ambiguity is doubt about a course of action stemming from awareness that known and unknown factors exist that can decrease the possibility of certainty. In this sense, ambiguity results in uncertainty. Both uncertainty and ambiguity pervade the planning process because it is impossible to know and estimate the effect of all relevant factors—from all possible causes of the health problem, to all possible health effects from program interventions, to all possible acts and intentions of individuals. A rational approach to planning presumes that all relevant factors can be completely accounted for by anticipating the effect of a program, but our experiences as humans tell us otherwise.

Ambiguity is the characteristic of not having a clear or single meaning. Change, or the possibility of change, is a possible source of ambiguity. When ambiguity is ignored, the resulting differences in interpretation can lead to confusion and conflict among stakeholders and planners, among planners and those with the health problem, and among those with var- ious health problems vying for resources. The conflict, whether subtle and friendly or openly hostile, detracts from the planning process by requiring time and personnel resources to address and resolve the conflict. Nonetheless, openly and constructively addressing the am- biguity and any associated conflict can lead to innovations in the program.

Risk is the perceived possibility or uncertain probability of an adverse outcome in a given situation. Health planners need to be aware of the community’s perception and interpretation of probabilities as they relate to health and illness. Risk is not just about taking chances (e.g., bungee jumping or having unprotected sex) but is also about uncertainty and ambiguity (as is the case with estimates of cure rates and projections about future health conditions).

Each stakeholder group assumes that there are limited resources to be allocated for addressing the health problem and is receptive or respon- sive to a different set of strategies for allocating health resources. The resulting conflicts among the stakeholders for the limited resources apply whether they are allocating resources across the healthcare system or among programs for specific health problems. Limited  resources, whether real or not, raise ethical questions of what to do when possible gains from needed health programs or policies are likely to be small, especially when the health program addresses serious health problems.

It is interesting that, the assumption of limited resources parallels the paradox that planning occurs around what is limited rather than what is abundant. Rarely is there a discussion of the abundant or unlimited resources available for health planning. Particularly in the United States, we have an amazing abundance of volunteer hours and interest and of advocacy groups and energy, and recently retired equipment that may be appropriate in some situations. Such resources, while not glamorous or constituting a substantial entry on a balance sheet, deserve to be acknowledged in the planning process.

Another assumption about the planning process is that it occurs in an orderly fashion and that a rational approach is best. To understand the implications of this assumption, one must first acknowledge that four key elements are inherent in planning: uncertainty, ambiguity, risk, and control. The presence of each of these elements contradicts the assumption of a rational approach, and each generates its own paradoxes.

Uncertainty, Ambiguity, Risk, and Control

Despite the orderly approach implied by use of the term planning, this process is affected by the limits of both scientific rationality and the usefulness of data to cope with the uncertainties, ambiguities, and risks being addressed by the planning process (see TABLE 1-3).

18 Chapter 1 Context of Health Program Development and Evaluation

Risk is pervasive and inherent throughout the planning process in terms of deciding whom to involve and how, which planning approach to use, which intervention to use, and in estimating which health problem deserves attention. The importance of understanding risk as an element both of the program planning process and of the target audience provides planners with a basis from which to be flexible and speculative.

Control, as in being in charge of or managing, is a natural reaction to the presence of ambiguity, conflict, and risk. It can take the form of directing attention and allocating resources or of exerting dominance over others. Control

TABLE 1-3 Fuzzy Aspects Throughout the Planning and Evaluation Cycle

Uncertainty
  Community assessment: Unknown likelihood of finding key health determinants
  Planning: Unknown likelihood of selecting an effective intervention; unknown likelihood of the intervention being effective
  Implementation: Unknown likelihood of the intervention being provided as designed and planned
  Effect evaluation: Unknown likelihood of the intervention being effective

Ambiguity
  Community assessment: Unclear about who is being assessed or why
  Planning: Unclear about the process, who is leading the planning process, or what it is intended to accomplish
  Implementation: Unclear about the boundaries of the program, who ought to participate, or who ought to deliver the program
  Effect evaluation: Unclear about the meaning of the evaluation results

Risk
  Community assessment: Unknown possibility of the assessment causing harm
  Planning: Unknown possibility of planning touching on politically sensitive issues
  Implementation: Unknown possibility of the intervention having an adverse effect on participants
  Effect evaluation: Unknown possibility of an adverse effect from the evaluation design or from misinterpretation of the findings

Control
  Community assessment: Directing the process of gathering and interpreting data about the health problem
  Planning: Directing the decisions about the program
  Implementation: Directing the manner in which the program is provided
  Effect evaluation: Directing the process of data collection, analysis, and interpretation


the overall program theory developed during the planning stage. The process theory delineates the logistical activities, resources, and interventions needed to achieve the health change in program participants or recipients. Information from the process evaluation is used to plan, revise, or improve the program.

The third type of evaluation seeks to determine the effect of the program—in other words, to demonstrate or identify the program’s effect on those who participated in the program. Effect evaluations answer a key question: Did the program make a difference? The effect theory component of the program theory is used as the basis for designing this evaluation. Evaluators seek to use the most rigorous and robust designs, methods, and statistics possible and feasible when conducting an effect evaluation. Findings from effect evaluations are used to revise the program and may be used in subsequent initial program planning activities. Effect evaluations may be referred to as outcome or impact evaluations, terms which seem to be used interchangeably in the literature. For clarity, outcome evaluations focus on the more immediate effects of the program, whereas impact evaluations may have a more long-term focus. Program planners and evaluators must be vigilant with regard to how they and others are using terms and should clarify meanings and address misconceptions or misunderstandings.

A fourth type of evaluation focuses on efficiency and the costs associated with the program. Cost evaluations encompass a variety of more specific cost-related evaluations—namely, cost-effectiveness evaluations, cost–benefit evaluations, and cost–utility evaluations. For the most part, cost evaluations are done by researchers because cost–benefit and cost–utility evaluations, in particular, require expertise in economics. Nonetheless, small-scale and simplified cost-effectiveness evaluations can be done if good cost accounting has been maintained by the program and a more sophisticated outcome or impact evaluation has been conducted. The similarities and differences among these three types of cost studies are reviewed in greater detail

remains a key element of management. In other words, addressing the ambiguity, uncertainty, and risk that might have been the trigger for the planning process requires less—not more—control. Those who preside over and influence the planning process are often thought of as having control over solutions to the health problem or condition. They do not. Instead, effective guidance of the planning process limits the amount of control exerted by any one stakeholder and addresses the anxiety that often accompanies the lack of control.

▸ Introduction to the Types of Evaluation

Several major types of activities are classified as evaluations. Each type of activity requires a specific focus, purpose, and set of skills. The types of evaluations are introduced here as an overview of the field of planning and evaluation.

Community needs assessment (also known as community health assessment) is a type of evaluation that is performed to collect data about the health problems of a particular group. The data collected for this purpose are then used to tailor the health program to the needs and distinctive characteristics of that group. A community needs assessment is a major component of program planning because it is done at an early stage in the program planning and evaluation cycle. In addition, the regular completion of community assessments may be required. For example, many states do 5-year planning of programs based on state needs assessments.

Another type of evaluation begins at the same time that the program starts. Process evaluations focus on the degree to which the program has been implemented as planned and on the quality of the program implementation. Process evaluations are known by a variety of terms, such as monitoring evaluations, depending on their focus and characteristics. The underlying framework for designing a process evaluation comes from the process theory component of


are usually contrasted with formative evaluations. The term formative evaluation is used to refer to program assessments that are performed early in the implementation of the program and used to make changes to the program. Formative evaluations might include elements of process evaluation and preliminary effect evaluations.

Mandated and Voluntary Evaluations

Evaluations are not spontaneous events. Rather, they are either mandated or voluntary. A mandate to evaluate a program is always linked in some way to the funding agencies, whether a governmental body or a foundation. If an evaluation is mandated, then the contract for receiving the program funding will include language specifying the parameters and time line for the mandated evaluation. The mandate for an evaluation may specify whether the evaluation will be done by project staff members or external evaluators, or both. For example, the State Children's Health Insurance Program (SCHIP), created in 1997, is a federally funded and mandated program to expand insurance coverage to children just above the federal poverty level. Congress has the authority to mandate evaluations of federal programs and did just that with the SCHIP. Mandated evaluations of SCHIP include an overall evaluation study by Wooldridge and associates from the Urban Institute (2003), and an evaluation specifically focused on outcomes for children with special healthcare needs (Zickafoose, Smith, & Dye, 2015).

Other evaluations may be linked to accreditation that is required for reimbursement of services provided, making them de facto mandated evaluations. For example, to receive accreditation from the Joint Commission, a health services organization must collect data over time on patient outcomes. These data are then used to develop ongoing quality improvement efforts. A similar process exists for mental health agencies. The Commission on Accreditation of Rehabilitation Facilities (CARF) requires that

in the text so that program planners can be, at minimum, savvy consumers of published reports of cost evaluations. Because cost evaluations are performed late in the planning and evaluation cycle, their results are not likely to be available in time to make program improvements or revisions. Instead, such evaluations are generally used during subsequent planning stages to gather information for prioritizing program options.
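To make the arithmetic behind a small-scale cost-effectiveness evaluation concrete, the sketch below computes an incremental cost-effectiveness ratio (ICER) for two program options. The function name and all figures are invented for illustration and do not come from the text.

```python
def icer(cost_a: float, effect_a: float, cost_b: float, effect_b: float) -> float:
    """Incremental cost-effectiveness ratio of option B relative to option A.

    Costs are in dollars; effects are counts of a shared outcome unit,
    such as participants who successfully quit smoking.
    """
    delta_cost = cost_b - cost_a
    delta_effect = effect_b - effect_a
    if delta_effect == 0:
        raise ValueError("Options have identical effects; the ICER is undefined.")
    return delta_cost / delta_effect

# Hypothetical programs: B costs $50,000 more than A and produces 25 more
# successful outcomes, so each additional outcome costs $2,000.
print(icer(100_000, 40, 150_000, 65))  # → 2000.0
```

In practice, which costs are counted (program budget only, or societal costs) and how effects are measured drive the result, which is why the text recommends good cost accounting and a completed outcome or impact evaluation first.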

Comprehensive evaluations, the fifth type of evaluation, involve analyzing needs assessment data, process evaluation data, effect evaluation data, and cost evaluation data as one set of data. Given the resources needed to integrate analysis of various types of data to draw conclusions about the effectiveness and efficiency of the program, comprehensive evaluations are relatively uncommon.

A sixth type of evaluation is a meta-evaluation. A meta-evaluation is done by combining the findings from previous outcome evaluations of various programs for the same health problem. The purpose of a meta-evaluation is to gain insights into which of the various programmatic approaches has had the most effect and to determine the maximum effect that a particular programmatic approach has had on the health problem. This type of evaluation relies on the availability of existing information about evaluations and on the use of a specific set of methodological and statistical procedures. For these reasons, meta-evaluations are less likely to be done by program personnel; instead, they are generally carried out by evaluation researchers. Meta-evaluations that are published are extremely useful in program planning because they indicate which programmatic interventions are more likely to succeed in having an effect on the participants. Published meta-evaluations can also be valuable in influencing health policy and health funding decisions.
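One common statistical procedure behind the pooling that a meta-evaluation relies on is fixed-effect inverse-variance weighting of study effect sizes. The sketch below illustrates that procedure only; the study values are invented, and real meta-analyses involve additional steps (heterogeneity tests, random-effects models) not shown here.

```python
def pooled_effect(effects, variances):
    """Fixed-effect meta-analytic estimate across studies.

    Each study's effect size is weighted by the inverse of its variance,
    so more precise studies contribute more to the pooled estimate.
    Returns the pooled effect and its variance.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, 1.0 / sum(weights)

# Three hypothetical evaluations of the same programmatic approach:
effects = [0.30, 0.10, 0.25]     # standardized effect sizes
variances = [0.01, 0.02, 0.04]   # sampling variances of those effects
estimate, variance = pooled_effect(effects, variances)
print(f"pooled effect {estimate:.3f}, variance {variance:.4f}")
```

Note how the middle study, with the smallest effect but only moderate precision, pulls the pooled estimate below the simple average of the three effects.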

Summative evaluations, in the strictest sense, are done at the conclusion of a program to provide a conclusive statement regarding program effects. Unfortunately, the term summative evaluation is sometimes used to refer to either an outcome or impact evaluation, adding even more confusion to the evaluation terminology and vernacular language. Summative evaluations


as techniques for designing and conducting both program process and effect evaluations have improved, and the expectation is that even mandated evaluations will be useful in some way. Nonetheless, it remains critical to consider how to conduct evaluations legitimately, rigorously, inexpensively, and fairly. In addition, if the AEA standards of utility, feasibility, propriety, and accuracy cannot be met, it is not wise to conduct an evaluation (Patton, 2008).

Interests and the degree of influence held by stakeholders can change. Such changes affect not only how the evaluation is conceptualized but also whether evaluation findings are used. In addition, the priorities and responsibilities of the organizations and agencies providing the program can change during the course of delivering the program, which can then lead to changes in the program implementation that have not been taken into account by the evaluation. For example, if withdrawal of resources leads to a shortened or streamlined evaluation, subsequent findings may indicate a failure of the program intervention. However, it will remain unclear whether the apparently ineffective intervention was due to the design of the program or the design of the evaluation. In addition, unanticipated problems in delivering the program interventions and the evaluation will always exist. Even rigorously designed evaluations face challenges in the real world stemming from staff turnover, potential participants’ noninvolvement in the program, bad weather, or any of a host of other factors that might hamper achieving the original evaluation design. Stakeholders will need to understand that the evaluator attempted to address challenges as they arose if they are to have confidence in the evaluation findings.

▸ The Public Health Pyramid

Pyramids tend to be easy to understand and work well to capture tiered concepts. For these reasons, pyramids have been used to depict the

provider organizations conduct a self-evaluation as an early step in the accreditation process. These accreditation-related evaluations apply predominantly to direct care providers rather than to specific programs.

Completely voluntary evaluations are initiated, planned, and completed by the project staff members in an effort to make improvements. However, given the relatively low reward from, and the cost associated with, doing an evaluation when it is not required, these evaluations are likely to be small with low scientific rigor. Programs that engage voluntarily in evaluations may have good intentions, but they often lack the skills and knowledge required to conduct an appropriate evaluation.

When Not to Evaluate

Situations and circumstances that are not amenable to conducting an evaluation do exist, despite a request or the requirement for having an evaluation. Specifically, it is not advisable to attempt an evaluation under the following four circumstances: when there are no questions about the program, when the program has no clear direction, when stakeholders cannot agree on the program objectives, and when there is not enough money to conduct a sound evaluation (Patton, 2008). In addition to these situations, Weiss (1972) recognized that sometimes evaluations are requested and conducted for less than legitimate purposes, namely, to postpone program or policy decisions, thereby avoiding the responsibility of making the program or policy decision; to make a program look good as a public relations effort; or to fulfill program grant requirements. As these lists suggest, those engaged in program planning and evaluation need to be purposeful in what is done and should be aware that external forces can influence the planning and evaluation processes.

Since Weiss made her observation in 1972, funders have begun to require program process and effect evaluations, and conducting these evaluations to meet that requirement is considered quite legitimate. This change has occurred


mental health drop-in centers, hospice programs, financial assistance programs that provide transportation to medical care, community-based case management for patients with acquired immune deficiency syndrome (AIDS), low-income housing, nutrition education programs provided by schools, and workplace child care centers. As this list of programs demonstrates, the services at this level may directly or indirectly contribute to the health of individuals, families, and communities and are provided to aggregates. Enabling services can also be thought of as addressing some of the consequences of social determinants of health.

The next, more encompassing level of the public health pyramid is population-based services. At the population level of the pyramid, services are delivered to an entire population, such as all persons residing in a city, state, or country. Examples of population services include immunization programs for all children in a county, newborn screening for all infants born in a state, food safety inspections carried out under the auspices of state regulations, workplace safety programs, nutrition labeling on food, and the Medicaid program for pregnant women whose incomes fall below the federal poverty guidelines. As this list reflects, the distinction between an aggregate and a population can be blurry. Programs at this level typically are intended to reach an entire population, sometimes without the conscious involvement of individuals. In this sense, individuals receive a population-based health program, such as water fluoridation, rather than participating in the program, as they would in a smoking-cessation class. Interventions and programs aimed at changing the socioeconomic context within which populations live would be included at this population level of the pyramid. Such programs are directed at changing one or more social determinants of health. Population-level programs contribute to the health of individuals and, cumulatively, to the health status of the population.

Supporting the pyramid at its base is the infrastructure of the healthcare system and the public health system. The health services at the other pyramid levels would not be possible

tiered nature of primary healthcare, secondary healthcare, and tertiary healthcare services (U.S. Public Health Service, 1994), the inverse relationship of effort needed and health impact of different interventions (Frieden, 2010), and nutrition recommendations (Gil, Ruiz-Lopez, Fernandez-Gonzalez, & de Victoria, 2014).

The public health pyramid is divided into four sections (FIGURE 1-2). The top, or the first, section of the pyramid contains direct healthcare services, such as medical care, psychological counseling, hospital care, and pharmacy services. At this level of the pyramid, programs are delivered to individuals, whether patients, clients, or even students. Generally, programs at the direct services level have a direct, and often relatively immediate, effect on individual participants in the health program. Direct services of these types appear at the tip of the pyramid to reflect that, overall, the smallest proportion of a population receives them. These interventions, according to the Health Impact Pyramid (Frieden, 2010), require considerable effort, with minimal population effects.

At the second level of the pyramid are enabling services, which are those health and social services that support or enhance the health of aggregates. Aggregates are groups of individuals who share a defining characteristic, such as mental illness or a terminal disease; the term distinguishes such groups from both individuals and whole populations. Examples of enabling services include

FIGURE 1-2 The Public Health Pyramid

[Pyramid with four tiers, from the tip to the base: Direct healthcare services; Enabling services; Population-based services; Infrastructure services]


of the program with meeting the needs of the broadest number of people with a given need. Reaching the same number of persons with a direct services program as with a population services program poses additional expense and logistic challenges.

The pyramid also serves as a reminder that stakeholder alignments and allegiances may be specific to a level of the pyramid. For example, a school health program (an enabling-level program) has a different set of constituents and concerned stakeholders than a highway safety program (a population-level program). The savvy program planner considers not only the potential program participants at each level of the pyramid but also the stakeholders who are likely to make themselves known during the planning process.

The public health pyramid has particular relevance for public health agencies concerned with addressing the three core functions of public health (Institute of Medicine, 1988): assessment, policy development, and assurance. These core functions are evident, in varying forms, at each level of the pyramid. Similarly, the pyramid can be applied to the strategic plans of organizations in the private healthcare sector. For optimal health program planning, each health program being developed or implemented ought to be considered in terms of its relationship to services, programs, and health needs at the other levels of the pyramid. For all these reasons, the public health pyramid is used throughout this text as a framework for summarizing specific issues and applications of chapter content at each level of the pyramid and for identifying and discussing potential or real issues related to the topic of the chapter.

The Public Health Pyramid as an Ecological Model

Individual behavior and health are now understood to be influenced by the social and physical environment of individuals. This recognition is reflected in the growing use of the ecological approach to health services and public health

unless there were skilled, knowledgeable health professionals; laws and regulations pertinent to the health of the people; quality assurance and improvement programs; leadership and managerial oversight; health planning and program evaluation; information systems; and technological resources. The planning and evaluation of health programs at the direct, enabling, and population services levels is itself a component of the infrastructure; these are infrastructure activities. In addition, planning programs to address problems of the infrastructure, as well as evaluating the infrastructure itself, is needed to keep the health and public health system infrastructure strong, stable, and supportive of the myriad of health programs.

Use of the Public Health Pyramid in Program Planning and Evaluation

Health programs exist across the pyramid levels, and evaluations of these programs are needed. However, at each level of the pyramid, certain issues unique to that level must be addressed in developing health programs. Accordingly, the types of health professionals and the types of expertise needed vary by pyramid level, reinforcing the need to match program, participants, and providers appropriately. Similarly, each level of the pyramid is characterized by unique challenges for evaluating programs. For this reason, the public health pyramid, as a framework, helps illuminate those differences, issues, and challenges, and reinforces that health programs are needed across the pyramid levels if the Healthy People 2020 goals and objectives are to be achieved.

In a more general sense, the public health pyramid provides reminders that various aggregates of potential audiences exist for any health problem and program and that health programs are needed across the pyramid. Depending on the health discipline and the environment in which the planning is being done, direct service programs may be the natural or only inclination. The public health pyramid, however, provides a framework for balancing the level


Because it distinguishes and recognizes the importance of enabling and population services, the public health pyramid can be integrated with an ecological view of health and health problems. If one were to look down on the pyramid from above, the levels would appear as concentric circles (FIGURE 1-3)—direct services for individuals nested within enabling services for families, aggregates, and neighborhoods, which are in turn nested within population services for all residents of cities, states, or countries. This is similar to individuals being nested within the enabling environment of their family, workplace setting, or neighborhood, all of which are nested within

programs. The ecological approach, which stems from systems theory applied to individuals and families (Bronfenbrenner, 1970, 1989), postulates that individuals can be influenced by factors in their immediate social and physical environment. This perspective has been expanded into the social determinants perspective in public health, which has wide acceptance (Frieden, 2010). The individual is viewed as a member of an intimate social network, usually a family, which is a member of a larger social network, such as a neighborhood or community. The way in which individuals are nested within these social networks has consequences for the health of the individual.

FIGURE 1-3 The Pyramid as an Ecological Model

[The pyramid viewed from above appears as concentric circles, from the innermost to the outermost: Individuals; Families, aggregates, neighborhoods, communities; Populations; Public health and private health infrastructure (science, theory, practice, programs, planning, structure, policies, resources, evaluation)]


or patients—that is, on developing programs that are provided to those individuals and on assessing the extent to which those programs make a difference in the health of the individuals who receive the health program. Health is defined in individual terms, and program effects are measured as individual changes. From this level of the public health pyramid, community is most likely viewed as the context affecting individual health.

At the enabling services level, health program planning and evaluation focus on the needs of aggregates of individuals and on the services that the aggregate needs to maintain health or make health improvements. Enabling services are often social, educational, or human services that have an indirect effect on health, thus warranting their inclusion in planning health programs. Health continues to be defined and measured as an individual characteristic to the extent that enabling services are provided to individual members of the aggregate. However, program planning and evaluation focus not on individuals but rather on the aggregate as a unit. At this level of the pyramid, community can be either the aggregate that is targeted for a health program or the context in which the aggregate functions and lives. How community is viewed depends on the health problem being addressed.

At the population-based services level, health program planning and evaluation focus on the needs of all members of a population. At this level of the pyramid, health programs are, at a minimum, population driven, meaning that data collected in regard to the health of the population drive the decisions about the health program. This approach results in programs that are population focused and, ideally (but not necessarily), population based. It is worth noting that population-focused programs tend to have a health promotion or health maintenance focus rather than a focus on treatment of illnesses. At a population level, health is defined in terms of population statistics, such as mortality and morbidity rates. In this regard, the Healthy People 2020 objectives (TABLE 1-4) are predominantly at

the population environment of factors such as social norms and economic and political environments. The infrastructure of the healthcare system and public health system is the foundation and supporting environment for promoting health and preventing illnesses and diseases.

The end of the chapter presents a summary of challenges or issues related to applying the chapter content to each level of the pyramid. This feature reinforces the message that each level of the pyramid has value and importance to health program planning and evaluation. In addition, certain unique challenges are specific to each level of the pyramid. The chapter summary by levels offers an opportunity to acknowledge and address the issues related to the levels.

▸ The Town of Layetteville in Bowe County

As an aid to understanding and assimilating the content covered, examples from the literature are provided throughout this book. In addition, chapters include application of content to a hypothetical town (Layetteville) in an imaginary county (Bowe County). Based on a fictional community needs assessment, subsequent prioritization leads to the identification of five health problems as foci for health program planning. These health problems are used throughout the text as opportunities to demonstrate application of the chapter content. Also, some discussion questions and activities use Layetteville and Bowe County as opportunities for the reader to practice applying the chapter content. While the town and county are fictitious, the health problems around which the program planning and evaluation occur are very real and relevant.

▸ Across the Pyramid

At the direct services level, health program planning and evaluation focus on individual clients


health program identify which Healthy People 2020 objectives are being addressed. To the extent that health planners and evaluators are familiar with these objectives, they will be better able to design appropriate programs and then to argue in favor of the relevance of each of those programs. At the infrastructure level, health can be defined in terms of the individual workers in the healthcare sector (an aggregate). More to the point, because program planning and evaluation are infrastructure activities, it is actually at the infrastructure level that the decisions are made on the definition of health to be used in the program. Similarly, the way that community is viewed is determined at the infrastructure level.

the population level of the public health pyramid. Community is more likely to be the population targeted by the health program.
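Because health at the population level is expressed in statistics such as mortality and morbidity rates rather than individual outcomes, a minimal sketch of the underlying arithmetic may help. The county figures below are invented for illustration.

```python
def rate_per_100k(events: int, population: int) -> float:
    """Crude rate per 100,000 population (e.g., deaths or new cases per year)."""
    # Multiply before dividing so whole-number results stay exact.
    return events * 100_000 / population

# Hypothetical county: 42 deaths among 120,000 residents in one year.
print(rate_per_100k(42, 120_000))  # → 35.0
```

Rates like this make populations of different sizes comparable, which is why population-level program effects are reported per 1,000 or per 100,000 rather than as raw counts.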

At the infrastructure level, health program planning and evaluation are infrastructure activities of both the public health system and the healthcare system. Infrastructure includes organizational management, acquisition of resources, and development of health policy. A significant document reflecting health policy is Healthy People 2020, which outlines the goals and objectives for the health of the people of the United States. These national objectives are considered when setting priorities and are used by many federal and nongovernmental funding agencies, which often require that a

TABLE 1-4 A Summary of the Healthy People 2020 Priority Areas
