
Evaluation and Organizational Learning: Past, Present, and Future

ROSALIE T. TORRES AND HALLIE PRESKILL

INTRODUCTION

Our vision for the future of evaluation is grounded in a desire for evaluation to play an expanded and more productive role within organizations. This future emphasizes a learning approach to evaluation that is contextually sensitive, ongoing, and supports dialog, reflection, and decision making at department and programmatic as well as organization-wide levels. Although we believe elements of this approach are applicable to many kinds of evaluations, we recognize that it may be difficult or less appropriate for large-scale, multisite, policy-oriented evaluation studies. In what follows we take a historical look at how interest in evaluation use and organizational learning has developed. Then we describe challenges to implementing a learning approach to evaluation, and how changes both within and outside the evaluation profession can help organizations use evaluation more effectively.

HISTORICAL PERSPECTIVE ON AN ORGANIZATIONAL LEARNING APPROACH TO EVALUATION

A long-term concern for evaluators and clients has been the ways and extent to which evaluation findings are used (Alkin, 1980; Daillak, 1982; Johnson, 1998; King, 1988; Leviton & Hughes, 1981; Patton, 1978, 1986, 1997; Preskill & Caracelli, 1997; Shulha & Cousins, 1997). Early evaluations, which were largely externally funded studies of large-scale educational programs, frequently produced findings that went unused. Unease about this situation led to research on various types of evaluation use (i.e., instrumental, conceptual, political) and the factors influencing them (e.g., timeliness, relevance, evaluator and user characteristics) (Alkin, Daillak, & White, 1979; Chelimsky, 1986; Cousins & Leithwood, 1986; Newman, Brown, & Rivers, 1983; Patton, 1978; Rich, 1977; Weiss, 1981). At the same time, the profession grew to include more and more evaluation conducted by internal evaluators (Mathison, 1991; Sonnichsen, 1999; Torres, Preskill, & Piontek, 1997).

Rosalie T. Torres ● Director of Research and Evaluation, Developmental Studies Center, 2000 Embarcadero, Suite 305, Oakland, CA 94606-5300; Tel: (510) 533-0213; Fax: (510) 464-3670; E-mail: [email protected].

American Journal of Evaluation, Vol. 22, No. 3, 2001, pp. 387–395. ISSN: 1098-2140. Copyright © 2002 by American Evaluation Association. All rights of reproduction in any form reserved.


While a better understanding of the issues surrounding evaluation use was helpful, it did not necessarily translate into significantly enhanced use. As a result, many evaluation researchers turned to exploring the relationship between evaluators and users. An outgrowth of this work has been the development of collaborative and participatory approaches to evaluation (Brandon, 1998; Brunner & Guzman, 1989; Cousins & Earl, 1992, 1995; Cousins & Whitmore, 1998; Fetterman, 1996; Greene, 1987, 1988; O'Sullivan & O'Sullivan, 1998; Patton, 1997). Using these approaches involves including the stakeholders (potential users) in the evaluation's design and data collection activities as much as possible (this often depends on the stakeholders' level of interest and skill, as well as the evaluator's comfort level). Stakeholder involvement in the evaluation's design and implementation is intended to increase: (a) their buy-in to the evaluation, (b) their understanding of the evaluation process, and (c) ultimately, their use of the evaluation's findings. Participatory approaches have been widely credited with increasing the relevance and use of both evaluation processes and findings in a variety of programs and organizations (see, for example, Brett, Hill-Mead, & Wu, 2000; Cousins, Donohue, & Bloom, 1996; Fetterman, 1994; Fitzpatrick, 1998; Huebner, 2000; Mueller, 1998; Ryan & Johnson, 2000; Torres, Stone, Butkus, Hook, Casey, & Arens, 2000).

User participation in an evaluation's design and activities is necessary but not sufficient for the full potential of evaluation to be realized within organizations; that is, for it to facilitate learning and change at the program, department, and organizational levels. In recent years evaluators have focused on understanding the role and practice of evaluation in the ongoing learning of the organization as a whole (see Forss, Cracknell, & Samset, 1994; Owen & Lambert, 1995; Owen & Rogers, 1999; Patton, 1997; Preskill & Torres, 1999, 2000; Russ-Eft & Preskill, 2001; Shulha, 2000; Sonnichsen, 1999; Torres, 1991; Torres, Preskill, & Piontek, 1996; Weiss, 1998). As Mathison (1994) explains, "One-shot program evaluations will not provide the information to address fundamental organizational traits and characteristics which influence all programs" (p. 304).

Organizational learning is a continuous process of growth and improvement that (a) uses information or feedback about both processes and outcomes (i.e., evaluation findings) to make changes; (b) is integrated with work activities and with the organization's infrastructure (e.g., its culture, systems and structures, leadership, and communication mechanisms); and (c) invokes the alignment of values, attitudes, and perceptions among organizational members (Preskill & Torres, 1999; Torres, Preskill, & Piontek, 1996). It involves:

● Establishing a balance between accountability and learning roles for evaluation;
● Integrating the evaluation function and evaluator role within the organization;
● Developing frameworks for relating findings about particular programs and initiatives to broader organizational goals;
● Sustaining a spirit of ongoing inquiry that calls for learning incrementally and iteratively over time;
● Providing time for reflection, examination of underlying assumptions, and dialog among evaluators, program staff, and organizational leaders; and
● Reconsidering traditional evaluator roles and the skills evaluators need.


CHALLENGES TO AN ORGANIZATIONAL LEARNING APPROACH TO EVALUATION

When these circumstances exist, organizations have the opportunity to fully deploy evaluation in service of their missions. Still, however, evaluators and organizations interested in this vision for evaluation face significant challenges, including:

● Accountability-hungry funders and legislators who continue to demand outcomes within short periods of time;
● Employees within organizations who are given little time or support for engaging in reflection and dialog that invites questioning of the underlying assumptions, beliefs, and values of the organization's programs, policies, and practices;
● Leaders who have little experience basing decisions on data and don't know how to incorporate systematically derived findings, along with other forms of information, into their decision-making processes;
● Overworked program staffs that continue to see evaluation as a nonessential, add-on activity for which they have little time;
● The difficulty of locating evaluators who have an interest in, and the ability to implement evaluation as a means for, learning and organizational change;
● Little support for redesigning jobs and/or influencing organizational culture to sustain organizational learning;
● Organization members who may view evaluation as threatening, and remain uncomfortable with group dialog designed to facilitate learning from evaluation; and
● Midlevel employees who seek to initiate evaluation work and encounter difficulty getting upper-management support.

STAGES OF CHANGE TOWARD AN ORGANIZATIONAL LEARNING APPROACH TO EVALUATION

In thinking about how to overcome these obstacles we are reminded that significant shifts in professional practice tend to evolve over time, beginning among a few and growing as the benefits of a particular new approach are sought by increasing numbers. At the risk of oversimplifying the current and near-term future of evaluation practice within organizations, we propose considering these shifts in terms of the stages of change that organizations or systems undertake in fully actualizing a new approach or innovation (see Hall, Loucks, Rutherford, & Newlove, 1975; Lewin, 1952; Rogers, 1995). The stages are: (1) status quo (or, in this case, what might be thought of as more or less traditional evaluation practice); (2) awareness of a need to change and exploration of a new approach to evaluation; (3) transitioning to an organizational learning approach; (4) adoption and implementation of an organizational learning approach; and (5) predominance and refinement of the approach. Movement through these stages is mediated by a variety of factors, only some of which are within the control of those actively seeking or being encouraged to change.

For an increasing number of organizations to make this transition from a more traditional evaluation approach to an organizational learning approach, a number of things need to happen both within and outside of the evaluation profession. In the following sections, we provide (a) an overview of what each stage of change means for evaluation practice, and (b) an outline of some ideas about how to further evaluators' and organizations' movement through the stages. To help describe its essence, we introduce each stage with a Russian proverb (Dubrovin, 1993).

Traditional Evaluation Approach

There will come a time when the seed will sprout. (Do not trouble yourself about future problems and difficulties, but wait till you have to deal with them; then will be the time to worry about them, not now.)

Some evaluators and their clients (internal or external) are practicing what we think of as a traditional evaluation approach; that is, one where the evaluator is the outside expert and there is little stakeholder involvement. In addition to a lack of perceived need (i.e., client satisfaction with current practice), reasons for this include:

● A desire for perceived objectivity;
● Lack of training, skills, and expertise (in collaboration and facilitation) among evaluators;
● Lack of resources for making evaluation work more inclusive and collaborative, particularly with large-scale, multisite evaluations; and
● Lack of awareness among evaluation clients that other approaches are available/appropriate, and could be beneficial in particular ways.

Oftentimes when evaluators and others speak of a traditional evaluation approach, they primarily mean that the evaluation in question uses an experimental or quasi-experimental design and collects quantitative data to which rigorous statistical analyses can be applied. We want to stress that in labeling an approach as traditional, we are not so concerned with the methodology it uses as with the extent to which it is participatory and designed to maximize reflection and dialog about evaluation findings and, where appropriate, near- or long-term action among its stakeholders. In the next stage we describe how some organizations have begun to think about more inclusive and learning-oriented evaluation.

Awareness of Need to Change and Exploration of a New Approach

There is no evil without good. (In every trouble and difficulty there is hope or expectation of an improvement in the circumstances; a misfortune may turn into a benefit.)

For individuals, groups, organizations, and entire professions, change is often precipitated by pain and difficulty. Crises within organizations that stimulate evaluation clients' need for and interest in exploring alternative approaches are well documented in the organizational learning literature (see Argyris, 1992; Argyris & Schon, 1996; Dixon, 1994, 1999; Driscoll & Preskill, 1996; Kim, 1995; Lackey, 2000; Robbins, 2001; Senge, 1990). In some cases traditional, well-funded evaluations have failed to contribute to ongoing learning and to provide adequate explanations of findings. In other cases, where evaluation was not necessarily taking place at all, instances of organizational failure revealed the need for feedback about processes, outcomes, and progress toward overall goals. Some of these organizations, not familiar with evaluation in general, are naturally drawn to a learning-focused approach. In yet other organizations, the need for exploring expanded evaluation approaches is being driven by increasing accountability demands from government and other funders. These organizations know they must comply with the demands, but also want to maximize the utility of their efforts.

As we describe in this article, evaluators are articulating approaches for integrating evaluation with ongoing learning in organizations. By definition, however, the success of this approach is limited to the extent that organization members are not interested in and committed to it. In addition to the circumstances explained immediately above, one way to stimulate organizations' interest and commitment is for their own disciplines (e.g., teaching, educational leadership, nonprofit management, business management, grant making, medicine, nursing) to support and teach about professional practice and leadership that effectively uses evaluation for continuous improvement. It follows that when clients or users, and not evaluators, define evaluation purposes, they do so to obtain maximally useful feedback about the effectiveness of their work. Finally, interest in and exploration of a learning approach to evaluation within organizations is evident in current journal articles, the popularity of trade books, recent American Evaluation Association conference themes, and participation in national and international professional development offerings on these topics.

Transitioning to an Organizational Learning Approach

Every seed knows its time. (One should not be impatient and hasten events; everything will work out after some time, but not immediately.)

There is also evidence that some organizations have moved beyond interest and are transitioning toward an organizational learning approach to evaluation. Perusal of the American Evaluation Association's job announcements occasionally yields position titles and job descriptions highlighting the expectation that evaluation serve an organizational learning function. Participants at professional development workshops on learning-centered evaluation can readily describe what their organizations are doing in this way and why they have come to learn more. Some of their organizations have articulated philosophies and policies about evaluation use; that is, about ongoing learning and continuous improvement based on routine and systematic inquiry.

Yet, to help more organizations transition successfully, there is a need for continued understanding and dialog about how existing tools, methods, and frameworks for learning-centered evaluation operate in practice. Additionally, linkages and mutual learning between the evaluation and organizational development professions can help practitioners in both fields support the cultural shifts and technical assistance needed for organizations to make the transition (The David and Lucile Packard Foundation and The James Irvine Foundation, 2001).

Adoption and Implementation of an Organizational Learning Approach

A drop hollows out a stone. (Persistence will achieve a difficult objective.)


Despite the challenges described earlier, some organizations (including several with which we are involved in our own evaluation practice) have adopted and are successfully implementing organizational learning approaches to evaluation. In these organizations the practices described in this article are routine, and organizational resources have been marshaled or reallocated to assure their continued use. The following catalysts, both from within and outside the profession, can help more organizations adopt and implement an organizational learning approach to evaluation:

● Guidelines developed for the use of evaluation as a tool for organizational effectiveness;
● An increased pool of diverse evaluators able to provide relevant, customized, and reflective evaluation services useful for increasing organizational effectiveness;
● Increased professional development opportunities, including graduate courses, workshops, seminars, internships, and apprenticeships that provide training not only in basic evaluation but also in interpersonal communication, team development, group process, consulting, and organizational behavior and change;
● Funders and legislators who accept the realities of incremental change and accept near-term indicators of progress toward longer-term outcomes, thereby encouraging practitioners to focus on understanding the linkages between program activities and intended experiences for program participants, and on modifying them accordingly;
● Organizational leaders who embrace the role and importance of evaluation in strategic decision making, and are available to participate in inquiry efforts;
● Organizations that routinely provide various kinds of learning opportunities to help employees participate successfully in inquiry processes (e.g., team building, conflict management, deliberation); and
● Organizations that operate from a systems orientation with the infrastructure for cross-functional learning (The David and Lucile Packard Foundation and The James Irvine Foundation, 2001).

Predominance and Refinement of Organizational Learning Approach

The appetite comes during a meal. (Desire or facility increases as an activity proceeds.)

Predominance and refinement of this approach will naturally occur when, as part of their ongoing work, evaluators and clients collaboratively reflect on the inquiry processes themselves. In terms of a wider conversation, through conference and other activities in their professional organizations, evaluators have numerous venues for thinking, writing, and dialoging about both the theoretical and the practical sides of their work. Increasingly we are inviting clients and users into this conversation, and we as evaluators are being invited into the professional dialog within the disciplines of our clients and users. This is a trend we should continue. We look forward to where these conversations will lead us as we experiment with different strategies for realizing the vision described here.

CONCLUSIONS

In considering a learning-oriented approach within organizations, we want to emphasize that not all evaluators will be comfortable with the more actively involved change agent role that it requires. This approach blends organizational development with evaluation, and evaluation with program work. Nor will all organizations have the resources or desire to undertake it. And again, the approach may be less appropriate for large-scale, policy-oriented evaluation where the utmost rigor and perceived objectivity are viewed as essential by particular evaluators and/or particular clients. We do believe, however, that many evaluation efforts, inside and outside of organizations, can be enhanced by increasing their connection to the decision-making context within which the evaluation is being conducted, and by involving stakeholders in the interpretation and meaning of findings and the development of next steps.

We are confident that by taking a learning approach within organizations, in particular, evaluation can significantly support efforts to learn, grow, and take appropriate action through: (a) a focus on key issues and concerns; (b) dialog and reflection about how to improve, which includes everyone from implementers through senior-level decision makers, and considers underlying assumptions, values, and beliefs; (c) the courage to face what may feel like harsh realities; and (d) incisive and realistic assessments of how we can stretch ourselves to move forward based upon what we know about our past, our current situation, and the likely future we are facing.

REFERENCES

Alkin, M. C. (1980). Naturalistic study of evaluation utilization. New Directions for Program Evaluation, 5, 19–27.
Alkin, M., Daillak, R., & White, P. (1979). Using evaluations: Does evaluation make a difference? Beverly Hills, CA: Sage.
Argyris, C. (1992). On organizational learning. Cambridge, MA: Blackwell.
Argyris, C., & Schon, D. A. (1996). Organizational learning II. Reading, MA: Addison-Wesley.
Brandon, P. R. (1998). Stakeholder participation for the purpose of helping ensure evaluation validity: Bridging the gap between collaborative and non-collaborative evaluations. American Journal of Evaluation, 19, 325–337.
Brett, B., Hill-Mead, L., & Wu, S. (2000). Perspectives on evaluation use and demand by users: The case of City Year. New Directions for Program Evaluation, 88, 71–83.
Brunner, I., & Guzman, A. (1989). Participatory evaluation: A tool to assess projects and empower people. New Directions for Program Evaluation, 42, 9–17.
Chelimsky, E. (1986). What have we learned about the politics of program evaluation? Evaluation Practice, 8, 5–21.
Cousins, J. B., Donohue, J. J., & Bloom, G. A. (1996). Collaborative evaluation in North America: Evaluators' self-reported opinions, practices, and consequences. Evaluation Practice, 17, 207–226.
Cousins, J. B., & Earl, L. E. (1992). The case for participatory evaluation. Educational Evaluation and Policy Analysis, 14, 397–418.
Cousins, J. B., & Earl, L. E. (1995). Participatory evaluation in education. London: The Falmer Press.
Cousins, J. B., & Leithwood, K. A. (1986). Current empirical research on evaluation utilization. Review of Educational Research, 56, 331–364.
Cousins, J. B., & Whitmore, E. (1998). Framing participatory evaluation. New Directions for Evaluation, 80, 5–23.
Daillak, R. (1982). What is evaluation utilization? Studies in Educational Evaluation, 8, 157–162.
The David and Lucile Packard Foundation and The James Irvine Foundation. (2001). Improving the practice and use of evaluation to advance the mission of philanthropies and non-profits (working paper). Los Altos, CA, and San Francisco, CA: Authors.
Dixon, N. (1994). The organizational learning cycle. London: McGraw-Hill.
Dixon, N. (1999). Learning across organizational boundaries: A case study of Canadian museums. In M. Easterby-Smith, J. Burgoyne, & L. Araujo (Eds.), Organizational learning and the learning organization (pp. 115–129). London: Sage.
Driscoll, M., & Preskill, H. (1996). The journey toward becoming a learning organization—Are we almost there? In K. Watkins & V. Marsick (Eds.), Creating the learning organization, Volume 1 (pp. 67–80). Alexandria, VA: American Society for Training and Development.
Dubrovin, M. (1993). Picture book of idioms in five languages. Moscow: Prosvesheniye.
Fetterman, D. M. (1994). Steps of empowerment evaluation: From California to Cape Town. Evaluation and Program Planning, 17, 305–313.
Fetterman, D. M. (1996). Empowerment evaluation: An introduction to theory and practice. In D. M. Fetterman, S. J. Kaftarian, & A. Wandersman (Eds.), Empowerment evaluation: Knowledge and tools for self-assessment and accountability (pp. 3–46). Thousand Oaks, CA: Sage.
Fitzpatrick, J. (1998). Conversation with Marsha Mueller. American Journal of Evaluation, 19, 87–99.
Forss, K., Cracknell, B., & Samset, K. (1994). Can evaluation help an organization to learn? Evaluation Review, 18, 574–591.
Greene, J. C. (1987). Stakeholder participation in evaluation design: Is it worth the effort? Evaluation and Program Planning, 10, 375–394.
Greene, J. C. (1988). Stakeholder participation and utilization in program evaluation. Evaluation Review, 12, 91–116.
Hall, G. E., Loucks, S. F., Rutherford, W. L., & Newlove, B. W. (1975). Levels of use of the innovation: A framework for analyzing innovation adoption. Journal of Teacher Education, 26, 5–9.
Huebner, T. (2000). Theory-based evaluation: Gaining a shared understanding between school staff and evaluators. New Directions for Program Evaluation, 87, 79–89.
Johnson, R. B. (1998). Toward a theoretical model of evaluation utilization. Evaluation and Program Planning, 21, 93–110.
Kim, D. H. (1995). Managerial practice fields: Infrastructures of a learning organization. In S. Chawla & J. Renesch (Eds.), Learning organizations: Developing cultures for tomorrow's workplace (pp. 351–363). Portland, OR: Productivity Press.
King, J. A. (1988). Research on evaluation use and its implications for evaluation research and practice. Studies in Educational Evaluation, 14, 285–299.
Lackey, R. (2000). The role of the chief learning officer: Implications for theory and practice. Unpublished doctoral dissertation, University of New Mexico.
Leviton, L. C., & Hughes, E. F. X. (1981). Research on the utilization of evaluations. Evaluation Review, 5, 525–548.
Lewin, K. (1952). Field theory in social science. London: Tavistock.
Mathison, S. (1991). Role conflicts for internal evaluators. Evaluation and Program Planning, 14(3), 173–179.
Mathison, S. (1994). Rethinking the evaluator role: Partnerships between organizations and evaluators. Evaluation and Program Planning, 17(3), 299–304.
Mueller, M. (1998). The evaluation of Minnesota's Early Childhood Family Education Program. American Journal of Evaluation, 19, 80–86.
Newman, D. L., Brown, R. D., & Rivers, L. S. (1983). Locus of control and evaluation use: Does sense of control affect information needs and decision making? Studies in Educational Evaluation, 9, 77–88.
O'Sullivan, R. G., & O'Sullivan, J. M. (1998). Evaluation voices: Promoting evaluation from within programs through collaboration. Evaluation and Program Planning, 21, 21–29.
Owen, J. M., & Lambert, F. C. (1995). Roles for evaluation in learning organizations. Evaluation, 1, 259–273.
Owen, J. M., & Rogers, P. (1999). Program evaluation: Forms and approaches. Thousand Oaks, CA: Sage.
Patton, M. Q. (1978). Utilization-focused evaluation. Beverly Hills, CA: Sage.
Patton, M. Q. (1986). Utilization-focused evaluation (2nd ed.). Newbury Park, CA: Sage.
Patton, M. Q. (1997). Utilization-focused evaluation: The new century text. Thousand Oaks, CA: Sage.
Preskill, H., & Caracelli, V. (1997). Current and developing conceptions of use: Evaluation use TIG survey results. Evaluation Practice, 18, 209–225.
Preskill, H., & Torres, R. T. (1999). Evaluative inquiry for learning in organizations. Thousand Oaks, CA: Sage.
Preskill, H., & Torres, R. T. (2000). The learning dimension of evaluation use. New Directions for Program Evaluation, 88, 25–37.
Rich, R. F. (1977). Uses of social science information by federal bureaucrats: Knowledge for action versus knowledge for understanding. In C. Weiss (Ed.), Using social research in public policy making. Lexington, MA: Lexington Books.
Robbins, S. A. (2001). How organizations learn from experience: An empirical exploration of organizational intelligence and learning. Unpublished doctoral dissertation, University of New Mexico.
Rogers, E. (1995). Diffusion of innovations (4th ed.). New York: Free Press.
Russ-Eft, D., & Preskill, H. (2001). Evaluation in organizations: A systematic approach to enhancing learning, performance, and change. Boston, MA: Perseus Books.
Ryan, K. E., & Johnson, T. D. (2000). Democratizing evaluation: Meanings and methods from practice. New Directions for Program Evaluation, 85, 39–50.
Senge, P. M. (1990). The fifth discipline. New York: Doubleday.
Shulha, L. M. (2000). Evaluative inquiry in university-school professional learning partnerships. New Directions for Program Evaluation, 88, 39–53.
Shulha, L. M., & Cousins, B. (1997). Evaluation use: Theory, research, and practice since 1986. Evaluation Practice, 18, 195–208.
Sonnichsen, R. (1999). High impact internal evaluation: A practitioner's guide to evaluating and consulting inside organizations. Thousand Oaks, CA: Sage.
Torres, R. T. (1991). Improving the quality of internal evaluation: The consultant-mediator approach. Evaluation and Program Planning, 14, 189–198.
Torres, R. T., Preskill, H., & Piontek, M. (1996). Evaluation strategies for communicating and reporting: Enhancing learning in organizations. Thousand Oaks, CA: Sage.
Torres, R. T., Preskill, H. S., & Piontek, M. E. (1997). Communicating and reporting: Practices and concerns of internal and external evaluators. Evaluation Practice, 18, 105–125.
Torres, R. T., Stone, S. P., Butkus, D., Hook, B., Casey, J., & Arens, S. A. (2000). Dialogue and reflection in a collaborative evaluation: Stakeholder and evaluator voices. In K. Ryan & L. DeStefano (Eds.), Evaluation as a democratic process: Promoting inclusion, dialogue, and deliberation. New Directions for Evaluation, Number 85 (pp. 27–38). San Francisco: Jossey-Bass.
Weiss, C. H. (1981). Measuring the use of evaluation. In J. Ciaro (Ed.), Utilizing evaluations (pp. 17–33). Beverly Hills, CA: Sage.
Weiss, C. H. (1987). Where politics and evaluation research meet. In D. J. Palumbo (Ed.), The politics of program evaluation (pp. 47–72). Newbury Park, CA: Sage.
Weiss, C. H. (1998). Evaluation (2nd ed.). Upper Saddle River, NJ: Prentice-Hall.
