Applied research


APPLIED SCIENCES & APPLIED RESEARCH


The Harvard School of Engineering and Applied Sciences (SEAS)

Figure: SEAS Engineering and Applied Science Disciplines for the 21st Century (in the wheel) and some of the collaborative areas amongst them (outside the wheel).

Downloaded from: http://www.seas.harvard.edu/about-seas/facts-history/seastoday….. 20/9/2012

APPLIED RESEARCH

"The hardest problems of pure and applied science can only be solved by the open collaboration of the world-wide scientific community."

Kenneth G. Wilson

Downloaded from: http://www.longeaton.derbyshire.sch.uk/learning/curriculum_areas/sciences/btec_applied_science….. 20/9/2012

Scope and Importance of Environmental Studies

Because environmental studies is multidisciplinary in nature, it is considered a subject with great scope.

Environmental studies is no longer limited to issues of sanitation and health; it is now concerned with pollution control, biodiversity conservation, waste management, and the conservation of natural resources.

Downloaded from: http://brawin.blogspot.com/2011/06/scope-and-importance-ofenvironmental.html….. 20/9/2012

RESEARCH:

Research is an activity conducted according to scientific principles and methods, carried out systematically to obtain information, data, and findings related to the understanding and/or testing of a branch of science and technology.

Law of the Republic of Indonesia (UU RI) No. 12 of 2012 on Higher Education, Article 1 point 10.

Downloaded from: ….. 16/9/2012

CLUSTERS OF SCIENCE AND TECHNOLOGY (IPTEK)

A cluster of science and technology is a collection of a number of trees, branches, and twigs of knowledge, arranged systematically (UU RI No. 12 of 2012, Article 10(1)).

The clusters of science and technology consist of:

1. Religious sciences

2. Humanities

3. Social sciences

4. Natural sciences

5. Formal sciences

6. Applied sciences

(UU RI No. 12 of 2012, Article 10(2))

The Applied Sciences Cluster

The applied sciences cluster comprises the fields of science and technology that study and develop the application of science to human life, including: agriculture, architecture and planning, business, education, engineering, forestry and the environment, family and consumer sciences, health, sports, journalism, mass media and communication, law, library and museum studies, military science, public administration, social work, and transportation.

Law of the Republic of Indonesia No. 12 of 2012, Elucidation of Article 10 Paragraph 2 Letter f.

APPLIED RESEARCH

Applied research is a form of systematic inquiry involving the practical application of science. It accesses and uses some part of the research communities' (the academy's) accumulated theories, knowledge, methods, and techniques, for a specific, often state-, business-, or client-driven purpose.

Applied research deals with solving practical problems and generally employs empirical methodologies. Because applied research resides in the messy real world, strict research protocols may need to be relaxed.

For example, it may be impossible to use a random sample. Thus, transparency in the methodology is crucial. Implications for interpretation of results brought about by relaxing an otherwise strict canon of methodology should also be considered.

Downloaded from: http://en.wikipedia.org/wiki/Applied_research….. 16/9/2012

Three forms of research

The Frascati Manual outlines three forms of research: basic research, applied research, and experimental development:

1. Basic research is experimental or theoretical work undertaken primarily to acquire new knowledge of the underlying foundation of phenomena and observable facts, without any particular application or use in view.

2. Applied research is also original investigation undertaken in order to acquire new knowledge. It is, however, directed primarily towards a specific practical aim or objective.

3. Experimental development is systematic work, drawing on existing knowledge gained from research and/or practical experience, which is directed to producing new materials, products or devices, to installing new processes, systems and services, or to improving substantially those already produced or installed.

Downloaded from:

APPLIED SCIENCE

Applied science is the application of human knowledge to build or design useful things.

Examples include testing a theoretical model through the use of formal science or solving a practical problem through the use of natural science.

Fields of engineering are closely related to applied sciences.

Applied science is important for technology development. Its use in industrial settings is usually referred to as research and development (R&D).

Applied science differs from fundamental science, which seeks to describe the most basic objects and forces, with less emphasis on practical applications. Fields of applied science draw on both the biological and the physical sciences.

Downloaded from: http://en.wikipedia.org/wiki/Applied_science ….. 16/9/2012

APPLIED RESEARCH

Applied research refers to scientific study and research that seeks to solve practical problems.

Applied research is used to find solutions to everyday problems, cure illness, and develop innovative technologies.

Psychologists working in human factors or industrial/organizational fields often do this type of research.

Downloaded from: http://psychology.about.com/od/aindex/g/appres.htm ….. 16/9/2012

WHAT IS APPLIED RESEARCH?

Applied research is designed to solve practical problems of the modern world, rather than to acquire knowledge for knowledge's sake. One might say that the goal of the applied scientist is to improve the human condition.

For example, applied research investigates ways to:

1. Improve agricultural productivity

2. Treat or cure a particular disease

3. Improve energy efficiency in homes, offices, or modes of transportation

Some scientists feel that the time has come for a shift in emphasis away from purely basic research and toward applied science. This trend, they feel, is necessitated by the problems resulting from global overpopulation, pollution, and the overuse of the earth's natural resources.

Downloaded from: http://www.lbl.gov/Education/ELSI/researchmain.html ….. 16/9/2012

APPLIED RESEARCH

Neuman (2000) defines applied research as "research that attempts to solve a concrete problem or address a specific policy question and that has a direct, practical application."

Some examples of applied research: action research, social impact assessment, and evaluation research.

Downloaded from: ….. 16/9/2012

APPLIED RESEARCH

Answers practical questions that are specific to a given place and time

May be exploratory or descriptive in nature

Involves accurate measurement and describes the relationships among the variables of the phenomenon under study

APPLIED RESEARCH

May be conducted by academic institutions or by industry and business

The research is directed "toward discovering new scientific knowledge that has specific commercial objectives with respect to products, processes, or services."

APPLIED RESEARCH

• Examples of applied research questions:

– How can rice crops in Indonesia be protected from planthopper pest attacks?

– Which vaccine is the most effective and efficient against influenza?

– How can the apple orchards in Batu be protected from the impacts of global climate change?

EVALUATION RESEARCH

Evaluation is a methodological field closely related to social research, yet still distinguishable from it.

Evaluation utilizes many of the same methodologies used in traditional social research, but because evaluation takes place within a political and organizational context, it requires group skills, management ability, political dexterity, sensitivity to multiple stakeholders and other skills that social research in general does not rely on as much.

Downloaded from: http://www.socialresearchmethods.net/kb/intreval.php ….. 16/9/2012

Definitions of Evaluation

The most frequently used definition:

Evaluation = the systematic assessment of the worth or merit of some object

This definition is hardly perfect. There are many types of evaluations that do not necessarily result in an assessment of worth or merit -- descriptive studies, implementation analyses, and formative evaluations, to name a few.

Better, perhaps, is a definition that emphasizes the information-processing and feedback functions of evaluation:

Evaluation = the acquisition and assessment of information to provide useful feedback about some object.

The Goals of Evaluation

The generic goal of most evaluations is to provide "useful feedback" to a variety of audiences including sponsors, donors, client-groups, administrators, staff, and other relevant constituencies.

Most often, feedback is perceived as "useful" if it aids in decision-making. But the relationship between an evaluation and its impact is not a simple one -- studies that seem critical sometimes fail to influence short-term decisions, and studies that initially seem to have no influence can have a delayed impact when more congenial conditions arise.

Despite this, there is broad consensus that the major goal of evaluation should be to influence decision-making or policy formulation through the provision of empirically-driven feedback.

Evaluation Strategies

Four major groups of evaluation strategies are:

Scientific-experimental models are probably the most historically dominant evaluation strategies. Taking their values and methods from the sciences -- especially the social sciences -- they prioritize impartiality, accuracy, objectivity, and the validity of the information generated.

Models included in the "scientific-experimental" group:

1. Experimental and quasi-experimental designs;

2. Objectives-based research that comes from education;

3. Econometrically oriented perspectives, including cost-effectiveness and cost-benefit analysis; and

4. Theory-driven evaluation.

Evaluation Strategies

The management-oriented systems models. Two of the most common of these are PERT, the Program Evaluation and Review Technique, and CPM, the Critical Path Method.

Two management-oriented systems models were originated by evaluators:

1. the UTOS model, where U stands for Units, T for Treatments, O for Observations, and S for Settings; and

2. the CIPP model, where C stands for Context, I for Input, the first P for Process, and the second P for Product.

These management-oriented systems models emphasize comprehensiveness in evaluation, placing evaluation within a larger framework of organizational activities.

Downloaded from: ….. 16/9/2012
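CPM's core computation is finding the longest-duration path through a network of dependent activities; activities on that path have zero slack. The following is a minimal sketch in Python; the activities and durations are invented for illustration, not taken from the source text.

# Minimal Critical Path Method (CPM) sketch: the critical path is the
# longest-duration path through a directed acyclic graph of activities.
# Activities, durations, and dependencies below are hypothetical.

durations = {"A": 3, "B": 5, "C": 2, "D": 4}   # activity -> duration (days)
depends_on = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

earliest_finish = {}  # memoized earliest finish time per activity

def finish(activity):
    """Earliest finish = latest finish among prerequisites + own duration."""
    if activity not in earliest_finish:
        start = max((finish(p) for p in depends_on[activity]), default=0)
        earliest_finish[activity] = start + durations[activity]
    return earliest_finish[activity]

project_length = max(finish(a) for a in durations)
print("Minimum project duration:", project_length)  # here 3 + 5 + 4 = 12

# In this toy network the critical path is A -> B -> D: delaying any of
# those activities delays the whole project, while C has slack.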

Evaluation Strategies

The qualitative/anthropological models.

They emphasize the importance of observation, the need to retain the phenomenological quality of the evaluation context, and the value of subjective human interpretation in the evaluation process.

Included in this category are:

1. the approaches known in evaluation as naturalistic or 'Fourth Generation' evaluation;

2. the various qualitative schools;

3. critical theory and art criticism approaches; and

4. the 'grounded theory' approach.

Downloaded from: ….. 16/9/2012

Evaluation Strategies

The participant-oriented models.

As the term suggests, they emphasize the central importance of the evaluation participants, especially clients and users of the program or technology.

Client-centered and stakeholder approaches are examples of participant-oriented models, as are consumer-oriented evaluation systems.

Downloaded from: ….. 16/9/2012

Types of Evaluation

Formative evaluation types:

1. Needs assessment determines who needs the program, how great the need is, and what might work to meet the need

2. Evaluability assessment determines whether an evaluation is feasible and how stakeholders can help shape its usefulness

3. Structured conceptualization helps stakeholders define the program or technology, the target population, and the possible outcomes

4. Implementation evaluation monitors the fidelity of the program or technology delivery

5. Process evaluation investigates the process of delivering the program or technology, including alternative delivery procedures

Downloaded from: ….. 16/9/2012

Types of Evaluation

Summative evaluation:

1. Outcome evaluations investigate whether the program or technology caused demonstrable effects on specifically defined target outcomes

2. Impact evaluation is broader and assesses the overall or net effects -- intended or unintended -- of the program or technology as a whole

3. Cost-effectiveness and cost-benefit analysis address questions of efficiency by standardizing outcomes in terms of their dollar costs and values

4. Secondary analysis reexamines existing data to address new questions or use methods not previously employed

5. Meta-analysis integrates the outcome estimates from multiple studies to arrive at an overall or summary judgment on an evaluation question (see the sketch below).

Downloaded from: ….. 16/9/2012
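As a concrete illustration of the meta-analysis idea in point 5, one common approach is fixed-effect inverse-variance pooling: each study's effect estimate is weighted by the inverse of its variance. A minimal sketch, with invented study results (the numbers are assumptions, not data from the source):

# Illustrative fixed-effect meta-analysis: pool effect estimates from
# several studies, weighting each by the inverse of its variance.
import math

studies = [  # (effect estimate, standard error) per study, hypothetical
    (0.30, 0.10),
    (0.45, 0.15),
    (0.20, 0.08),
]

weights = [1 / se**2 for _, se in studies]  # inverse-variance weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI half-width)")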

Evaluation Questions and Methods

In FORMATIVE RESEARCH, the major questions and methodologies are:

What is the definition and scope of the problem or issue, or what's the question?

1. Formulating and conceptualizing methods might be used including brainstorming, focus groups, nominal group techniques, Delphi methods, brainwriting, stakeholder analysis, synectics, lateral thinking, input-output analysis, and concept mapping.

Where is the problem and how big or serious is it?

1. The most common method used here is "needs assessment" which can include: analysis of existing data sources, and the use of sample surveys, interviews of constituent populations, qualitative research, expert testimony, and focus groups.

Downloaded from: ….. 16/9/2012

Evaluation Questions and Methods

In FORMATIVE RESEARCH, the major questions and methodologies are:

How should the program or technology be delivered to address the problem?

1. Some of the methods already listed apply here, as do detailing methodologies like simulation techniques, or multivariate methods like multiattribute utility theory or exploratory causal modeling; decision-making methods; and project planning and implementation methods like flow charting, PERT/CPM, and project scheduling.

How well is the program or technology delivered?

1. Qualitative and quantitative monitoring techniques, the use of management information systems, and implementation assessment would be appropriate methodologies here.

Downloaded from:

The questions and methods under SUMMATIVE EVALUATION:

What type of evaluation is feasible?

1. Evaluability assessment can be used here, as well as standard approaches for selecting an appropriate evaluation design.

What was the effectiveness of the program or technology?

1. One would choose from observational and correlational methods for demonstrating whether desired effects occurred, and quasi-experimental and experimental designs for determining whether observed effects can reasonably be attributed to the intervention and not to other sources.

What is the net impact of the program?

1. Econometric methods for assessing cost effectiveness and cost/benefits would apply here, along with qualitative methods that enable us to summarize the full range of intended and unintended impacts.

Downloaded from: ….. 16/9/2012

The Planning-Evaluation Cycle

The planning process could involve any or all of these stages:

1. the formulation of the problem, issue, or concern;

2. the broad conceptualization of the major alternatives that might be considered;

3. the detailing of these alternatives and their potential implications;

4. the evaluation of the alternatives and the selection of the best one; and

5. the implementation of the selected alternative.

Downloaded from: ….. 16/9/2012

External Validity – Evaluation Research

External validity is related to generalizing.

Validity refers to the approximate truth of propositions, inferences, or conclusions.

External validity refers to the approximate truth of conclusions that involve generalizations.

External validity is the degree to which the conclusions in your study would hold for other persons in other places and at other times.

Downloaded from: ….. 16/9/2012

Improving External Validity

How can we improve external validity?

1. The sampling model suggests that you do a good job of drawing a sample from a population. For instance, you should use random selection, if possible, rather than a nonrandom procedure. And, once selected, you should try to assure that the respondents participate in your study and that you keep your dropout rates low (see the sketch below).

2. Use the theory of proximal similarity more effectively. How? Perhaps you could do a better job of describing the ways your contexts and others differ, providing lots of data about the degree of similarity between various groups of people, places, and even times. You might even be able to map out the degree of proximal similarity among various contexts with a methodology like concept mapping.

Perhaps the best approach to criticisms of generalizations is simply to show them that they're wrong -- do your study in a variety of places, with different people and at different times. The external validity (ability to generalize) will be stronger the more you replicate your study.

Downloaded from: ….. 16/9/2012
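The sampling-model advice above (equal-probability random selection from a defined population) can be made concrete in a few lines; the sampling frame and sample size here are hypothetical:

# Sketch of the sampling model: draw a simple random sample from a
# sampling frame instead of taking a convenience sample.
import random

frame = [f"respondent_{i}" for i in range(1, 1001)]  # hypothetical frame

random.seed(42)                        # for a reproducible example
sample = random.sample(frame, k=100)   # equal-probability, no repeats

# A convenience sample, by contrast, might just take whoever is easiest
# to reach, e.g. frame[:100], which risks systematic bias.
print(sample[:5])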

THE PROXIMAL SIMILARITY MODEL

'Proximal' means 'nearby' and 'similarity' means 'similarity'.

The term proximal similarity was suggested by Donald T. Campbell as an appropriate relabeling of the term external validity.

Under this model, we begin by thinking about different generalizability contexts and developing a theory about which contexts are more like our study and which are less so.

For instance, we might imagine several settings that have people who are more similar to the people in our study or people who are less similar.

Downloaded from: ….. 16/9/2012

Sampling Model

In the sampling model, you start by identifying the population you would like to generalize to.

Then, you draw a fair sample from that population and conduct your research with the sample.

Finally, because the sample is representative of the population, you can automatically generalize your results back to the population.

There are several problems with this approach.

1. Perhaps you don't know at the time of your study who you might ultimately like to generalize to.

2. You may not be easily able to draw a fair or representative sample.

3. It's impossible to sample across all times that you might like to generalize to (like next year).

Downloaded from: http://www.socialresearchmethods.net/kb/external.php ….. 16/9/2012

Measurement

Measurement is the process of observing and recording the observations that are collected as part of a research effort.

1. You have to understand the fundamental ideas involved in measuring. Here we consider two major measurement concepts. Levels of measurement explains the meaning of the four major levels of measurement: nominal, ordinal, interval and ratio. Then we move on to the reliability of measurement, including consideration of true score theory and a variety of reliability estimators (see the sketch below).

2. You have to understand the different types of measures that you might use in social research. We consider four broad categories of measurements:

1. Survey research includes the design and implementation of interviews and questionnaires.

2. Scaling involves consideration of the major methods of developing and implementing a scale.

3. Qualitative research provides an overview of the broad range of non-numerical measurement approaches.

4. Unobtrusive measures presents a variety of measurement methods that don't intrude on or interfere with the context of the research.

Downloaded from: ….. 16/9/2012
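One widely used reliability estimator mentioned above is Cronbach's alpha, defined for k items as alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch with an invented response matrix (rows are respondents, columns are items):

# Cronbach's alpha from a small hypothetical item-response matrix.
from statistics import pvariance

scores = [  # 5 respondents x 3 items, invented for illustration
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
    [4, 4, 5],
]

k = len(scores[0])                                   # number of items
items = list(zip(*scores))                           # column-wise item scores
item_var = sum(pvariance(col) for col in items)      # sum of item variances
total_var = pvariance([sum(row) for row in scores])  # variance of total scores

alpha = k / (k - 1) * (1 - item_var / total_var)
print(f"Cronbach's alpha: {alpha:.3f}")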

Survey Research

Survey research is one of the most important areas of measurement in applied social research.

The broad area of survey research encompasses any measurement procedures that involve asking questions of respondents.

A "survey" can be anything from a short paper-and-pencil feedback form to an intensive one-on-one in-depth interview.

Types of surveys are divided into two broad areas: questionnaires and interviews.

Downloaded from: ….. 16/9/2012

Interviews

Interviews are a far more personal form of research than questionnaires.

In the personal interview, the interviewer works directly with the respondent. Unlike with mail surveys, the interviewer has the opportunity to probe or ask follow-up questions.

Interviews are generally easier for the respondent, especially if what is sought is opinions or impressions.

Interviews can be very time consuming and they are resource intensive.

The interviewer is considered a part of the measurement instrument and interviewers have to be well trained in how to respond to any contingency.

Downloaded from: ….. 16/9/2012

IMPACT ASSESSMENT RESEARCH

The Impact Assessment Research Centre (IARC) at the University of Manchester aims to promote knowledge and practice of impact assessment.

The increasing interest in evidence-based policy-making has raised new challenges and debates among impact assessment researchers and practitioners.

By encouraging an integrated approach to impact assessment, the IARC seeks to strengthen the linkages between different impact assessment methodologies and practices.

The work of IARC is multidisciplinary, and recognises that sustainable development can only be achieved on the basis of a balanced, context and time specific assessment of the economic, social and environmental impacts of policies, programmes and projects.

Downloaded from: http://www.sed.manchester.ac.uk/research/iarc/ ….. 16/9/2012

The Impact Assessment Research Centre (IARC)

The Impact Assessment Research Centre (IARC) in IDPM specialises in the integrated assessment of the economic, social and environmental impacts on sustainable development of national, regional and international policies.

Its current research programme includes sustainability impact assessment (SIA) of global and regional trade agreements, the effectiveness of national sustainable development strategies (NSDS), and regulatory impact assessment (RIA) of draft legislation and other policy measures.

Downloaded from: pdf.usaid.gov/pdf_docs/PNADN201.pdf ….. 16/9/2012

COMMON PROBLEMS IN IMPACT ASSESSMENT RESEARCH

Introduction

Doing an impact assessment of a private sector development (PSD) program is inherently challenging. Doing it the “right” way—such that it satisfies minimally acceptable methodological standards —is more challenging yet.

During the course of planning and implementing an impact assessment, it is not uncommon for researchers to confront any number of problems that have serious implications for impact assessment methodology and, consequently, the validity of its findings.

The impact assessment problems discussed in this paper include: timing, spillover effects, selection bias, capability of local research partners, and unanticipated external factors, such as climatic disasters.

Downloaded from: pdf.usaid.gov/pdf_docs/PNADN201.pdf ….. 16/9/2012

COMMON PROBLEMS IN IMPACT ASSESSMENT RESEARCH

Timing

The timing of the impact assessment may seriously affect the validity of its findings.

Ideally, a set-aside for an impact assessment is incorporated into the original program budget, including funding for a technical expert to set up the impact assessment early in the program cycle. More commonly, however, the decision to do an impact assessment occurs after the program is already underway. This can cause a number of problems. To begin with, the baseline may come too late to capture impacts that have already occurred, resulting in an understatement of actual program impacts. The longer the time lag between program launch and the baseline research, the greater the probability that the impact assessment fails to capture certain program impacts.

Even more striking examples of the problems resulting from delaying the start of research are provided by cases in which the impact assessment is done either near the end or after the end of a program. In these cases, there is no possibility of doing a baseline study, or, indeed, of getting any longitudinal data. Everything depends on a onetime set of research activities and often entails a heavy reliance on retrospective questions.

Downloaded from: pdf.usaid.gov/pdf_docs/PNADN201.pdf ….. 16/9/2012

COMMON PROBLEMS IN IMPACT ASSESSMENT RESEARCH

Spillover Effects

A second common problem occurs when program benefits spill over to non-program participants.

An example is the recently completed impact assessment of the Cluster Access to Business Services (CABS) program in Azerbaijan, which seeks to "improve profitability for clusters of rural poor and women micro-entrepreneurs by increasing access to a network of trained veterinary and production advice service providers . . . ."

The baseline study, conducted a year after program launch, showed significantly higher net profits for the treatment-group veterinarians, but this difference had disappeared by the time the follow-up study took place.

Downloaded from: pdf.usaid.gov/pdf_docs/PNADN201.pdf ….. 16/9/2012

COMMON PROBLEMS IN IMPACT ASSESSMENT RESEARCH

Selection Bias

One of the greatest challenges in doing a high quality impact assessment is identifying statistically valid treatment and control groups.

The best method of group selection is the experimental method in which membership in the treatment and control groups is determined via random assignment.

Where experimental methods are not feasible, quasi-experimental methods are a second-best alternative.

Downloaded from: pdf.usaid.gov/pdf_docs/PNADN201.pdf ….. 16/9/2012

COMMON PROBLEMS IN IMPACT ASSESSMENT RESEARCH

Ensuring Good Performance by the Local Research Partner

Although it is not often emphasized, selecting the local research partner is one of the most important steps in the impact assessment process. Most developing countries have a variety of consulting firms, marketing research firms, research institutes, or universities with experience in local field research.

The capabilities of these local researchers, however, can vary considerably. A bad selection can result in higher costs; missed deadlines; greater frustration; poorer quality of work; strained relations with the program, program partners, and donors; questionable results; and, in extreme cases, failure of the research.

Downloaded from: pdf.usaid.gov/pdf_docs/PNADN201.pdf ….. 16/9/2012

COMMON PROBLEMS IN IMPACT ASSESSMENT RESEARCH

Unanticipated External Events

Even if an impact assessment is well planned, the methodology is sound, and the local research partner is competent, the assessment may encounter factors outside the project that threaten or even wipe out the entire study.

One example is the impact assessment undertaken of the craft exporter project in Guatemala. In this case, the baseline research was successfully completed in 2003. The baseline survey included a sample of 1,529 producers of textile, ceramic, wood and leather goods, of which 314 were affiliated with the project.

The analysis in 2006, however, was based on 56 affiliated producers and 105 non-affiliated textile producers, who did not present the same demographic profile as the original textile producers.

Downloaded from: pdf.usaid.gov/pdf_docs/PNADN201.pdf ….. 16/9/2012

Social Impact Assessment tools and methods

Analytical tools

1. STAKEHOLDER ANALYSIS is an entry point to SIA and participatory work. It addresses strategic questions, e.g. who are the key stakeholders? what are their interests in the project or policy? what are the power differentials between them? what relative influence do they have on the operation? This information helps to identify institutions and relations which, if ignored, can have negative influence on proposals or, if considered, can be built upon to strengthen them.

2. GENDER ANALYSIS focuses on understanding and documenting the differences in gender roles, activities, needs and opportunities in a given context. It highlights the different roles and behaviour of men and women. These attributes vary across cultures, class, ethnicity, income, education, and time; and so gender analysis does not treat women as a homogeneous group.

3. SECONDARY DATA REVIEW of information from previously conducted work is an inexpensive, easy way to narrow the focus of a social assessment, to identify experts and institutions that are familiar with the development context, and to establish a relevant framework and key social variables in advance.

Downloaded from: http://www.unep.ch/etu/publications/EIA_2ed/EIA_E_top13_hd1.PDF ….. 16/9/2012

Social Impact Assessment tools and methods

Community-based methods

1. Participatory Rural Appraisal (PRA) covers a family of participatory approaches and methods which emphasise local knowledge and action. It uses group animation and exercises to help stakeholders share information and make their own appraisals and plans. Originally developed for use in rural areas, PRA has been employed successfully in a variety of settings to enable local people to work together to plan community-appropriate developments.

2. SARAR is an acronym of five attributes -- self-esteem, associative strength, resourcefulness, action planning and responsibility for follow-through -- that are important for achieving a participatory approach to development. SARAR is a philosophy of adult education and empowerment, which seeks to optimise people's ability to self-organize, take initiatives, and shoulder responsibilities. It is best classed as an experiential methodology, which involves setting aside hierarchical differences, team building through training, and learning from local experience rather than from external experts.

Downloaded from: http://www.unep.ch/etu/publications/EIA_2ed/EIA_E_top13_hd1.PDF ….. 16/9/2012

Social Impact Assessment tools and methods

Consultation methods

Beneficiary Assessment (BA) is a systematic investigation of the perceptions of a sample of beneficiaries and other stakeholders to ensure that their concerns are heard and incorporated into project and policy formulation.

The purposes are to (a) undertake systematic listening, which "gives voice" to poor and other hard-to-reach beneficiaries, highlighting constraints to beneficiary participation, and (b) obtain feedback on interventions.

Downloaded from: http://www.unep.ch/etu/publications/EIA_2ed/EIA_E_top13_hd1.PDF ….. 16/9/2012

Social Impact Assessment tools and methods

Observation and interview tools

1. Participant Observation is a field technique used by anthropologists and sociologists to collect qualitative data and to develop in-depth understanding of people's motivations and attitudes. It is based on looking, listening, asking questions and keeping detailed field notes. Observation and analysis are supplemented by desk reviews of secondary sources, and hypotheses about local reality are checked with key local informants.

2. Semi-structured Interviews are a low-cost, rapid method for gathering information from individuals or small groups. Interviews are partially structured by a written guide to ensure that they are focused on the issue at hand, but stay conversational enough to allow participants to introduce and discuss aspects that they consider to be relevant.

3. Focus Group Meetings are a rapid way to collect comparative data from a variety of stakeholders. They are brief meetings -- usually one to two hours -- with many potential uses, e.g. to address a particular concern; to build community consensus about implementation plans; to cross-check information with a large number of people; or to obtain reactions to hypothetical or intended actions.

4. Village Meetings allow local people to describe problems and outline their priorities and aspirations. They can be used to initiate collaborative planning, and to periodically share and verify information gathered from small groups or individuals by other means.

Downloaded from: http://www.unep.ch/etu/publications/EIA_2ed/EIA_E_top13_hd1.PDF ….. 16/9/2012

Social Impact Assessment tools and methods

Participatory methods

1. Role Playing helps people to be creative, open their perspectives, understand the choices that another person might face, and make choices free from their usual responsibilities. This exercise can stimulate discussion, improve communication, and promote collaboration at both community and agency levels.

2. Wealth Ranking (also known as well-being ranking or vulnerability analysis) is a visual technique to engage local people in the rapid collection and analysis of data on social stratification in a community (regardless of language and literacy barriers). It focuses on the factors which constitute wealth, such as ownership of or right to use productive assets, relationships to locally powerful people, labour and indebtedness, and so on.

3. Access to Resources is a tool to collect information and raise awareness of how access to resources varies according to gender, age, marital status, parentage, and so on. This information can make all the difference to the success or failure of a proposal; for example, if health clinics require users to pay cash fees, and women are primarily responsible for accompanying sick or pregnant family members to the clinic, then women must have access to cash.

Downloaded from: http://www.unep.ch/etu/publications/EIA_2ed/EIA_E_top13_hd1.PDF ….. 16/9/2012

Social Impact Assessment tools and methods

Participatory methods

1. Analysis of Tasks clarifies the distribution of domestic and community activities by gender and the degree of role flexibility that is associated with each task. This is central to understanding the human resources that are necessary for running a community.

2. Mapping is an inexpensive tool for gathering both descriptive and diagnostic information. Mapping exercises are useful for collecting baseline data on a number of indicators as part of a beneficiary assessment or rapid appraisals, and can lay the foundation for community ownership of development planning by including different groups.

Downloaded from: http://www.unep.ch/etu/publications/EIA_2ed/EIA_E_top13_hd1.PDF ….. 16/9/2012

Social Impact Assessment tools and methods

Participatory methods

1. Needs Assessment draws out information about people's needs and requirements in their daily lives. It raises participants' awareness of development issues and provides a framework for prioritising actions and interventions. All sectors can benefit from participating in a needs assessment, as can trainers, project staff and field workers.

2. Pocket Charts are investigative tools which use pictures as stimuli to encourage people to assess and analyse a given situation. Made of cloth, paper or cardboard, pockets are arranged into rows and columns, which are captioned by drawings. A "voting" process is used to engage participants in the technical aspects of development issues, such as water and sanitation projects.

3. Tree Diagrams are multi-purpose, visual tools for narrowing and prioritising problems, objectives or decisions. Information is organized into a tree-like diagram. The main issue is represented by the trunk, and the relevant factors, influences and outcomes are shown as roots and branches of the tree.

Downloaded from: http://www.unep.ch/etu/publications/EIA_2ed/EIA_E_top13_hd1.PDF ….. 16/9/2012

Social Impact Assessment tools and methods

Workshop-based methods

1. Objectives-Oriented Project Planning is a method that encourages participatory planning and analysis throughout the project life cycle. A series of stakeholder workshops are held to set priorities, and integrate them into planning, implementation and monitoring. Building commitment and capacity is an integral part of this process.

2. TeamUP was developed to expand the benefits of objectives-oriented project planning and to make it more accessible for institution-wide use. PC/TeamUP is a software package which automates the basic step-by-step methodology and guides stakeholders through research, project design, planning, implementation, and evaluation.

Downloaded from: http://www.unep.ch/etu/publications/EIA_2ed/EIA_E_top13_hd1.PDF ….. 16/9/2012

WHAT IS IMPACT ASSESSMENT?

In its broadest sense, impact assessment is the process of identifying the anticipated or actual impacts of a development intervention, on those social, economic and environmental factors which the intervention is designed to affect or may inadvertently affect. It may take place before approval of an intervention (ex ante), after completion (ex post), or at any stage in between.

Ex ante assessment forecasts potential impacts as part of the planning, design and approval of an intervention.

Ex post assessment identifies actual impacts during and after implementation, to enable corrective action to be taken if necessary, and to provide information for improving the design of future interventions.

Downloaded from: www.sed.manchester.ac.uk/.../CoreText-1-Wh... ….. 16/9/2012

External impact assessment

External impact assessment often involves independent investigators. Such assessments produce reports for specific purposes, such as poverty impact assessment, regulatory impact assessment, social impact assessment or health impact assessment.

Certain types of ex ante assessment may be part of the approval process for certain types of intervention, including environmental impact assessment and economic impact assessment (cost-benefit analysis). These may contain their own ex post monitoring activities.

Separate ex post assessments may be undertaken or commissioned for any particular intervention or set of interventions, to provide fuller information than may be available from routine monitoring and evaluation.

Downloaded from: www.sed.manchester.ac.uk/.../CoreText-1-Wh ….. 16/9/2012

Impact Assessment methods

Quantitative statistical methods involve baseline studies, the precise identification of baseline conditions, definition of objectives, target setting, rigorous performance evaluation and outcome measurement.

Such methods can be costly, limited in the types of impacts which can be accurately measured, and may pose difficulties for inference of cause and effect.

Some degree of quantification may be necessary in all impact assessments, in order to evaluate the success of the intervention and the magnitude of any adverse effects.

Downloaded from: www.sed.manchester.ac.uk/.../CoreText-1-Wh ….. 16/9/2012

Impact Assessment methods

QUALITATIVE METHODS are suitable for investigating more complex and/or sensitive types of social impacts, e.g. intra-household processes, policy issues, and the reasons behind statistical relationships and their policy implications.

These methods generally require high levels of skill, and may be relatively costly.

Some degree of qualitative interpretation may be necessary in all impact assessments, in order to evaluate the causes of impacts which have been observed.

Downloaded from: www.sed.manchester.ac.uk/.../CoreText-1-Wh ….. 16/9/2012

Impact Assessment methods

PARTICIPATORY APPROACHES are suitable for the initial definition or refinement of the actual or potential impacts of concern to stakeholders, the questions to be asked, and the appropriate frameworks and indicators to be used.

Such approaches can contribute to all types of assessment, and are particularly suited to exploratory low budget assessments and initial investigation of possible reasons for observed statistical relationships. They offer a means of involving stakeholders in the research, learning and decision-making processes. These methodologies also require a certain level of skill, depending on the issues to be addressed and ways in which they are integrated with other methods.

Some degree of stakeholder participation is likely to be necessary in all impact assessments, in order to achieve a good understanding of stakeholder perceptions of impacts.

Downloaded from: www.sed.manchester.ac.uk/.../CoreText-1-Wh ….. 16/9/2012

Socio-economic impact assessment studies

Socio-economic impact assessment studies typically investigate the impacts of a system over a future time horizon.

These prospective studies make use of an ex-ante impact assessment, often based on literature review, simulation work and expert estimation.

They are often comprehensive in scope, but they involve data from real-life conditions only to a limited extent, if at all.

A socio-economic impact assessment investigates the impacts of a technology on society. Ideally it provides the decision maker with relevant information in a concise format. The relevant comparison is between the benefits and costs of the system and those of a base case.

Downloaded from: http://wiki.fot-net.eu/index.php?title=FESTA_handbook_SocioEconomic_Impact ….. 16/9/2012
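The benefit-cost comparison against a base case can be illustrated with a short sketch: discount each year's incremental benefits and costs to present value, then report the net present value and the benefit-cost ratio. All figures and the discount rate below are hypothetical assumptions, not values from the source:

# Illustrative benefit-cost comparison against a base case.
discount_rate = 0.05
# Incremental (system minus base case) flows per year, year 0 first:
benefits = [0, 120, 150, 150, 150]   # hypothetical money units
costs    = [400, 20, 20, 20, 20]

def present_value(flows, r):
    """Discount a list of yearly flows (year 0 first) to present value."""
    return sum(f / (1 + r) ** t for t, f in enumerate(flows))

pv_benefits = present_value(benefits, discount_rate)
pv_costs = present_value(costs, discount_rate)

print(f"Net present value:  {pv_benefits - pv_costs:.1f}")
print(f"Benefit-cost ratio: {pv_benefits / pv_costs:.2f}")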

Figure: Scope of the impacts within socio-economic impact assessment.

Downloaded from: http://wiki.fot-net.eu/index.php?title=FESTA_handbook_SocioEconomic_Impact ….. 16/9/2012

Applied Research Design: A Practical Approach

Leonard Bickman and Debra J. Rog.

The Conduct of Applied Research.

Planning comprises Stage I (Definition) and Stage II (Design/plan); Execution comprises Stage III (Implementation) and Stage IV (Reporting/follow-up).

Stage I of the research process starts with the researcher’s development of an understanding of the relevant problem or societal issue. This process involves working with stakeholders to refine and revise study questions to make sure that the questions can be addressed given the research conditions (e.g., time frame, resources, and context) and can provide useful information.

After developing potentially researchable questions, the investigator then moves to Stage II —developing the research design and plan. This phase involves several decisions and assessments, including selecting a design and proposed data collection strategies.

Downloaded from: https://docs.google.com/viewer?a=v&q=cache:7yA4PEJPFxkJ:www.sagepub.com/upm-data/ ….. 26/1/2013

Applied Research Design: A Practical Approach

Leonard Bickman and Debra J. Rog.

Applied Research Planning.

Diagram: Applied research planning. Stage I (Research Definition): understand the problem; identify questions; refine/revise questions. Stage II (Research Design/Plan): choose design and data collection approaches; determine trade-offs; inventory resources; assess feasibility; then proceed to execution.

Downloaded from: https://docs.google.com/viewer?a=v&q=cache:7yA4PEJPFxkJ:www.sagepub.com/upm-data/ ….. 26/1/2013

Applied Research Design: A Practical Approach

Leonard Bickman and Debra J. Rog.

The Research Problem:

Strategies that can be used in gathering the needed information include the following:

1. Review relevant literature (research articles and reports, transcripts of legislative hearings, program descriptions, administrative reports, agency statistics, media articles, and policy/position papers by all major interested parties);

2. Gather current information from experts on the issue (all sides and perspectives) and major interested parties;

3. Conduct information-gathering visits and observations to obtain a real-world sense of the context and to talk with persons actively involved in the issue;

4. Initiate discussions with the research clients or sponsors (legislative members; foundation, business, organization, or agency personnel; and so on) to obtain the clearest possible picture of their concerns; and

5. If it is a program evaluation, informally visit the program and talk with the staff, clients, and others who may be able to provide information on the program and/or overall research context.

Downloaded from: https://docs.google.com/viewer?a=v&q=cache:7yA4PEJPFxkJ:www.sagepub.com/upm-data/ ….. 26/1/2013

Applied Research Design: A Practical Approach

Leonard Bickman and Debra J. Rog.

Developing the Conceptual Framework

Every study, whether explicitly or implicitly, is based on a conceptual framework or model that specifies the variables of interest and the expected relationships between them. In some studies, social and behavioral science theory may serve as the basis for the conceptual framework.

Other studies, such as program and policy evaluations, may be based not on formal academic theory but on statements of expectations of how policies or programs are purported to work.

The framework may be relatively straightforward or it may be complex, as in the case of evaluations of comprehensive community reforms, for example, that are concerned with multiple effects and have a variety of competing explanations for the effects (e.g., Rog & Knickman, 2004).

Rog, D. J., & Knickman, J. (2004). Strategies for comprehensive initiatives. In M. Braverman, N. Constantine, & J. Slater (Eds.), Foundations and evaluations: Contexts and practices for effective philanthropy (pp. 223–235). San Francisco: Jossey-Bass.

Downloaded from: https://docs.google.com/viewer?a=v&q=cache:7yA4PEJPFxkJ:www.sagepub.com/upm-data/ ….. 26/1/2013

Applied Research Design: A Practical Approach

Leonard Bickman and Debra J. Rog.

Identifying the Research Questions

As noted in the introduction to this Handbook, one of the major differences between basic research and applied research is that the basic researcher is more autonomous than the applied researcher.

Basic research, when externally funded, is typically conducted through a relatively unrestricted grant mechanism; applied research is more frequently funded through contracts and cooperative agreements.

Even when applied research is funded through grant mechanisms, such as with foundations, there is usually a "client" or sponsor who specifies (or at least guides) the research agenda and requests the research results. Most often, studies have multiple stakeholders: sponsors, interested beneficiaries, and potential users.

The questions to be addressed by an applied study tend to be posed by individuals other than the researcher, often by nontechnical persons in non-technical language.

Downloaded from: https://docs.google.com/viewer?a=v&q=cache:7yA4PEJPFxkJ:www.sagepub.com/upm-data/ ….. 26/1/2013

Applied Research Design: A Practical Approach

Leonard Bickman and Debra J. Rog.


Clarifying the Research Questions

In discussing the research agenda with clients, the researcher will usually identify several types of questions.

For example, in a program evaluation, researchers are frequently asked to produce comprehensive information on both the implementation (“what actually is taking or took place”) and the effects (“what caused what”) of an intervention.

When research agendas are as broad as those in this example, they pose significant challenges for planning, particularly in allocating data collection resources among the various study objectives.

It is helpful to continue working with the sponsors to refine the questions, both to plan the scope of the research more realistically and to ensure that the questions are specific enough to be answered in a meaningful way that the clients agree on.

Downloaded from: https://docs.google.com/viewer?a=v&q=cache:7yA4PEJPFxkJ:www.sagepub.com/upm-data/ ….. 26/1/2013

Applied Research Design: A Practical Approach

Leonard Bickman and Debra J. Rog.

Negotiating the Scope of a Study

Communication between the researcher and stakeholders (the sponsor and all other interested parties) is important in all stages of the research process. To foster maximum and accurate utilization of results, it is recommended that the researcher regularly interact with the research clients, from the initial discussions of the "problem" to recommendations and follow-up.

In the planning phase, we suggest several specific communication strategies. As soon as the study is sponsored, the researcher should connect with the client to develop a common understanding of the research questions, the client’s time frame for study results, and anticipated uses for the information. The parties can also discuss preliminary ideas regarding a conceptual model for the study. Even in this initial stage, it is important for the researcher to begin the discussion of the contents and appearance of the final report. This is an opportunity for the researcher to explore whether the client expects only to be provided information on study results or whether the client anticipates that the researcher will offer recommendations for action. It is also an opportunity for the researcher to determine whether he or she will be expected to provide interim findings to the client as the study progresses.

Downloaded from: https://docs.google.com/viewer?a=v&q=cache:7yA4PEJPFxkJ:www.sagepub.com/upm-data/ ….. 26/1/2013

Applied Research Design: A Practical Approach

Leonard Bickman and Debra J. Rog.

Stage II: The Research Design

The design serves as the architectural blueprint of a research project, linking design, data collection, and analysis activities to the research questions and ensuring that the complete research agenda will be addressed.

A research study’s credibility, usefulness, and feasibility rest with the design that is implemented.

Downloaded from: https://docs.google.com/viewer?a=v&q=cache:7yA4PEJPFxkJ:www.sagepub.com/upm-data/ ….. 26/1/2013

Applied Research Design: A Practical Approach

Leonard Bickman and Debra J. Rog.

Credibility refers to the validity of a study and whether the design is sufficiently rigorous to provide support for definitive conclusions and desired recommendations.

Credibility is also, in part, determined by who is making the judgment.

To some sponsors, a credible project need only use a pre-post design. Others may require a randomized experimental design to consider the findings credible.

Credibility is also determined by the research question.

A representative sample will make a descriptive study more credible than a sample of convenience or one with known biases. In contrast, representativeness is not as important in a study designed to determine the causal link between a program and outcomes.

The planner needs to be sure that the design matches the types of information needed.

For example, under most circumstances, the simple pre-post design should not be used if the purpose of the study is to draw causal conclusions.

Downloaded from: https://docs.google.com/viewer?a=v&q=cache:7yA4PEJPFxkJ:www.sagepub.com/upm-data/ ….. 26/1/2013

Applied Research Design: A Practical Approach

Leonard Bickman and Debra J. Rog.

Usefulness refers to whether the design is appropriately targeted to answer the specific questions of interest.

A sound study is of little use if it provides definitive answers to the wrong questions.

Feasibility refers to whether the research design can be executed, given the requisite time and other resource constraints.

All three factors (credibility, usefulness, and feasibility) must be considered to conduct high-quality applied research.

Downloaded from: https://docs.google.com/viewer?a=v&q=cache:7yA4PEJPFxkJ:www.sagepub.com/upm-data/ ….. 26/1/2013

Applied Research Design: A Practical Approach

Leonard Bickman and Debra J. Rog.

Design Dimensions

Maximizing Validity

Four types of validity are typically considered in the design of applied research (Bickman, 1989; Shadish, Cook, & Campbell, 2002).

1. Internal validity: the extent to which causal conclusions can be drawn or the degree of certainty that “A” caused “B,” where A is the independent variable (or program) and B is the dependent variable (or outcome).

2. External validity: the extent to which it is possible to generalize from the data and context of the research study to other populations, times, and settings (especially those specified in the statement of the original problem/issue).

3. Construct validity: the extent to which the constructs in the conceptual framework are successfully operationalized (e.g., measured or implemented) in the research study. For example, does the program as actually implemented accurately represent the program concept and do the outcome measures accurately represent the outcome? Programs change over time, especially if fidelity to the program model or theory is not monitored.

4. Statistical conclusion validity: the extent to which the study has used appropriate sample size, measures, and statistical methods to enable it to detect the effects if they are present. This is also related to the statistical power.

Downloaded from: https://docs.google.com/viewer?a=v&q=cache:7yA4PEJPFxkJ:www.sagepub.com/upm-data/ ….. 26/1/2013
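To illustrate statistical conclusion validity (point 4 above), a rough normal-approximation power calculation for a two-group comparison shows how sample size and effect size determine the chance of detecting an effect. This is a simplified sketch under standard textbook assumptions, not a substitute for a proper power analysis:

# Approximate power of a two-sample comparison via the normal
# approximation, for standardized effect size d and n per group.
from statistics import NormalDist

def approx_power(d, n, alpha=0.05):
    """Approximate two-sided power for equal groups of size n."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    z_effect = d * (n / 2) ** 0.5   # noncentrality for equal group sizes
    return 1 - NormalDist().cdf(z_crit - z_effect)

# e.g. a "medium" effect (d = 0.5) with 64 per group gives roughly 80% power
print(f"power ~ {approx_power(0.5, 64):.2f}")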

Applied Research Design: A Practical Approach

Leonard Bickman and Debra J. Rog.

Choosing a Design

There are three main categories of applied research designs: descriptive, experimental, and quasi-experimental.

In our experience, developing an applied research design rarely allows for implementing a design straight from a textbook; rather, the process more typically involves the development of a hybrid, reflecting combinations of designs and other features that can respond to multiple study questions, resource limitations, dynamics in the research context, and other constraints of the research situation (e.g., time deadlines).

Downloaded from: https://docs.google.com/viewer?a=v&q=cache:7yA4PEJPFxkJ:www.sagepub.com/upm-data/ ….. 26/1/2013

Applied Research Design: A Practical Approach

Leonard Bickman and Debra J. Rog.

Descriptive Research Designs

Description and Purpose.

The overall purpose of descriptive research is to provide a "picture" of a phenomenon as it naturally occurs, as opposed to studying the effects of the phenomenon or intervention. Descriptive research can be designed to answer questions of a univariate, normative, or correlative nature, that is, describing only one variable, comparing the variable to a particular standard, or summarizing the relationship between two or more variables.

Key Features.

Because the category of descriptive research is broad and encompasses several different types of designs, one of the easiest ways to distinguish this class of research from others is to identify what it is not: It is not designed to provide information on cause-effect relationships.

Variations.

There are only a few features of descriptive research that vary. These are the representativeness of the study data sources (e.g., the subjects/entities), that is, the manner in which the sources are selected (e.g., universe, random sample, stratified sample, nonprobability sample); the time frame of measurement, that is, whether the study is a one-shot, cross-sectional study, or a longitudinal study; whether the study involves some basis for comparison (e.g., with a standard, another group or population, data from a previous time period); and whether the design is focused on a simple descriptive question, on a normative question, or on a correlative question.

Downloaded from: https://docs.google.com/viewer?a=v&q=cache:7yA4PEJPFxkJ:www.sagepub.com/upm-data/ ….. 26/1/2013
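The selection options just listed (universe, random sample, stratified sample, nonprobability sample) can be made concrete with a small stratified-sampling sketch; the strata, the frame, and the 10% proportional allocation are hypothetical:

# Proportional stratified sampling sketch: sample each stratum separately
# so the sample mirrors the population's composition.
import random

frame = [("urban", f"u{i}") for i in range(600)] + \
        [("rural", f"r{i}") for i in range(400)]

random.seed(1)
sample = []
for stratum in ("urban", "rural"):
    members = [unit for s, unit in frame if s == stratum]
    n = round(len(members) * 0.10)       # proportional 10% allocation
    sample.extend(random.sample(members, n))

print(len(sample), "units sampled: 60 urban + 40 rural")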

Applied Research Design: A Practical Approach

Leonard Bickman and Debra J. Rog.

Descriptive Research Designs

When to Use.

A descriptive approach is appropriate when the researcher is attempting to answer "what is," "what was," or "how much" questions.

Strengths.

Exploratory descriptive studies can be low cost, relatively easy to implement, and able to yield results in a fairly short period of time. Some efforts, however, such as those involving major surveys, may sometimes require extensive resources and intensive measurement efforts. The costs depend on factors such as the size of the sample, the nature of the data sources, and the complexity of the data collection methods employed.

Limitations.

Descriptive research is not intended to answer questions of a causal nature. Major problems can arise when the results from descriptive studies are inappropriately used to make causal inferences —a temptation for consumers of correlational data.

Downloaded from: https://docs.google.com/viewer?a=v&q=cache:7yA4PEJPFxkJ:www.sagepub.com/upm-data/ ….. 26/1/2013

Applied Research Design: A Practical Approach

Leonard Bickman and Debra J. Rog.

Experimental Research Designs

Description and Purpose.

The primary purpose in conducting an experimental study is to test the existence of a causal relationship between two or more variables. In an experimental study, one variable, the independent variable, is systematically varied or manipulated so that its effects on another variable, the dependent variable, can be measured. In applied research, such as in program evaluation, the "independent variable" is typically a program or intervention (e.g., a drug education program) and the "dependent variables" are the desired outcomes or effects of the program on its participants (e.g., drug use, attitudes toward drug use).

Key Features.

The distinguishing characteristic of an experimental study is the random assignment of individuals or entities to the levels or conditions of the study. Random assignment is used to control most biases at the time of assignment and to help ensure that only one variable, the independent (experimental) variable, differs between conditions. With well-implemented random assignment, all individuals have an equal likelihood of being assigned either to the treatment group or to the control group. If the total number of individuals or entities assigned to the treatment and control groups is sufficiently large, then any differences between the groups should be small and due to chance.

Downloaded from: https://docs.google.com/viewer?a=v&q=cache:7yA4PEJPFxkJ:www.sagepub.com/upm-data/ ….. 26/1/2013
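Random assignment as described under Key Features can be sketched in a few lines: shuffle the enrolled participants, then split them into treatment and control groups so that each person has an equal chance of either condition. The participant IDs here are hypothetical:

# Simple random assignment sketch: shuffle, then split in half.
import random

participants = [f"p{i:03d}" for i in range(1, 101)]  # 100 enrolled participants

random.seed(7)                  # for a reproducible example
random.shuffle(participants)    # every ordering equally likely
half = len(participants) // 2
treatment, control = participants[:half], participants[half:]

print(len(treatment), "assigned to treatment;", len(control), "to control")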

Applied Research Design: A Practical Approach

Leonard Bickman and Debra J. Rog.

Experimental Research Designs

Variations.

The most basic experimental study is called a post-only design, in which individuals are randomly assigned either to a treatment group or to a control group, and the measurement of the effects of the treatment is conducted at a given period following the administration of the treatment. There are several variations to this simple experimental design that can respond to specific information needs as well as provide control over possible confounds or influences that may exist. Among the features that can be varied are the number and scheduling of posttest measurement or observation periods, whether a preobservation is conducted, and the number of treatment and control groups used. The post-only design is rarely used because faulty random assignment may result in the control and treatment groups not being equivalent at the start of the study. Few researchers are that (over) confident in the implementation of a field randomized design to take the chance that the results could be interpreted as being caused by faulty implementation of the design.

When to Use.

An experimental study is the most appropriate approach to study cause-effect relationships. Certain situations are especially conducive to randomized experiments: when random assignment is expected (i.e., certain scarce resources may already be provided on a "lottery" or random basis), when demand outstrips supply for an intervention, and when there are multiple entry groups over a period of time.

Downloaded from: https://docs.google.com/viewer?a=v&q=cache:7yA4PEJPFxkJ:www.sagepub.com/upm-data/ ….. 26/1/2013

Applied Research Design: A Practical Approach

Leonard Bickman and Debra J. Rog.

Experimental Research Designs

Strengths.

The overwhelming strength of a randomized experiment is its control over threats to internal validity —that is, its ability to rule out potential alternative explanations for apparent treatment or program effects. This strength applies to both the variables that are measured and, more important, the variables that are not measured and, thus, are unknown to the researcher but continue to be controlled by the design.

Limitations.

Randomized experiments can be difficult to implement with integrity, particularly in settings where the individuals responsible for random assignment procedures lack research training or understanding of the importance of maintaining compliance with the research protocol.

In addition, random assignment does not control for all biases such as participant preference for one condition over the other or local history where some external event occurs for one group but not for the other.

Downloaded from: https://docs.google.com/viewer?a=v&q=cache:7yA4PEJPFxkJ:www.sagepub.com/upm-data/ ….. 26/1/2013
