L2C – Learning to Collaborate
Continuous Evaluation of the Outputs and Systems Developed (WP 6)
Chiara Frigerio, UCSC
Review Meeting – INSEAD, Fontainebleau – 30 March 2007
Structure of the Presentation
1. The Evaluation Process
2. First Round Evaluation
   a. The Knowledge Community Evaluation
   b. The Prototypes’ Evaluation
3. Evaluation Process: Next Steps
Evaluation Process Objectives
1. Guide continuous improvement of the project’s outputs, including testing and validating their effectiveness for the users.
2. Serve as an indicator of performance, whereby the identification of challenges and needs can be used to improve potential future opportunities for collaboration.
3. Provide the criteria for quality assurance of the outputs, also verifying the accomplishment of goals.
4. Assess the partners’ effort invested in the processes of innovation and new knowledge creation, gauging the value and effectiveness of their efforts.
Outputs to Be Evaluated

WP 1 – Knowledge Harvesting and Integration:
A comprehensive and integrated collection of collaboration-related dynamics, models and insights derived from a literature review process.

WP 2 – ACDT Knowledge Management Tools Development:
• A structured knowledge base (knowledge management tool)
• A virtual learning community

WP 3 – ACDT Framework and Simulation Games Prototype Development:
• A framework adapted to the design, development and deployment of simulations
• A first version of a simulation game

WP 4 – Pilots:
The final release version of each of the IT-based tools developed in Workpackages 2 and 3, improved based on the feedback obtained from the pilots and the prototyping cycles applied.
The Evaluators
• L2C Project partners, who care about the innovation being introduced and about its effectiveness because of the efforts invested.
• Target users of the project outputs, who represent the intended beneficiaries and users of the research findings.
• Potential future buyers of the project’s outputs, who have an interest in the project’s success in delivering the intended results (for example, organizations or people whose opinion on the output is essential for its adoption in an organizational context).
• Strategic partners, whose feedback in terms of knowledge is essential for project improvements (for example, practitioners or scientists with expert knowledge on a specific topic).
• The European Community, which will provide independent evaluators and reviewers to assess the project’s outputs.
Evaluation Methodology
The main reference for this project is the Goal Question Metric (GQM)
approach by Basili and Weiss (1984). This framework is based on the
assumption that to measure a project’s effectiveness in a purposeful way it is
essential to first specify the goals to be accomplished.
Evaluation Framework/1

GOALS (conceptual level): Each goal identifies the issue or viewpoint to be evaluated. It has four main coordinates:
• Object: what is being analysed
• Issue: the object’s property to be analysed
• Viewpoint: who uses the collected feedback
• Purpose: why the object is analysed

CRITERIA, OR QUESTIONS ASKED (operational level): Each criterion represents a refinement of the goal it refers to, adopting an operational perspective. This allows a better interpretation and description of the conceptual issue.

METRICS (measurement level): A set of objective or subjective data associated with every question, in order to answer it in a quantitative and measurable way.

To illustrate: suppose one of the main goals of the Knowledge Community is to be usable. In this case, suitable criteria and metrics would be:
• How intuitive is it for a user to find a contribution? 1 (= very intuitive) to 5 (= not intuitive at all)
• How understandable are the menus? 1 (= very understandable) to 5 (= not understandable at all)
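To make the three GQM levels concrete in code, the following minimal Python sketch (all class names, question texts and response data are hypothetical, for illustration only) models the usability goal above and computes a mean Likert score per criterion:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Criterion:
    """Operational level: a question refining the goal."""
    question: str
    scale: tuple[int, int] = (1, 5)          # Likert scale: 1 = best, 5 = worst
    responses: list[int] = field(default_factory=list)

    def metric(self) -> float:
        """Measurement level: mean Likert score across respondents."""
        return mean(self.responses)

@dataclass
class Goal:
    """Conceptual level: object, issue, viewpoint, purpose."""
    obj: str
    issue: str
    viewpoint: str
    purpose: str
    criteria: list[Criterion] = field(default_factory=list)

# Hypothetical data illustrating the usability goal from the example above.
usability = Goal(
    obj="Knowledge Community",
    issue="usability",
    viewpoint="Consortium partners",
    purpose="guide improvements to the first prototype",
    criteria=[
        Criterion("How intuitive is it for a user to find a contribution?",
                  responses=[1, 2, 2, 1, 3]),
        Criterion("How understandable are the menus?",
                  responses=[2, 2, 1, 2, 2]),
    ],
)

for c in usability.criteria:
    print(f"{c.question} -> mean score {c.metric():.2f}")
```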
Evaluation Framework/2

Guidelines for identifying evaluation criteria, per workpackage:

WP 1 – Knowledge Harvesting and Integration:
• The body of knowledge collected supports effective development of the successive workpackages, i.e. it is operationalizable.
• The output of WP 1 provides guidelines for added-value creation of the IT-based tools facilitating collaborative practices.

WP 2 – ACDT Knowledge Management Tools Development:
• The structure, interface and features of the Knowledge Management Tools are designed to:
  – Fit the know-how collected in WP 1
  – Facilitate access to the know-how stored
  – Facilitate restructuring of the knowledge base
  – Facilitate new contributions
  – Facilitate connections among members
  – Facilitate active participation among members

WP 3 – ACDT Framework and Simulation Games Prototype Development:
• The ACDT framework supports the creation of effective collaboration-related simulations (in terms of providing an array of scenarios, characters, dynamics, interventions, etc.).
• The game meets its objectives of addressing advanced collaboration dynamics.
• The game is innovative and improves on existing learning solutions.
• The simulation pilots consider realistic scenarios of collaboration.
• The pilots address important collaboration-related dynamics, breakdowns and interventions.

WP 4 – Pilots:
• In the second version of the ACDT Knowledge Base and Virtual Learning Community, re-design and extensions have been made to expand access to the tools for a more heterogeneous population (those not part of the L2C Consortium).
• Feedback from the evaluation loops has been implemented to ensure the tools developed are effective and sustainable beyond the duration of the project.
Evaluation Perspectives
The evaluation of the IT-based tools will be conducted along two dimensions:
• A technical/technological perspective, which investigates IT-related dimensions such as usability, functionality, security and effectiveness of the tools.
• A pedagogical and social perspective, which focuses on factors such as value to, and level of acceptance by, the users. Users represent relevant actors who will contribute to continuous improvement of the outputs and will also provide final feedback on the quality and learning value of the tools.
Overall Evaluation Process
(Figure: diagram of the overall evaluation process.)
Overall Evaluation Approach
FORMATIVE
EVALUATION
SUMMATIVE
EVALUATION
Knowledge Harvesting
State of the Art Report
STRATEGIC
EVALUATION
• Questionnaire
ACDT Framework
• Questionnaire
ACDT Dynamic
Knowledge Base
• Questionnaire
• Focus group
• Questionnaire
• Log analysis
ACDT Virtual Learning
Community
• Questionnaire
• Focus group
• Questionnaire
• Log analysis
• Questionnaire
ACDT Simulation Game • Think aloud method
• Focus group
Review Meeting – INSEAD, Fontainebleau – 30 March 2007
• Questionnaire
• Interviews
Q
u
e
s
t
i
o
n
n
a
i
r
e
11
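As one illustration of how the summative log analysis could be carried out (the log format and field names below are assumptions for the sketch, not the project’s actual tooling), a short script can count actions per community member:

```python
import csv
from collections import Counter

def summarize_log(path: str) -> None:
    """Count events per member from a CSV access log.

    Assumes a hypothetical log format with columns:
    timestamp, member_id, action (e.g. 'login', 'contribute', 'search').
    """
    actions_by_member: dict[str, Counter] = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            member = row["member_id"]
            actions_by_member.setdefault(member, Counter())[row["action"]] += 1

    for member, actions in sorted(actions_by_member.items()):
        print(member, dict(actions))

# Example usage: summarize_log("community_access_log.csv")
```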
Deadlines

Expected summative evaluation, by project month and date:
• Workpackage 1 (Knowledge Harvesting material as represented in the KB and ACDT Framework) – Month 14, February 2007
• Workpackage 2 (Knowledge Management Tool and Virtual Community) – Month 12, February 2007
• Workpackage 3 (Game Prototype and validity of ACDT Framework) – Month 14, April 2007
• Workpackage 4 (Pilots) – Month 23, January 2008
• Strategic evaluation – Month 23, January 2008
Expected Deliverables

Deliverable and date of expected delivery:
• D 6.1 Evaluation Plan – Done
• D 6.2 Technical and Pedagogical Evaluation Criteria and Metrics – Done
• D 6.3 Evaluation Report of the First Prototype – Done
• D 6.4 Evaluation Report of the Final Prototype – Month 18
• D 6.5 Evaluation Report of Simulation Games Prototypes – Month 22
• D 6.6 Project’s Assessment / Comparative Analysis (Final Report) – Month 24
Deliverable 6.3
D6.3 provides the formative assessment of the first version of the following outputs:
• The ACDT Knowledge Management Tools, from a technical/technological perspective (Workpackage 2). Period of evaluation: February 2007.
• The specification of the first version of the ACDT Simulation Games Prototypes, from a pedagogical point of view, as presented in D3.1 “ACDT Framework, Simulation Scenarios and Design” (Workpackage 3). Period of evaluation: February 2007.
The Knowledge Community Evaluation/1
WHAT: first prototype of the system (programming errors, technical problems, usability and navigation issues)
WHEN: August 2006
WHO: a pool of experts comprising FVA and INSEAD internal usability experts, plus freelance usability experts hired by FVA
HOW: ticket reports, usability protocol submission and the think-aloud method
The Knowledge Community Evaluation/2
WHAT: first prototype of the system after technical improvements
WHEN: September 2006, 2nd Consortium meeting in Milan
WHO: L2C Consortium partners
HOW: informal suggestions and brainstorming
The Knowledge Community Evaluation/3
WHAT: first formal assessment of the technical/technological features and functionalities
WHEN: February 2007, after a one-month period of usage
WHO: the Consortium partners
HOW: a questionnaire of 44 questions in total: 5 open questions, 26 Likert-scale questions and 13 yes/no questions
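A brief sketch of how such questionnaire results might be summarized (the question identifiers and responses below are hypothetical): per-question means for the Likert items and the share of positive answers for the yes/no items.

```python
from statistics import mean

# Hypothetical responses: question id -> one answer per partner.
likert = {                      # 1 = most positive, 5 = most negative
    "Q01_intuitive_navigation": [2, 1, 2, 3, 2],
    "Q02_menu_clarity":         [1, 2, 2, 2, 1],
}
yes_no = {                      # True = yes
    "Q27_would_recommend": [True, True, False, True, True],
}

for qid, answers in likert.items():
    print(f"{qid}: mean = {mean(answers):.2f} on a 1-5 scale")

for qid, answers in yes_no.items():
    share = sum(answers) / len(answers)
    print(f"{qid}: {share:.0%} answered yes")
```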
System Features Evaluated

SYSTEM FEATURE               VIRTUAL LEARNING   KNOWLEDGE   KNOWLEDGE
                             COMMUNITY          COMMUNITY   BASE
Usability                    X                  X
Simplicity and Clarity       X                  X           X
Orientation and Navigation   X                  X           X
Learnability                 X
Effective Communication      X
Text Layout                  X
User Control and Freedom     X
Interface                    X                  X           X
Completeness
The Knowledge Community: Qualitative Evaluation
A number of areas were identified for improvement, especially making the community functions easier to use for members who do not belong to the L2C network. The main features needing improvement are:
• The left menu, which currently requires long scrolling through various sources of information and makes the home or main page appear overloaded with information. A more selective way to filter and present relevant information is needed.
• Online instructions for new users who will not be familiar with the purpose and logic of the L2C project; first-impression management is important.
• The contributions’ editing functionality.
• A layout or interface that uses more contrasting and intense colors.
• Additional search and better organization functions, since the contents of the knowledge community will grow over time.
• Additional collaboration tools in the virtual community area, to provide easy-to-use and accessible communication and collaboration opportunities.
Knowledge Community Overview
The assessment of the Knowledge Community shows that, in general, the partners are satisfied with the tool and its functionalities. In their opinion this is a good starting point, since the community offers the relevant collaboration tools and is fairly intuitive and simple to use.
The suggestions and feedback provided will drive future improvements to the knowledge community from a technical point of view. They will be discussed among the partners to decide how best to address each specific issue and, in particular, to determine a list of priorities.
The Prototypes’ Evaluation
WHAT: version 1 of each of the simulation game prototypes
WHEN: February 2007
WHO: the Consortium partners, based on their expertise and interest, as determined during the 3rd Consortium meeting in Athens in January 2007
HOW: a questionnaire of 15 questions in total: 1 open-ended question and 14 Likert-scale questions. Of the 15 questions, 6 evaluated the prototypes in general, while the other 9 assessed specific dimensions of each prototype.
EduSynergy
The overall assessment showed that the EduSynergy prototype offers a learning experience that gives players an opportunity to encounter a number of collaboration challenges and dynamics at an organizational level. However, some dimensions still need to be clarified and improved:
• Transferability of the EduSynergy scenario to a wider, non-university/academic audience
• The need to fine-tune the collaboration focus of the simulation
• Addressing the complexities of intra-organizational collaboration
• Realism of the player role
World Team
The overall assessment indicates that the World Team prototype presents detailed key learning points, covering organisational, group and personal dynamics. Areas for improvement are:
• Familiarity with the acquisition strategy
• Further development of the scenario
• Inclusion of a performance indicator to track progress
• Recommendations for controlling communication opportunities within the simulation
Pit Stop
The overall assessment showed that the Pit Stop design specification is well described and detailed. It provides a good starting experience for discussing the distinctive individual and team behaviours and competences of high-performing teams, and for extending the discussion to team practices and performance within each participant’s organization or division. The suggestions for improvement concern:
• Emphasis on the time factor
• Qualification of the key learning points
• More emphasis on theories of stress management
Eagle Racing
The overall assessment showed that the Eagle Racing prototype is well described and addresses interesting challenges. Further improvements should consider the following suggestions:
• Prioritizing the long list of learning points
• Taking into account the complexity of the decision-making process and its goals
• Avoiding extreme stereotypes
Intermediary Agent
The overall assessment showed that the Intermediary Agent prototype is quite well described, but there are still some opportunities for improvement concerning:
• The identity of the intermediary agent
• Information asymmetry
• Change and collaboration dynamics
Overall Assessment of the Five Prototypes
The prototypes address a broad spectrum of models, collaborative breakdowns, etc. However, in some places further specification is needed, as suggested by the partners, concerning:
• The need for an overall model of learning objectives
• The identification of “top” theories
All the improvements suggested above will be validated with the partners during the piloting phase (Workpackage 4).
Evaluation Process: Upcoming Activities
• Summative pedagogical evaluation of the Knowledge Community (final users)
• Summative technological evaluation of the Knowledge Community (after improvements from the first round evaluation)
• Pedagogical evaluation of the final simulation prototypes (partners)
• Formative technical evaluation of the final simulation prototypes (technical experts and possibly partners)
The feasibility of the proposed evaluation activities depends on the progress of the WPs.