Evaluation report: template


EUROPEAN SOUTHERN OBSERVATORY

Organisation Européenne pour des Recherches Astronomiques dans l'Hémisphère Austral

Europäische Organisation für astronomische Forschung in der südlichen Hemisphäre

ESO - EUROPEAN SOUTHERN OBSERVATORY

PROJECT NAME

Evaluation report:

<tool name or scope>

VLT-INS-ESO-19000-3232

Issue 3

15 April 2020

20 pages

|           | Name            | Date | Signature |
|-----------|-----------------|------|-----------|
| Prepared: | Author's Name   |      |           |
| SQA visa: | Karim Haggouchi |      |           |
| Approved: | Michele Peron   |      |           |
| Released: | Peter Quinn     |      |           |

CHANGE RECORD

| Issue | Date       | Affected Paragraph(s) | Reason/Initiation/Remarks |
|-------|------------|-----------------------|---------------------------|
| 1     | 15/04/2020 | All                   | Initial release           |
| 2     | 15/04/2020 | All                   | Comments from O.Chuzel    |
| 3     | 24/12/2004 | 7.1, 8.1, 9           | Comments from SEG         |


TABLE OF CONTENTS

1. SCOPE
   1.1 Applicable Documents
   1.2 Reference Documents
   1.3 List of Abbreviations/Acronyms
2. EVALUATION METHOD
   2.1 Concepts
   2.2 Rules and recommendations
3. RATIONALE
4. SELECTED TOOL(S)
5. HOW TO DEFINE THE EVALUATION CRITERIA?
6. LIST OF EXPECTED CRITERIA
   6.1 Functional criteria
       6.1.1 Description
       6.1.2 Justification
   6.2 Non-functional criteria
       6.2.1 Description
       6.2.2 Justification
   6.3 Test cases
       6.3.1 Description
       6.3.2 Justification
   6.4 The weights
       6.4.1 Values
       6.4.2 Justification
7. THEORETICAL EVALUATION RESULTS
   7.1 Compliance scores
   7.2 Scoring justification
   7.3 Best tools from a theoretical point of view
8. PRACTICAL EVALUATION RESULTS
   8.1 Test cases execution results
   8.2 Criteria compliance scores
   8.3 Scoring justification
   8.4 Best tools from a practical point of view
9. OVERALL SCORING
10. CONCLUSION
A. Annex 1
B. Annex 2


LIST OF TABLES

LIST OF FIGURES


1. SCOPE

This document describes the criteria for selecting <give the scope or general purpose of the tool(s) to be evaluated>, the decision-aid method which has been applied, and the related evaluation results, in particular after running trials.

<Explain in a few words the purpose of the tool(s): general overview, what is it for, to which software life-cycle phase does it apply, …?>

The process described in this document is aimed at formally comparing a set of tools having the same purpose, in order to make the right choice for the project, but also to justify the final choice and keep track of every major decision. It is not specific to a particular category of tools or to the context of DFS. It can even be applied whenever it is necessary to select, for instance, a programming language or a new operating system.

To be applied formally, this process requires some resources and time, which have to be considered and dedicated by the project managers concerned. It can be performed by any engineer, developer or other project member, usually with the assistance or even the supervision of the Software Quality Assurance manager.

<The current template may also be used if only one tool is to be evaluated as such (i.e. even if there is no comparison with other tools having the same purpose). In such a case, specific instructions are given.>

1.1 Applicable Documents

| Reference | Document Number               | Issue          | Date              | Title                      |
|-----------|-------------------------------|----------------|-------------------|----------------------------|
| AD1       | VLT-INS-ESO-19000             | 3              | 24/01/2004        | Evaluation report template |
| AD2       | VLT-<TYPE>-ESO-<PSC>-<chrono> | <issue number> | <date of release> | <title of the document>    |

1.2 Reference Documents

| Reference | Document Number                  | Issue          | Date              | Title                                                                      |
|-----------|----------------------------------|----------------|-------------------|----------------------------------------------------------------------------|
| RD1       | Groupe de methodologie appliquée |                | 1992              | ADEQUA, Aide a la Décision de QUAlite (Zimmerman, NASA)                    |
| RD2       | Theory and decision, #31         |                | 1991              | The outranking approach and the foundation of ELECTRE methods, Bernard Roy |
| RD3       | VLT-<TYPE>-ESO-<PSC>-<chrono>    | <issue number> | <date of release> | <title of the document>                                                    |

1.3 List of Abbreviations/Acronyms

ALMA Atacama Large Millimeter Array

AR Action Request (Remedy tool)

DFS Data Flow System

DMD Data Management Division

ESO European Southern Observatory

FITS Flexible Image Transport System

GUI Graphical User Interface

IT Information Technology

OTS Operations Technical Support group of the ESO/Data Management Division

TBC To be confirmed

TBD To be defined

VLT Very Large Telescope

2. EVALUATION METHOD

2.1 Concepts

A specific evaluation method has been applied to evaluate the tool(s). The goal of this formal method is to assess functional and non-functional criteria:

- First, by doing a theoretical analysis based on different means, such as any available documentation, searches on the web, or questionnaires sent to suppliers.
- Second, by running practical trials on a subset of the tools.

Functional criteria correspond to functionality or features to be addressed by the tool(s). Such criteria are usually specific to the tool scope. For instance, unit test tools may not have the same functional criteria as integration test tools, or more precisely, some criteria can be common to several categories of tools while others are really specific to a certain category.

Non-functional criteria are related to operational requirements, e.g. the fact that a tool is easy to install and configure, or whether it can be a long-term solution or not.

This method encompasses 5 phases to be performed exactly in the following sequence:

1. Define criteria and test cases
   - Define both functional and non-functional criteria.
   - Give a code to every functional criterion (FCRi) and to every non-functional one (NFCRj).
   - Define test cases to be executed during the practical trials (phase 4) to cover some selected criteria.

2. Rank the criteria and assess weights
   - Give a grade of importance to every criterion (functional and non-functional): FCRi_importance and NFCRj_importance. The range of grade values is 1 to 5, the value 5 being meant for a criterion having the highest importance.
   - Assess weights to balance the whole set of functional criteria against the non-functional criteria set: functional_weight and non-functional_weight.
   - Assess weights to balance the theoretical evaluation against the practical one: theoretical_weight and practical_weight.

3. Perform the theoretical evaluation
   - Based on any available documentation related to the tool features (a user's guide, for instance), searches done on the web, questionnaires sent to suppliers, phone calls to suppliers, interviews with project managers or group leaders, discussions with experienced users or, more generally, any other means to collect information, the theoretical compliance of each criterion is scored between 0 and 3. A compliance value of 3 indicates complete compliance, while 0 means no compliance at all:
       FCRi_theo_score and NFCRj_theo_score
     This step is to be performed on every tool to evaluate.
   - Then, an individual resulting score is generated for each criterion by multiplying the importance grade by the theoretical compliance score:
       res_theo_score_FCRi = FCRi_importance × FCRi_theo_score (functional)
       res_theo_score_NFCRj = NFCRj_importance × NFCRj_theo_score (non-functional)
     This step is to be performed on every tool to evaluate.
   - Individual resulting scores are summed, first over the functional criteria, then over the non-functional ones:
       sum_theo_score_FCR = Σ(res_theo_score_FCRi) (functional)
       sum_theo_score_NFCR = Σ(res_theo_score_NFCRj) (non-functional)
     This step is to be performed on every tool to evaluate.
   - A final theoretical score is computed by weighting the two resulting score sums with the functional and non-functional weights respectively (see phase 2):
       final_theo_score = (sum_theo_score_FCR × functional_weight) + (sum_theo_score_NFCR × non-functional_weight)
     This step is to be performed on every tool to evaluate.
   - The last step is to normalize the final theoretical score so that the resulting value is independent of the number of criteria:
       norm_final_theo_score = [(sum_theo_score_FCR × functional_weight) + (sum_theo_score_NFCR × non-functional_weight)] / [(max_theo_score_FCR × functional_weight) + (max_theo_score_NFCR × non-functional_weight)]
     where max_theo_score is the sum of the maximum score values which can be given to the criteria.
     This step is to be performed on every tool to evaluate.
   - In that way, the best tools of the theoretical evaluation (i.e. those having the highest norm_final_theo_score) can be identified. These qualify for the next phase, the practical evaluation.

The first three phases correspond to the theoretical assessment, which is aimed at reducing the number of tools submitted to the practical trial, usually because of limited resources.

4. Perform the practical evaluation
   - A trial consisting of using the tool(s) remaining after phase 3 is performed to practically assess a representative subset of the functional and non-functional criteria. In this phase, the grades of importance remain identical to those used in phase 3. This is done by running the test cases defined in phase 1. A given test case may thus cover one or several criteria; let us call these the "practical criteria". Note that a unique score should be given to every practical criterion, even if a test case needs to be executed on several platforms or configurations.
   - Based on an actual use of a given tool, the practical compliance of each selected practical criterion is scored between 0 and 3. A compliance value of 3 indicates complete compliance, while 0 means no compliance at all:
       FCRi_pract_score and NFCRj_pract_score
     This step is to be performed on every tool to evaluate.
   - Then, an individual resulting score is generated for each practical criterion by multiplying the importance grade by the practical compliance score:
       res_pract_score_FCRi = FCRi_importance × FCRi_pract_score (functional)
       res_pract_score_NFCRj = NFCRj_importance × NFCRj_pract_score (non-functional)
     This step is to be performed on every tool to evaluate.
   - Individual resulting scores are summed, first over the functional criteria, then over the non-functional ones:
       sum_pract_score_FCR = Σ(res_pract_score_FCRi) (functional)
       sum_pract_score_NFCR = Σ(res_pract_score_NFCRj) (non-functional)
     This step is to be performed on every tool to evaluate.
   - A final practical score is computed by weighting the two resulting score sums with the functional and non-functional weights respectively (see phase 2):
       final_pract_score = (sum_pract_score_FCR × functional_weight) + (sum_pract_score_NFCR × non-functional_weight)
     This step is to be performed on every tool to evaluate.
   - The last step is to normalize the final practical score so that the resulting value is independent of the number of criteria:
       norm_final_pract_score = [(sum_pract_score_FCR × functional_weight) + (sum_pract_score_NFCR × non-functional_weight)] / [(max_pract_score_FCR × functional_weight) + (max_pract_score_NFCR × non-functional_weight)]
     where max_pract_score is the sum of the maximum score values which can be given to the criteria.
     This step is to be performed on every tool to evaluate.

5. Combine theoretical and practical assessment results
   - The last phase consists in combining the theoretical and practical assessment results in order to identify the best tool. This is done in the following way, for every tool having passed the theoretical assessment (phase 3):
       final_overscore = (norm_final_theo_score × theoretical_weight) + (norm_final_pract_score × practical_weight)
     The tool having the highest final_overscore value should be the best solution.
   - If only one tool was considered for the current evaluation, phase 5 is limited to general comments about the norm_final_theo_score, norm_final_pract_score and final_overscore obtained for that tool.
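As an illustration, the scoring arithmetic of phases 3 to 5 can be sketched in a few lines of Python. The criteria codes, grades and compliance scores below are hypothetical examples, not values prescribed by this template; the weights are the typical values suggested in section 2.2.

```python
# Sketch of the phase 3-5 scoring scheme for a single tool.
# Each criterion maps to (importance_grade, theo_score, pract_score);
# grades are 1-5, compliance scores 0-3. All values are made up.
functional = {
    "FCR1": (5, 3, 2),
    "FCR2": (3, 2, 3),
}
non_functional = {
    "NFCR1": (4, 1, 2),
    "NFCR3": (5, 3, 3),
}

MAX_COMPLIANCE = 3  # maximum compliance score per criterion

def normalized_score(fcr, nfcr, f_weight, nf_weight, col):
    """norm_final_*_score: weighted score sum over weighted maximum.

    col selects the compliance column: 1 = theoretical, 2 = practical.
    """
    sum_fcr = sum(c[0] * c[col] for c in fcr.values())
    sum_nfcr = sum(c[0] * c[col] for c in nfcr.values())
    max_fcr = sum(c[0] * MAX_COMPLIANCE for c in fcr.values())
    max_nfcr = sum(c[0] * MAX_COMPLIANCE for c in nfcr.values())
    final = sum_fcr * f_weight + sum_nfcr * nf_weight
    return final / (max_fcr * f_weight + max_nfcr * nf_weight)

# Typical weights (see section 2.2): the practical trial dominates.
functional_weight, non_functional_weight = 0.7, 0.3
theoretical_weight, practical_weight = 0.3, 0.7

norm_theo = normalized_score(functional, non_functional,
                             functional_weight, non_functional_weight, 1)
norm_pract = normalized_score(functional, non_functional,
                              functional_weight, non_functional_weight, 2)

# Phase 5: combine the two normalized scores into the overall score.
final_overscore = (norm_theo * theoretical_weight
                   + norm_pract * practical_weight)
print(round(final_overscore, 3))  # → 0.814
```

Because both scores are divided by the maximum achievable value, final_overscore always lies between 0 and 1, whatever the number of criteria, which is what makes tools with different criteria counts comparable.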

2.2 Rules and recommendations

The following rules and recommendations must be applied to get the best benefit from the method. They are usually verified by the Software Quality Assurance manager, who also plays the role of moderator.

It is essential to perform the 5 phases sequentially in the given order: achieve every phase entirely before starting the next one. In that sense, it is essential to precisely define all criteria and weights (phases 1 and 2) prior to any scoring.


There is no limit on the number of criteria to assess, but it is recommended to keep it to about 20 at most, to avoid having too many criteria and making the evaluation too time-consuming. On the other hand, there should be enough criteria to give an accurate result: the minimum is usually between 5 and 10.

As much as possible, functional criteria should be theoretically evaluated (phase 3). However, it may happen that some functional criteria cannot be theoretically evaluated because the related information is simply not available, incomplete, or too difficult to obtain in the foreseen time scale. If so, such criteria should be selected by default to be assessed during the practical trials.

Non-functional criteria may sometimes be difficult to assess theoretically, most likely because such criteria are usually better understood by actually using the tool(s).

Test cases are run in phase 4 to assess the criteria selected for the practical trial. They receive neither an importance grade nor a compliance score: they simply serve to score the related practical criteria.

Any criterion whose grade of importance is high (4 or 5) should also be covered by the test cases executed during the practical trials.

Another important point is that there can be show-stoppers during the evaluation process, either during the theoretical assessment or during the practical trials. A usual practice is to reject a tool having a criterion whose grade of importance is 5 but whose compliance score is 0. Such criteria are the so-called "destructive criteria".
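This rejection rule can be sketched as follows; the helper name and the example scores are hypothetical, not part of the template.

```python
def destructive_criteria(criteria):
    """Return the codes of criteria that disqualify a tool outright.

    `criteria` maps a criterion code to an
    (importance_grade, compliance_score) pair; a grade-5 criterion
    scored 0 is a show-stopper.
    """
    return [code for code, (grade, score) in criteria.items()
            if grade == 5 and score == 0]

# Example: FCR1 is destructive (grade 5, score 0), so this tool
# would be rejected regardless of its overall score.
scores = {"FCR1": (5, 0), "FCR2": (3, 0), "NFCR3": (5, 2)}
print(destructive_criteria(scores))  # → ['FCR1']
```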

It is clear that the result of the theoretical evaluation may depend on the values attributed to the functional_weight and the non-functional_weight. To limit discussions and diverging interpretations, and to reach a consensus, it is essential to assess these weights in phase 2 and not to modify them afterwards, certainly not in the course of phase 3. Typical values can be: functional_weight = 0.7, non-functional_weight = 0.3.

In the same way, the final evaluation (phase 5) may depend on the values attributed to the theoretical_weight and the practical_weight. Again, it is essential to assess these weights in phase 2 and not to modify them afterwards, certainly not in the course of phase 5. Typical values can be: theoretical_weight = 0.3, practical_weight = 0.7.

The practical_weight should have the higher value, since it comes from a real trial of the tool.
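To see why freezing the weights in phase 2 matters, here is a hypothetical two-tool example (scores and tool names invented for illustration) where the ranking flips when the theoretical and practical weights are swapped:

```python
# norm_final_theo_score and norm_final_pract_score for two fictitious tools.
norm_scores = {
    "tool_A": (0.90, 0.60),  # strong on paper, weaker in the trial
    "tool_B": (0.55, 0.80),  # weaker on paper, stronger in the trial
}

def overscore(theo, pract, theo_w, pract_w):
    """final_overscore as defined in phase 5."""
    return theo * theo_w + pract * pract_w

# With the recommended weighting (practical trial dominates), tool_B wins:
a = overscore(*norm_scores["tool_A"], theo_w=0.3, pract_w=0.7)  # 0.690
b = overscore(*norm_scores["tool_B"], theo_w=0.3, pract_w=0.7)  # 0.725
# With the weights swapped, tool_A would win instead (0.81 vs 0.625),
# which is why the weights must not be tuned after scoring has started.
```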

Every scoring and important decision must be justified, in particular:
- The grade of importance of every criterion (phase 2)
- The values given to functional_weight and non-functional_weight (phase 2)
- The values given to theoretical_weight and practical_weight (phase 2)
- The individual compliance scores determined during the theoretical evaluation (phase 3)
- Why some criteria were not or could not be assessed in phase 3
- The list of best tools resulting from the theoretical evaluation, to be practically assessed (phase 3)
- The list of functional and non-functional criteria to be practically assessed (and why some of these criteria will not be considered when running the trials)
- The individual compliance scores resulting from the practical evaluation (phase 4)
- Any show-stopper
- And, of course, the final choice (phase 5)

3. RATIONALE

<Remind or explain the historical context of the current project. For instance, mention whether a tool has already been used for the project and, if so, recall the reasons for the original choice: what were the initial requirements to address, was there a formal evaluation done to select the tool, why is the current tool no longer satisfactory, …>

4. SELECTED TOOL(S)

<Give here the list and a quick overview of the tool(s) to be evaluated. It should be a description in a few lines. References to available documentation should be given in section 1.2.

Mention, and give the related justification for, any interesting tool which could not actually be selected.>

5. HOW TO DEFINE THE EVALUATION CRITERIA?

<Evaluation criteria are usually defined through interviews, technical discussions, questionnaires, reviews of technical documents, or searches on the web. The current section describes how the list and definitions of the functional and non-functional criteria were obtained. Any working documents (i.e. questionnaires and related answers) should be appended to the annexes of the current document.>

6. LIST OF EXPECTED CRITERIA

6.1 Functional criteria

6.1.1 Description

<List and describe the functional criteria to be addressed by the tool(s). Give a description as detailed as possible of the expected functionality and features. Depending on the tool scope, there can be from 5 to some tens of functional criteria. Be absolutely sure there is no redundancy or overlap in the definitions. Then, give a grade of importance.

The list is not fixed, but it should at least contain the following entries:>

| Description | Details | Code | Grade of importance |
|-------------|---------|------|---------------------|
| Ease of use | are the layout or inputs intuitive? what about the outputs generated by the tool? are there clear informative messages when the user performs a wrong action (i.e. indicating what has not been correctly done, and what should be done instead)? how complete, easy to understand and easy to query is the online help (if any)? | FCR1 | <recommended value=5> |
| …           |         |      |                     |

6.1.2 Justification

<The following section comments on the importance rating, i.e. the grade of every functional criterion. Grades should be given using the complete [1-5] range as much as possible, to point out the differences and order the criteria hierarchically. The value 5 should be limited to the most important criteria.>

6.2 Non-functional criteria

6.2.1 Description

<List and describe the non-functional criteria to be addressed by the tool(s). Give a description as detailed as possible of the expected criteria. Depending on the tool scope, there can be from 5 to about 20 non-functional criteria. Be sure there is no redundancy or overlap in the definitions. Then, give a grade of importance.

The list is not fixed, but it should at least contain the entries given hereafter.>

| Description | Details | Code | Grade of importance |
|-------------|---------|------|---------------------|
| Installation, configuration and start-up | install and configure the tool on the expected target platform(s): was it straightforward? were there good hints provided in the documentation? any issues related to licensing? how to configure a database, if any? how to get the tool correctly starting up? any other comment related to installation, configuration and start-up. | NFCR1 | <recommended value=4> |
| Documentation | which documents are available (user's manual, reference manual, installation guide)? level of the provided documents: are they complete? is there any tutorial? troubleshooting guide? | NFCR2 | <recommended value=4> |
| Long-term solution | since when does the tool exist? how frequent are the new releases? how many users? which platforms are supported? what about the company which distributes the tool: how big and how old is it? how many customers, and which ones? are there already plans for a new forthcoming version? if so, what are the expected new features? even if one needs to be rather cautious, it can be worthwhile to look at some discussion forums and see if, at first glance, there are more positive reactions than complaints. how many open bugs? how critical are they? is it a free, commonly distributed tool (for instance, part of GNU or SourceForge)? | NFCR3 | <recommended value=5> |
| Prices | cost of a license? for how many users? cost and type of support? | NFCR4 | <recommended value=4> |
| Performances | how are the performances of the main expected functionality? | NFCR5 | <recommended value=4> |

6.2.2 Justification

<The following section comments on the importance rating, i.e. the grade of every non-functional criterion. Grades should be given using the complete [1-5] range as much as possible, to point out the differences and order the criteria hierarchically. The value 5 should be restricted to the most important criteria.>

6.3 Test cases

6.3.1 Description

<Describe the test cases which will be run to practically evaluate the tool(s) (phase 4).

A test case is defined by:
- a sequence of actions or basic steps to be performed with the tool(s),
- the description of the expected result or output of the test case execution,
- one or several criteria to be practically assessed: give a list of FCR and/or NFCR code(s).>

| Code | Goal of the test case | Details (sequence of actions or basic steps to be executed) | Expected result or output | Codes of associated criteria |
|------|-----------------------|-------------------------------------------------------------|---------------------------|------------------------------|
| TC1  |                       |                                                             |                           |                              |
| TC2  |                       |                                                             |                           |                              |
| TC3  |                       |                                                             |                           |                              |

6.3.2 Justification

<Justify why some criteria will not be covered by the test cases.>

6.4 The weights

6.4.1 Values

<Give the values of functional_weight, non-functional_weight, theoretical_weight and practical_weight.>

6.4.2 Justification

<Justify the value given to every weight.>

7. THEORETICAL EVALUATION RESULTS

7.1 Compliance scores

<The following section gives, for each tool and for every criterion, a compliance score between 0 and 3 (3 being best compliance), first on the functional criteria, then on the non-functional ones. Repeat the two tables for every tool.>

<TOOLi> Functional criteria

| Code | Description | Grade of importance (FCRi_importance) | Theoretical compliance score (FCRi_theo_score) | Individual theoretical resulting score (res_theo_score_FCRi = FCRi_importance × FCRi_theo_score) | Theoretical compliance maximum value (FCRi_theo_max) | Individual theoretical resulting maximum (res_theo_max_FCRi = FCRi_importance × FCRi_theo_max) |
|------|-------------|---------------------------------------|------------------------------------------------|--------------------------------------------------------------------------------------------------|------------------------------------------------------|------------------------------------------------------------------------------------------------|
| FCR1 | Ease of use | <recommended value=5>                 |                                                |                                                                                                  | 3                                                    |                                                                                                |
| FCR2 |             |                                       |                                                |                                                                                                  | 3                                                    |                                                                                                |
| …    |             |                                       |                                                |                                                                                                  | 3                                                    |                                                                                                |
|      | Totals      |                                       |                                                | sum_theo_score_FCR                                                                               |                                                      | max_theo_score_FCR                                                                             |

<TOOLi> Non-functional criteria

| Code  | Description | Grade of importance (NFCRj_importance) | Theoretical compliance score (NFCRj_theo_score) | Individual theoretical resulting score (res_theo_score_NFCRj = NFCRj_importance × NFCRj_theo_score) | Theoretical compliance maximum value (NFCRj_theo_max) | Individual theoretical resulting maximum (res_theo_max_NFCRj = NFCRj_importance × NFCRj_theo_max) |
|-------|-------------|----------------------------------------|-------------------------------------------------|------------------------------------------------------------------------------------------------------|-------------------------------------------------------|---------------------------------------------------------------------------------------------------|
| NFCR1 | Installation, configuration and start-up | <recommended value=4> | | | 3 | |
| NFCR2 | Documentation                            | <recommended value=4> | | | 3 | |
| NFCR3 | Long-term solution                       | <recommended value=5> | | | 3 | |
| NFCR4 | Prices                                   | <recommended value=4> | | | 3 | |
| NFCR5 | Performances                             | <recommended value=4> | | | 3 | |
|       | Totals                                   |                       | | sum_theo_score_NFCR | | max_theo_score_NFCR |

7.2 Scoring justification

<Justify any of the previous compliance scores. Any show-stopper, as defined in section 2, should also be clearly identified and justified. Comment on scores of 0 or 3, and on any score given to a criterion having an importance grade of 5.>

<TOOLi>

7.3 Best tools from a theoretical point of view

<Identify, justify and comment on the short list of tools resulting from the theoretical evaluation which will be submitted to the practical trials.>

| TOOL | Final theoretical score: final_theo_score = (sum_theo_score_FCR × functional_weight) + (sum_theo_score_NFCR × non-functional_weight) | Maximum theoretical score: max_theo_score = (max_theo_score_FCR × functional_weight) + (max_theo_score_NFCR × non-functional_weight) |
|------|--------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------|
| <tool_name1> | | |
| <tool_name2> | | |
| <tool_name3> | | |
| <tool_name4> | | |

| RANK | TOOL | Normalized final theoretical score: norm_final_theo_score = final_theo_score / max_theo_score |
|------|------|-----------------------------------------------------------------------------------------------|
| 1 | <tool_name1> | |
| 2 | <tool_name2> | |
| 3 | <tool_name3> | |
| 4 | <tool_name4> | |

8. PRACTICAL EVALUATION RESULTS

8.1 Test cases execution results

| Code | Goal of the test case | Effective result or output | Codes of associated criteria |
|------|-----------------------|----------------------------|------------------------------|
| TC1  |                       |                            |                              |
| TC2  |                       |                            |                              |
| TC3  |                       |                            |                              |

8.2 Criteria compliance scores

<The following section gives, for each tool and for every criterion, a compliance score between 0 and 3 (3 being best compliance), first on the functional criteria, then on the non-functional ones. Repeat the two tables for every tool.>

<TOOLi> Functional criteria

| Code | Description | Grade of importance (FCRi_importance) | Practical compliance score (FCRi_pract_score) | Individual practical resulting score (res_pract_score_FCRi = FCRi_importance × FCRi_pract_score) | Practical compliance maximum value (FCRi_pract_max) | Individual practical resulting maximum (res_pract_max_FCRi = FCRi_importance × FCRi_pract_max) |
|------|-------------|---------------------------------------|-----------------------------------------------|---------------------------------------------------------------------------------------------------|-----------------------------------------------------|------------------------------------------------------------------------------------------------|
| FCR1 | Ease of use | <recommended value=5>                 |                                               |                                                                                                   | 3                                                   |                                                                                                |
| FCR2 |             |                                       |                                               |                                                                                                   | 3                                                   |                                                                                                |
| …    |             |                                       |                                               |                                                                                                   | 3                                                   |                                                                                                |
|      | Totals      |                                       |                                               | sum_pract_score_FCR                                                                               |                                                     | max_pract_score_FCR                                                                            |

<TOOLi> Non-functional criteria

| Code  | Description | Grade of importance (NFCRj_importance) | Practical compliance score (NFCRj_pract_score) | Individual practical resulting score (res_pract_score_NFCRj = NFCRj_importance × NFCRj_pract_score) | Practical compliance maximum value (NFCRj_pract_max) | Individual practical resulting maximum (res_pract_max_NFCRj = NFCRj_importance × NFCRj_pract_max) |
|-------|-------------|----------------------------------------|------------------------------------------------|------------------------------------------------------------------------------------------------------|------------------------------------------------------|----------------------------------------------------------------------------------------------------|
| NFCR1 | Installation, configuration and start-up | <recommended value=4> | | | 3 | |
| NFCR2 | Documentation                            | <recommended value=4> | | | 3 | |
| NFCR3 | Long-term solution                       | <recommended value=5> | | | 3 | |
| NFCR4 | Prices                                   | <recommended value=4> | | | 3 | |
| NFCR5 | Performances                             | <recommended value=4> | | | 3 | |
|       | Totals                                   |                       | | sum_pract_score_NFCR | | max_pract_score_NFCR |

8.3 Scoring justification

<Justify any of the previous compliance scores. Any show-stopper, as defined in section 2, should also be clearly identified and justified. Comment on scores of 0 or 3, and on any score given to a criterion having an importance grade of 5.>

<TOOLi>

8.4 Best tools from a practical point of view

<Justify and comment on the short list of tools resulting from the practical evaluation.>

| TOOL | Final practical score: final_pract_score = (sum_pract_score_FCR × functional_weight) + (sum_pract_score_NFCR × non-functional_weight) | Maximum practical score: max_pract_score = (max_pract_score_FCR × functional_weight) + (max_pract_score_NFCR × non-functional_weight) |
|------|----------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------|
| <tool_name1> | | |
| <tool_name2> | | |
| <tool_name3> | | |
| <tool_name4> | | |

| RANK | TOOL | Normalized final practical score: norm_final_pract_score = final_pract_score / max_pract_score |
|------|------|------------------------------------------------------------------------------------------------|
| 1 | <tool_name1> | |
| 2 | <tool_name2> | |
| 3 | <tool_name3> | |
| 4 | <tool_name4> | |

9. OVERALL SCORING

<This section applies only when several tools are evaluated and need to be compared. Only tools having passed the theoretical evaluation should appear.>

| RANK | TOOL | Final overall score: final_overscore = (norm_final_theo_score × theoretical_weight) + (norm_final_pract_score × practical_weight) |
|------|------|-----------------------------------------------------------------------------------------------------------------------------------|
| 1 | <tool_name1> | |
| 2 | <tool_name2> | |
| 3 | <tool_name3> | |
| 4 | <tool_name4> | |

10. CONCLUSION

<The winner is … because, in terms of functional and non-functional criteria, it passed both the theoretical and practical assessments.>


A. Annex 1

<To make the report clear and easy to read, it may be better to put the various tables and matrices (importance of criteria, compliance scores of criteria, test cases, …) in one or several annexes, and simply refer to them in the body of the current report.>


B. Annex 2

<Append here any questionnaire used to get the list of criteria, and the related answers (see section 5).>
