www.studyguide.pk GCE Advanced Level and GCE Advanced Subsidiary Level

Paper 9348/01
Paper 1
General comments
The presentation of scripts was, as usual, generally good. There were the normal problems of candidates
tying them up too tightly, which makes it difficult for Examiners to turn the pages, and there was again
evidence of the use of the fashionable pastel ink pens, which may look particularly nice when writing a letter
but become very difficult to read when an exam paper is being marked in artificial light. Please encourage
candidates, in whatever exam, to use ordinary blue or black inks; there are very good reasons for doing so.
There did not seem to be a time problem for any candidates and no questions were identified by the
Examiners as causing cultural problems in understanding scenarios, although the different foods being sold in
the fast food restaurant in Question 9 were markedly different in some parts of the world. The Examiners did
their best to iron out any problems caused by the language barrier, but reported that these
were very minor this year. Hopefully the care taken over preparing the paper, in this respect, has paid off.
Generally Centres can be proud of the sensible and serious way that their candidates have presented
themselves in this examination.
Comments on specific questions
Question 1
Part (a) proved to be a nice simple starter question. The Examiners were looking for a serial file having
records in chronological order of their arrival. However, insisting on this wording was unreasonable in exam
conditions, and any answer relating to the records not being in a logical order was accepted.
Part (b) proved rather more tricky. The answer is a standard piece of bookwork which, it was expected,
candidates would have learned for the exam. This proved not to be the case. Centres are advised that these
file structures, together with their standard applications, are standard bookwork which it would be sensible
for candidates to learn.
Question 2
The question was specific about the queue being held in an array. The diagram for this is standard. There
are two acceptable methods of controlling the queue. One is to have two pointers, one to each end of the
queue, the other is to have a pointer to one end and a variable queuesize from which the computer can
calculate where the other end should be. Then there are the minutiae of precisely where the pointers point.
Any means of doing this was acceptable as long as the candidate was consistent. It was disappointing to see
how many candidates were unable to even start to produce a sensible answer. Many tried to draw the
diagram found in some text books showing the queue as a circular structure. While not arguing with the
diagram, it was obvious that those that drew it had little or no understanding of what they were drawing. This
is not surprising as the concept of a circular queue has always been considered a high level strategy, indeed
this question certainly did not require it at all. The advice with queues, stacks and linked lists (which many
candidates drew) is to keep it very simple. This will gain most of the marks. Only attempt the more complex
ideas with your better candidates, otherwise confusion will set in.
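As an illustration only (the question required a diagram, not code), the pointer-and-queuesize method described above might be sketched as follows in Python; the names ArrayQueue, front and queuesize are our own, and the rear position is calculated from the front pointer and the queue size:

```python
class ArrayQueue:
    """A queue held in a fixed-size array, controlled by one pointer
    (front) plus a queuesize variable. The rear position is calculated
    as (front + queuesize) mod capacity, so the queue wraps round."""

    def __init__(self, capacity):
        self.data = [None] * capacity
        self.front = 0       # index of the item at the head of the queue
        self.queuesize = 0   # number of items currently stored

    def enqueue(self, item):
        if self.queuesize == len(self.data):
            raise OverflowError("queue full")
        rear = (self.front + self.queuesize) % len(self.data)
        self.data[rear] = item
        self.queuesize += 1

    def dequeue(self):
        if self.queuesize == 0:
            raise IndexError("queue empty")
        item = self.data[self.front]
        self.front = (self.front + 1) % len(self.data)
        self.queuesize -= 1
        return item
```

Any consistent convention for what the pointer refers to would have been equally acceptable; the wrap-around here is exactly the circular behaviour that candidates drew without understanding.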
Question 3
Part (a) was simple definition work which all candidates should have found accessible. The fact that many of
them did not suggests that testing strategies need to be concentrated on. Many candidates were not able to
mention three debugging techniques in part (b). Testing with extreme data,… is not a debugging technique;
using a trace table is.
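For Teachers who want a concrete illustration of the trace-table technique, a hypothetical dry run of a short summing loop might be recorded like this (the function name trace_sum and the loop itself are invented for the example):

```python
def trace_sum(limit):
    """Dry-run a summing loop, returning the rows of a trace table:
    one (n, total) pair recorded after each pass round the loop,
    exactly as a candidate would write them out by hand."""
    rows = []
    total = 0
    for n in range(1, limit + 1):
        total += n
        rows.append((n, total))
    return rows

# Each printed row is one line of the trace table:
for n, total in trace_sum(3):
    print(n, total)
```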
Question 4
Candidates have a worrying faith in computers. Few candidates considered hardware failures, or the more
likely software failures. Even those candidates who did failed to continue to the suggestion of duplicating
hardware and software on the flight. Most got bogged down in a discussion over whether the pilot or the machine should
have the last say. Whilst this is important it is only one point.
Question 5
Part (a) was well attempted.
Part (b) caused some trouble. Most candidates seemed desperate to give examples of the two different
language types. There are never any marks for this as Examiners cannot be expected to know all the
different languages. These types of languages have particular characteristics and it was these that were
wanted.
Question 6
Parts (a) and (b) were well done.
Too many candidates produced a sort algorithm, obviously learned off by heart, for part (c). Those that
produced the merge picked up marks well, but few were able to produce the part of the algorithm outside the
loop, in other words explain what happens when one of the files is completed. Very few recognised that it
was possible that the master file could be emptied before the transaction file.
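A minimal sketch of the expected merge structure, using Python lists to stand in for the two sorted files (the function name and the list representation are our own). The point the question was probing is the code outside the main loop, which handles whichever file finishes first, including the master file emptying before the transaction file:

```python
def merge(master, transaction):
    """Merge two files already sorted on key. The main loop runs only
    while BOTH files still have records; the code after the loop copies
    the remainder of whichever file is not yet finished."""
    out = []
    i = j = 0
    while i < len(master) and j < len(transaction):
        if master[i] <= transaction[j]:
            out.append(master[i]); i += 1
        else:
            out.append(transaction[j]); j += 1
    # One of the files is now exhausted - possibly the master file.
    out.extend(master[i:])
    out.extend(transaction[j:])
    return out
```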
Question 7
Part (a) was the sort of question that produced a strange set of marks. If a candidate did not understand the
idea of the fetch execute cycle they got no marks, whereas those who realised they simply had to say what
happened to the registers to carry out an instruction could have got twice the number of marks available.
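What "saying what happened to the registers" amounts to can be sketched as a few assignments; this is a hypothetical illustration using the conventional register names (PC, MAR, MDR, CIR) and made-up memory contents, showing the fetch part of the cycle only:

```python
# A minimal sketch of fetching one instruction. The memory contents
# and instruction mnemonics below are invented for illustration.
memory = {0: "LOAD 5", 1: "ADD 6"}

PC = 0              # program counter holds the address of the next instruction
MAR = PC            # address copied into the memory address register
MDR = memory[MAR]   # contents of that address arrive in the memory data register
CIR = MDR           # instruction copied into the current instruction register
PC = PC + 1         # PC incremented, ready for the next fetch
print(CIR, PC)
```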
Part (b) was very much Centre based with some excellent responses and others leading the Examiners to
believe that some Centres had not covered this.
Question 8
Some good answers, probably arising from candidates having experienced this personally when producing
their project reports.
Part (b)(i) was poorly answered. The Examiners did not want a diagram drawn, instead they asked for how
diagrams could be used. An answer on the lines of “To explain how the modules fit together” was all that
was expected.
Question 9
Too many candidates gave examples of hardware without providing an explanation of why it was needed. It
was pleasing to see a realisation among candidates that such a system would need cables and network
cards to allow communication.
Diagrams were disappointing. Too many candidates spent too much of their energies trying to replicate the
sort of diagram they had seen in a text book. The question must always be “why?”. Diagrams like these can
be drawn in many ways; all we wanted was something that would show the flow of data through a system and
where the data was stored. Style was irrelevant.
Too many candidates do not understand the difference between a real number and an integer: four very easy
marks that were not gained by the majority of candidates.
Most candidates wrote sensibly about a stock file and other files; it was something of a mystery that they did
not appear on the diagram in part (b).
Paper 9348/02
Paper 2
General comments
All candidates appeared to have had sufficient time to complete the paper; most candidates attempted all the
questions set. There was a wide range of marks awarded with a few very high marks and a few very low
marks. In general, the quality of language was good and most candidates made appropriate use of technical
terminology.
There was an improvement in designing and writing the algorithm for Question 9 (b). Many candidates did
not provide information in the correct format for Question 5.
Comments on specific questions
Question 1
Only better candidates related their answers to the context supplied in the question and provided valid
answers related to educational software for use as a Geography teaching aid e.g.
· Language to be used/reading age of pupils - to ensure that pupils can understand output
· Means of navigation - to allow pupils to access the whole system
· Information to be imparted - aim of software will be at the right level/appropriate for certain types of pupil
· Hardware to be used - so that software is compatible/a suitable interface is chosen
· Access to data: sequential, or system decides on order, or ‘dip in’ - is the software to be used as a course or as a reference or as a learning system
· Should software include pupil tests - may need to set up a file for pupil results/should software decide level of tests
Question 2
(a) Generally well answered.
(b) Most candidates could identify two problems that arose during change over to the new system. Many candidates identified what problem had occurred, e.g. ‘Staff may need training’, but only better candidates expanded their answer to explain why, e.g. ‘because commands to use the new system are different’.
Question 3
(a)(i) Most candidates could show data in BCD form but not all candidates realised that only four bits were required so that two numbers could be stored in each byte.
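The packing of two BCD digits into one byte can be illustrated briefly; this is a hypothetical sketch (function names are our own), showing that each decimal digit needs only four bits:

```python
def pack_bcd(two_digits):
    """Pack a two-digit number into one byte: each decimal digit fits
    in four bits of BCD, so the tens digit takes the high nibble and
    the units digit the low nibble."""
    tens, units = divmod(two_digits, 10)
    return (tens << 4) | units

def unpack_bcd(byte):
    """Recover the two decimal digits from the two nibbles."""
    return (byte >> 4) * 10 + (byte & 0x0F)
```

For example, 59 packs to the bit pattern 0101 1001 - the digit 5 followed by the digit 9.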
(ii) Few candidates indicated that each letter needed a unique code.
(iii) Most candidates knew that a mantissa and exponent were required but few could provide a good description of either part.
(b) Full marks were obtained by nearly all candidates.
(c)(i) Most candidates could explain the meaning of overflow and many candidates illustrated their answer with a correct example.
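A correct example of the kind wanted can be shown in eight-bit two's complement; this sketch (the function add_8bit is our own) demonstrates a sum whose true result is too large for the register, so the stored value is wrong:

```python
def add_8bit(a, b):
    """Add two integers as an 8-bit two's complement register would:
    keep only the low eight bits, then reinterpret the top bit as
    the sign bit."""
    result = (a + b) & 0xFF   # the register can only hold eight bits
    if result >= 128:         # top bit set: value is read as negative
        result -= 256
    return result

print(add_8bit(100, 60))   # the true sum 160 will not fit; the register holds -96
```

The overflow is visible because two positive numbers have apparently produced a negative result.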
(ii) The principle of underflow was understood by many candidates but only the very best candidates correctly related their answer to division of floating point numbers and identified that the result of the calculation produced a fraction which was too small to be stored because the exponent was too negative.
Question 4
(a) Many candidates devised a hashing algorithm but few of these algorithms produced 2000 unique addresses. A common incorrect response was to describe the production of a modulo 11 check digit.
(b) Candidates who had devised an algorithm usually stated two correct catalogue numbers.
(c) Many candidates correctly identified one solution but few could clearly describe two different methods of dealing with a collision.
(d) Removal of an item that had produced a collision was understood by a minority of candidates and clearly described by very few. A good answer should have contained the following points:
· search made to find item to be removed
· dummy value inserted...
· so that the space can be reused...
· but not mistaken for blank space, so stopping access to rest of data.
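These points can be tied together in a hypothetical sketch: a simple modulo hash giving 2000 distinct addresses (unlike a modulo 11 check digit), linear probing on a collision, and a dummy value on removal so that later searches are not stopped early. All identifier names here are our own:

```python
TABLE_SIZE = 2000
EMPTY, DUMMY = None, "<deleted>"
table = [EMPTY] * TABLE_SIZE

def hash_address(catalogue_number):
    """An illustrative hashing algorithm: modulo the table size
    yields 2000 possible addresses, 0 to 1999."""
    return catalogue_number % TABLE_SIZE

def insert(key):
    addr = hash_address(key)
    while table[addr] not in (EMPTY, DUMMY):   # collision: probe forward
        addr = (addr + 1) % TABLE_SIZE
    table[addr] = key

def remove(key):
    addr = hash_address(key)
    while table[addr] is not EMPTY:
        if table[addr] == key:
            table[addr] = DUMMY    # dummy value: the space can be reused...
            return True            # ...but is not mistaken for blank space
        addr = (addr + 1) % TABLE_SIZE
    return False

def find(key):
    addr = hash_address(key)
    while table[addr] is not EMPTY:   # a DUMMY slot does not stop the search
        if table[addr] == key:
            return addr
        addr = (addr + 1) % TABLE_SIZE
    return None
```

Note that after remove(10), an item that originally collided at the same address can still be found, because the dummy value keeps the probe chain intact.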
Question 5
Few candidates drew a correct syntax diagram; a common wrong answer was to attempt to answer the
question using BNF.
A diagram similar to this was required.
Question 6
(a) Most candidates provided excellent answers showing that they had learnt the stages of compilation. However, some Centres had not covered this topic and their candidates gained very few marks.
(b) Most candidates could only identify program development as a circumstance where the use of an interpreter would be appropriate. Other correct answers could refer to when it is important for the programmer to see results immediately, e.g. teaching, or when a program uses frequently used routines for which the interpreter has stored the code, producing fast execution.
Question 7
(i) Better candidates, who discussed the full range of on-line banking services, gained more marks than those who just discussed the use of ATM terminals.
(ii) Candidates needed to be careful not to repeat points made in (i).
Question 8
(a)(i) Candidates needed to show their understanding of top down design and relate this to fewer errors rather than stating a list of advantages, e.g. programmers can be limited to working within their own expertise and hence can be expected to produce fewer errors; shorter sections of code are less complex, giving rise to fewer errors; testing can be more comprehensive because the blocks of code are shorter.
(ii) Most candidates correctly identified modules for ordering/delivery of goods, work rosters and maintenance of personnel files but very few candidates identified the more generic Input and Output routines.
(iii) Both local and global variables were clearly explained but there were few good examples.
(b)(i) Validation was understood by the majority of candidates but not always clearly explained.
(ii) Candidates did not always ensure that their validation checks related to data input from a barcode.
(c) Good clear answers from many candidates.
Question 9
(a) Usually only the temperature and pressure sensors were correctly identified; only the better candidates included a sensor to measure the flow of the two chemicals and a level sensor to measure when the vessel was empty.
(b) Most candidates produced an algorithm that would work for part of the process but there were few totally correct algorithms.
REPEAT
    OPEN W AND X
    REPEAT
        IF VALUE OF A = MAX AND W OPEN THEN SHUT W
        IF VALUE OF B = MAX AND X OPEN THEN SHUT X
    UNTIL W AND X SHUT
    MEASURE TEMPERATURE
    WHILE T < REQUIRED TEMPERATURE
        APPLY HEAT
    ENDWHILE
    TURN OFF HEAT
    TIME = 0
    REPEAT
        REPEAT
            IF TEMP < REQUIRED THEN APPLY HEAT
            IF PRESSURE > SAFE LEVEL THEN OPEN Y
            WHILE PRESSURE > SAFE LEVEL
                OPEN Y
            ENDWHILE
            CLOSE Y
        UNTIL TEMPERATURE >= REQUIRED
        TURN OFF HEAT
    UNTIL TIME = 5
    OPEN Z
    WHILE LEVEL <> 0
        OPEN Z
    ENDWHILE
    CLOSE Z
UNTIL PROCESS SWITCHED OFF
(c) Most candidates could explain at least one example of feedback.
Paper 9348/03
Project
General comments
The projects submitted for the examination covered a wide variety of topics and the candidates were able to
demonstrate their skills in solving problems using a computer and the selected software. Almost all the
projects were written using Access and Visual Basic, there being very little done using traditional
programming languages. Unfortunately, this choice of software was not always the most appropriate for the
solution of the selected problem but the candidates still attempted to justify their choice, often using
arguments which were unrealistic. With so much of the code being generated by the software there is little
opportunity for the candidates to demonstrate their programming skills but there were some who attempted
something a little bit different and this is to be commended. With so much emphasis on Access and Visual
Basic, there is a danger that candidates will think they are the only products on the market and that they will
solve every problem there is. The statements made by candidates for choosing this combination over other
methods were invalid in many cases, showing that they had not really thought through the implications of
what they were planning to do.
Project choice and problem solution
The majority of candidates made a sensible choice of project and carried out a thorough analysis of the
situation they had identified for computerisation. There was a tendency to develop questionnaires as part of
the analysis and some of these were so vague that they would in no way obtain the information that was
required. Many of the projects contained transcripts of interviews between the candidate and the prospective
user of the software and these too contained much that was irrelevant to the analysis of the problem. In
some cases the justification of the choice of software was limited by the facilities available at the Centre and
the intended user but many candidates explored alternative solutions. In some cases the hardware
specification of the suggested system was extremely detailed with candidates describing the function of
every component of the computer. Some of the specifications of the computer systems required were far too
high with items of hardware specified that would never be needed. It would be very surprising to have a
payroll system within a small company that required a scanner, a bar code reader and a modem but these
were specified as being essential to the running of the system.
The choice of Visual Basic clearly made the design of input screens very easy but in many cases the
instructions on screen were not very helpful. Candidates were inclined to use coded field names as the
prompts for input and while these may be known to the writer of the system they are often not very helpful to
the user. Choice of colour was another problem as the wrong choice made some screens unreadable. Use
of a heavily patterned background did not improve clarity and candidates were using dark colours on a dark
background and light colours on a light background. Candidates should consider the user when designing
screens and should keep them as clear and simple as possible. The coding sections concerned with input
were very large as candidates included the code from every screen they had used. This code is very
repetitive and made the projects very long. Candidates should select the code for one screen and then
indicate which ones are similar, thus reducing the size of this section. It would also be helpful if candidates
added a few comments to the code in order to make things clearer for the reader who may not be totally
familiar with the language.
There is still a tendency to direct all output to the screen even if it was eventually printed. It is appreciated
that this is the trend for much of the modern software but the candidates should appreciate that in reality this
would not be acceptable if the volume of output was large. The production of a batch of invoices or payslips
by using a preview on screen followed by pressing the print icon might be fine if you were only producing a
few but to produce hundreds this way would not be sensible. In some cases the projects did not produce
any results of any sort. Data was input and simply left there; nothing was done with it, so why was a
computer used? Simply inputting data to a computer does not necessarily improve an existing manual
system, something must be done with the data to justify the expense of setting up the computer system.
The design of output is something that candidates seem to find very difficult. There must be a question as to
whether they have really thought through who will be using their product as much of the output is of little use.
Screen output is often cluttered and using the same abbreviated field names as used on the input screens.
In some cases the output does not fit on a complete screen and the user has to scroll round to find some of
the items they require. Printed output is often very poorly presented as if the candidates have not really
thought about it. A good sheet of printed output should be clear and self explanatory without any need to
refer back to the documentation to interpret it. Candidates should be advised to look at the output from
commercial systems to see what they produce and then use this as a guide in their own designs. Once
again the use of the strange coded field names should be avoided with thought given to the reader. Numeric
output should be tabulated clearly with decimal points lining up and numbers corrected to two decimal places
in currency fields. Listings of data files should be planned to fit on a page, even if the page is printed in
landscape rather than portrait format. To print a file over several pages makes the output very confusing as
the reader has to line up the pages to make sense of the information. In many cases the finished systems
did not produce any printed output at all, suggesting that the candidates had not really appreciated that most
commercial systems produce something that is printed. The problem of very long coding is present in the
design of output as well and again candidates should be selective where much of the coding is repetitive.
The sections of the report covering technical documentation were generally poor and did not reflect the fact
that systems need to be maintained and this part of the documentation is referred to when making changes.
There is no real need for detailed algorithms and flowcharts which say the same thing. Many projects
contained very large flowcharts which were simply the algorithm statements with boxes round them. There
must be instructions somewhere saying what processing is actually carried out and how it is done with
reference to which part of the code is to be changed if there are alterations to be made. For example in a
payroll program it would be expected that the rules for determining the pay and deductions should be clearly
stated so that changes can be made easily. Similarly, in a billing program the routines for calculating the
cost of items and details of delivery charges and any tax addition should be clearly recorded. Many of the
developed systems submitted by candidates ignored this completely suggesting that the system would be
unworkable if it was ever to be implemented.
The report should contain a user guide which gives clear instructions to the user on how to operate the
system without any reference to the technical documentation. Many projects contained very good user
instructions but others were simply pages of screen shots of input forms with very little else to help the user
understand what to do. In some cases, candidates produced the user guide as a separate document and
this clearly reflected the fact that they had appreciated the need for the user to have a simple guide to help in
installation and running the software.
As in many previous years, testing posed considerable problems for candidates. Many candidates still
considered that testing was simply a matter of producing a table of expected results as shown below:
Test No.  Test                                       Expected result
   1      Test password                              Only ‘ABF’ accepted. Main menu opens automatically
   2      Test Main menu option Customer/Orders      Customer Details form opens
   :      :                                          :
  33      Select Close Database                      Software closes down
While this may be acceptable as a test plan, without any actual results of these tests printed in the report it is
not acceptable, as this table could simply have been produced on a word-processor and bear no relation to
the actual events. In many reports this table extended to several pages with no evidence that
anything had actually been near a computer. Other reports simply included a series of screen shots of input
screens, frequently blank or showing every possible validation check with error messages. While some
examples of tests are important, it is essential to show the final system working with real data and producing
real results. Actual output from the processing of real data must be included, with the output in printed form
and not obtained by using yet another screen dump. In this way the report will show that the candidate has
appreciated that the real world of computing involves printed output and not simply screen displays or printed
results obtained from screen dumps. Printed results should not be merged with the report text; they should be
produced on separate pages to show what is actually produced from the system.
At the end of the project report, candidates frequently say how successful they have been and how their
system has changed forever the operation of the company they have been considering. These claims are
unrealistic in many cases as operation of the new system would frequently cause the company to be less
efficient as well tried manual systems are replaced by computer systems that would be unworkable if they
were to be implemented. While it is good for the candidates to have produced software that may help to
solve a particular problem, to claim that it would really work if ever it was installed would be unrealistic.
Project reports and presentation
All reports were produced using a word processor although it was disappointing to see spelling mistakes in a
word-processed document. Candidates should be encouraged to use a variety of text styles and sizes to
improve overall clarity.
Many candidates seem to feel that Examiners will be impressed by the size of the report. With software
producing almost identical code for input forms a selection should be made for the final report. Similarly,
pages of identical results do not really add much to the final product and candidates should select the most
relevant items for their submission. Some projects were presented as a collection of pages with very little
structure. The final project is a piece of software and the report should reflect this in the way it is presented.
Clearly defined sections should cover the technical documentation and the user guide, the report will need
an index and the pages should be numbered.
Project assessment and marking
In many cases the standard of marking was satisfactory and few changes were needed to produce a
common standard. However, there was an increasing tendency to award marks for non-existent or
inadequate work and it should be noted that marks can only be awarded for work that is present in the report.
In Section D high marks were given when the testing was confined to a table as already described or was
totally inadequate. Moderators reduced the marks for candidates where the testing consisted of pages of
validation routines for input but no actual processing of the data that managed to pass all the various tests.
Similarly, in Section C, high marks were awarded by Supervisors for technical documentation which was
simply pages of flowcharts and user guides which reproduced the pages of screen dumps showing error
messages and blank input forms. A detailed mark scheme is provided in the syllabus booklet and it is
expected that most projects will fit this scheme. Where this scheme was used Moderators were able to
identify quickly where marks had been awarded and this simplified the task of moderation. If an alternative
scheme is used by the Centre then a detailed breakdown of marks in each section must be provided to allow
moderation to take place.
Paper 9691/01
Paper 1
General comments
The first paper of a new specification is always a nervous time for candidates and Teachers because there is
an element of not knowing what to expect. The same is true for the Examiner responsible for the paper
because there is little to compare with. However, this paper seems to have been a successful start to the
new specification. There was little evidence of candidates having time trouble, so hopefully the length of the
paper was about right. The paper elicited a good range of marks with far fewer candidates scoring very
poorly than in past sessions, while it maintained its academic integrity by providing a challenging test to the
better candidates. All questions gave the full range of marks available, though thankfully all the noughts
were not present on the same script!
Candidates’ work was well presented with very few examples of lack of care being taken. It is obvious that
most candidates are justly proud of their work and try to help the Examiners to give them credit. A few
candidates followed a current fashion for using the new pastel shades of ink to answer the paper. While
sympathising with their view that this gives their script a pleasant look, it is very difficult to decipher the work
when marking it under the light from a lamp late at night. Please encourage your candidates to use dark
blue or black. A few candidates used red ink, which causes difficulty when Examiners mark in red.
The comments of Teachers are actively sought and will be welcomed to the A/AS Level Computing email
discussion group that has been set up.
Teachers should be proud of their candidates, in the main, and also of themselves for obviously preparing
them well for a new and complex syllabus.
Comments on specific questions
Question 1
Intended as a nice easy starter question to ease the candidates into the paper. Almost all candidates scored
well, although there were a number who chose to describe the topologies in words rather than a diagram.
While this was not penalised in any way, the candidates penalised themselves because they made the
question more difficult. Some were unable to show the bus in the first diagram, or to come up with
advantages. The attention of Teachers is drawn to the published mark scheme for a list of acceptable
responses to part (b), as indeed it should be studied for all questions.
Question 2
The question worked well with a full range of responses from the candidates. There was a strong hint in the
question of the way to answer it. Sensible candidates took each of the bullet points in turn and wrote down
the facts that they knew about each. Better candidates would also have an eye on the number of marks
available and know that they needed to make 8 points to gain full marks. Candidates should not worry about
making some initial points that are either wrong or not worthy of credit; the Examiners will find the
responses that are worthy of credit and credit those. Candidates should also be aware that nowhere in a
Computing examination are there any marks for writing an essay. While candidates are not penalised for
doing so they may be penalising themselves if they are using their intellectual powers thinking about the
structure of the answer and not about the content. Teachers will note from the mark scheme that the
Examiners work from lists of acceptable responses and this may be a more suitable form of answer for most
candidates, particularly in the pressure of the exam room.
Question 3
The intention was to produce a relatively easy question on testing that would elicit three responses. One
would be of normal data, one of extreme data and one of unacceptable data or of unusual data. On
reflection the question might have been better worded to make it clear that the tests should have been on
different types of data, however the responses were generally of a high class from candidates and they were
able to overcome any shortcomings of the question itself.
Question 4
Many candidates did not read part (b) carefully enough. It was quite clear from the mark allocation that there
were two marks available for each type of primary memory: one for a use and one for the reason why it was
appropriate. This is a clear example of the value of the use of the mark points by candidates. Candidates
should always put themselves in the place of the Examiner after attempting a question. They should think to
themselves “Have I given enough information for the Examiner to be able to give me the four marks that are
available?” In this question the answer was often “No, I have only given two uses, so where are the other two
marks?”
A number of other problems arise in part (b). First, the term primary memory. This is a genuine attempt on
the part of the Examiner to make clear that the question refers to the memory and not to some peripheral
storage device. Many terms could be used including main memory and immediate access store, but this is
the one in the syllabus. A minority of candidates persist in interpreting ROM as a CDROM and consequently
talking about storing an encyclopaedia, which is not an acceptable answer in this question. Others have the
right idea but give the answer ‘BIOS’ being stored in ROM. The problem with BIOS is that the user defined
parts of BIOS cannot be stored in ROM because they need to be changeable by the user. If the candidate
explains this then they will not be penalised, but it is far beyond the scope of AS; it is far safer to stick to the
response of the boot program, because there can be no argument.
Question 5
Questions about hardware configurations are always marked in the same way. The Examiner is looking for
evidence of input, output, storage and communication. In this question, with 8 marks available, it is
necessary to list the devices and explain why they have been chosen.
A menu based interface does not in this question refer to the typical Windows screen. People can argue
over whether this is menu based, but whatever the outcome of such a discussion it is not sensible in this
example.
Question 6
An easy question to pick up full marks on. Attention is directed to the mark scheme for the expected level of
response.
The question stated that a diagram should be used, any candidate trying any written method of explanation
did not follow the instructions. Although not penalised for that they made the question unnecessarily hard.
The word 'algorithm' in part (b) should not put candidates off: it does not say pseudocode, and it never will.
A diagram is the simplest way to explain the algorithm, with perhaps a line of explanation saying that 'lute'
must be searched for in the list, and a note that if it is not found a report is made to that effect.
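The search described above is, in essence, a linear search. A minimal sketch in Python (the function name and the list of instruments are assumptions for illustration only):

```python
def find_item(items, target):
    """Linear search: examine each list entry in turn."""
    for position, item in enumerate(items):
        if item == target:
            return position  # found: report where it is
    return None  # not found: a report is made to that effect

instruments = ["flute", "harp", "lute", "oboe"]
print(find_item(instruments, "lute"))   # found at index 2
print(find_item(instruments, "viola"))  # None: not in the list
```

The explicit `None` return mirrors the note in the question: the algorithm must report when the item is not in the list, not simply stop.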
Question 7
Too many candidates decided to write about four stages in the system life cycle rather than answering the
question. The intention was to ask a more complex question than is normally asked on this topic by
focussing on one aspect of the cycle. The question worked well and was a good discriminator between
candidates.
Question 8
A standard question. Notice that the question does not go into any depth about the types of switching; it
cannot at this level. Two points: first, the distinction is not that the message is sent in packets, as that is true
for both types of communication; second, there can be no mention of speed of communication being an
advantage of either, as the speed of communication relies on other factors.
Question 9
Poorly answered. Basically the records need to be accessed in sequence when they are batch processed
with the transaction file to create the bills, and they need to be accessed quickly (hence the indexing) when
an enquiry is made about an existing record.
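The two access patterns described above can be sketched with an in-memory list standing in for the sequential file and a dictionary standing in for its index. The record fields and values are hypothetical, chosen purely for illustration:

```python
# Hypothetical customer records, stored in key order (the sequential file)
records = [
    {"id": 101, "name": "A. Khan", "balance": 12.50},
    {"id": 205, "name": "B. Osei", "balance": 3.75},
    {"id": 310, "name": "C. Silva", "balance": 20.00},
]

# Batch processing: read every record in sequence to produce the bills
for rec in records:
    print(f"Bill for {rec['name']}: {rec['balance']:.2f}")

# The index maps each record key to its position, allowing direct access
index = {rec["id"]: pos for pos, rec in enumerate(records)}

# Enquiry about one existing record: jump straight to it via the index
enquiry_id = 205
print(records[index[enquiry_id]]["name"])  # B. Osei
```

The point of the indexed sequential organisation is exactly this combination: the bills run needs every record in order, while the enquiry needs one record quickly.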
Question 10
Too many candidates confused this with random access (hashing). Attention is drawn to the scheme for the
expected response.
Question 11
There are still too many candidates who refer to restoring data from an archive file, but the situation is better
than it used to be. One criticism of the good answers was that too much information was given. There are only
4 marks for the question; long essays waste the candidate's time.
Question 12
Almost all candidates could describe two validation tests but the question specifically states that they are to
be performed on the amount of money, which immediately makes some tests unreasonable.
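Two validation tests that are reasonable for an amount of money are a range check and a format check. A minimal sketch, assuming hypothetical limits and a two-decimal-place currency format:

```python
import re

def range_check(amount, low=0.01, high=1000.00):
    """Range check: the amount must fall between sensible limits (assumed here)."""
    return low <= amount <= high

def format_check(text):
    """Format/type check: digits, with at most two decimal places."""
    return re.fullmatch(r"\d+(\.\d{1,2})?", text) is not None

print(range_check(49.99))     # True
print(range_check(-5.00))     # False
print(format_check("49.99"))  # True
print(format_check("49.999")) # False
```

Tests such as a spelling check or a look-up against a file, by contrast, are unreasonable for a monetary amount, which is the point the question was making.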
Question 13
Just because it is at the end of the paper does not mean that the question has to be difficult and most
candidates were able to score well on this important topic.
Question 14
The distinction is between two types of report that an MIS will produce, not just any type of report. Again,
refer to the scheme for the expected responses.
Paper 9691/02
Practical Tasks
General comments
The overall performance of the candidates was very high. It was very clear that the large majority of
candidates had spent a great deal of time in producing their answers to the questions set in the paper, and
indeed, had taken great pride in the production of their work. In most cases, the work was clearly set out and
labelled accordingly, making the moderation of the work presented for examination that much easier.
The annotation of scripts by Teachers whilst marking, to clearly indicate where marks had been awarded,
proved to be very useful when externally moderating the work and all Teachers should be encouraged to
carry out this procedure in the future.
Many Centres included electronic copies of the work for moderation, which is not required. The only
evidence required is hard copy e.g. screenshots and listings on paper. In questions such as Question 2,
where candidates could use sound, video clips and animation within their work, moderation of such work
depends on the professional judgement of the Teacher when awarding the relevant marks.
Comments on specific questions
Question 1
This question on Jackson diagrams and algorithms was generally well answered with the majority of
candidates achieving very high marks.
(a)
Generally the question was well answered, with all candidates understanding the concept of
Jackson diagrams. Marks were lost mainly for not generating two numbers, not keeping a
running total of throws, and not outputting percentages for each throw.
(b)
Generally the question was well answered, with candidates producing algorithms in the form of
pseudocode and program listings. Some algorithms used alternative methods to the arrays
indicated in the mark scheme, but candidates were not penalised for these alternatives,
provided they worked correctly. There was a general lack of evidence of the output loop, or its
alternative, starting at 2, which is necessary as the minimum total score possible is 2.
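The algorithm can be sketched as follows. This is one possible interpretation, assuming two six-sided dice and a fixed number of throws; note that the output loop starts at 2, since the minimum total of two dice is 2:

```python
import random

throws = 100
counts = [0] * 13  # indexed by total score; possible totals run from 2 to 12

for _ in range(throws):
    total = random.randint(1, 6) + random.randint(1, 6)  # generate two numbers
    counts[total] += 1  # running total of throws for each score

# Output loop starts at 2: a total of 0 or 1 is impossible with two dice
for score in range(2, 13):
    print(f"{score}: {counts[score] / throws * 100:.1f}%")
```

An array of counters indexed by the total, as here, matches the mark scheme's intent; equivalent structures were accepted provided they worked.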
Question 2
The presentation produced by the candidates was, almost without exception, to a very high standard. It was
obvious that the candidates had put a great deal of time and effort into this question. Almost without
exception, candidates used MS PowerPoint to produce their answer.
(a)
The collection of the necessary data and information about the Centre was well conducted, with
many candidates interviewing the principals and leaders of their establishment, along with
collecting other information. Many candidates gave evidence of a storyboard or some other form of
progression through the presentation. Although many candidates discussed the merits of various
types of software for the presentation, very few gained marks for the design part of this question.
Only a small number of candidates considered the possible style of the presentation, e.g. the
background colour; the type, colour and size of font; and which logo, images and photos to use,
before actually constructing the presentation.
(b)
All of the presentations were very interesting, well thought out and well produced. Unfortunately,
no marks could be awarded where there was no hard copy evidence of the presentation. As stated
above, electronic evidence in such situations cannot be accepted.
(c)(i)
Generally well answered. The majority of candidates gained marks for collecting data from the
audience, and for a suitable layout of the form, although fewer candidates gained marks for having
tick boxes and other types of single-stroke entry field.
(ii)
Most candidates achieved at least one mark for this question. In order to gain two marks,
candidates needed to show some numerical evaluation of their surveys, and the majority of
candidates failed to do this.
(iii)
Generally well answered, with any suitable and feasible improvements being accepted to gain the
marks. Full marks were awarded in cases where there was a good description of only one
improvement, as well as in cases where more than one improvement was noted. Suggestions for
improvements included better sound, better timing as well as the actual presentation style of the
slides.
Question 3
Not all candidates used spreadsheets, as the mark scheme had anticipated. Candidates also used a
database or Visual Basic in order to produce an answer to this question. Provided the candidate’s solution
worked and satisfied the requirements of the question they were not penalised, and marks were awarded
accordingly.
Part (a) examines the initial design of the solution, part (b) the testing strategy for the
solution, and part (c) the running of the solution. In some scripts, answers covered
all of the first three parts of Question 3, and in these cases it was difficult to award marks as allocated on the
mark scheme.
(a)
Generally well answered, with the majority of candidates achieving all five marks here, independent
of the type of software used.
(b)
Again, generally well answered, although only a few candidates included double entry or visual
checking as means of testing the accuracy and validity of the data. Many candidates also included
validation checks on non-numeric fields.
(c)
Many candidates achieved full marks here, and this question was generally well answered.
(d)
Again, most candidates who answered this question achieved full marks. Many stated that the
solution achieved what it set out to achieve and that it was easy for Teachers to use. Possible
improvements offered included easier data input methods and a graphical or statistical analysis of
the results.