Is MarkManager right for me?

On-screen marking is applied today in education
in several different contexts. Here's a list of
MarkManager characteristics that suit typical use cases.


"High-stakes"
assessment

MarkManager is designed for examinations and tests where the outcome is important to the candidate. The solution's sweet spot is therefore "high stakes" assessment.

Rigour is built in right through the marking cycle to ensure equity, accuracy and reliability of marking. Examples include GCSE and A-level examinations in the UK.

High-candidate-count scenarios

MarkManager is not designed for marking assignments in a small class or tutorial group. Rather, it thrives on large volumes of candidates, where economies of scale apply.

Examples include an examination series across multiple schools in a region, certification tests for a national or global professional body, and university entrance examinations shared by multiple universities. Projects, assignments and performances that apply to a large cohort are also in scope. As a rule of thumb, a minimum of ten thousand marker task assignments applies; where each candidate sits multiple tests, this may correspond to a smaller number of candidates.

Significant proportion of questions not multiple choice

Multiple choice responses can be auto-marked at the scanning stage using Optical Mark Recognition (OMR) technology, without the need for an on-screen marking solution like MarkManager. Similarly, objective responses captured in an online test can generally be auto-marked.

MarkManager’s primary role is the marking of responses by humans. The types of written responses most suited to MarkManager are short-form text (including objective responses in pen-and-paper tests), long-form text, diagrams and schematics, or a combination of these response types.

A single test can contain a combination of multiple choice questions and questions which require an objective or subjective response. MarkManager can extract only those responses which require human marking.

Common marking scheme across a set of responses

MarkManager requires a consistent set of criteria for evaluating candidate responses. During the configuration process, a marking scheme (or rubric) is defined in MarkManager which captures the basis for awarding marks, for example whether holistic or criterion-based marking applies to a given item.


For pre-printed examination booklets, MarkManager can allow a degree of choice over which option the candidate selects for one or more writing task topics.

For projects and assignments, each candidate can specify their own task, provided the marking scheme is consistent. For example, candidates might choose their own writing topic and then answer on that topic; this can be accommodated as long as the marker can see each candidate's chosen topic. Likewise, musicians can choose their own piano piece to play, as long as the marking criteria are agreed in advance and applied consistently.

For written tests, both combined Question & Answer booklets and separate Writing booklets are supported.