https://www.e-assessment.com/glossary-of-terms/
Adaptive Feedback
Feedback given to students on an assessment item, where the feedback is modified
according to the student’s response, particularly where the response was marked
as incorrect, and typically aims to provide guidance on why the answer was
wrong.
Advanced Question Types
Advanced
Question types are a level above Standard Question types. They typically
require complex setup and/or marking, or they contain complex marking
algorithms. Additionally, the level of interaction within the item may also be over
and above that of a Standard Question Type, and therefore they could be
considered as separate applications within a test.
Application Programming Interface
The software
mechanism by which one computer program makes its functionality and data available
to another computer program. For example, a test-marking program might use an
API to request the correct marking key from the item bank program. APIs are
widely used in internet applications where programs are distributed across
different servers (for example in multi-vendor systems).
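To make the marking-key example concrete, here is a minimal sketch of one
program calling another’s API over HTTP. The endpoint URL, path and response
fields are invented for illustration and do not belong to any real item-bank
product.

```python
# Minimal sketch of one program using another's API, assuming a hypothetical
# item-bank service that exposes an HTTP/JSON endpoint. The URL and the
# response fields below are illustrative only.
import json
import urllib.request

def fetch_marking_key(item_id: str) -> dict:
    """Ask the (hypothetical) item-bank service for an item's marking key."""
    url = f"https://itembank.example.org/api/items/{item_id}/marking-key"
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read())

# A test-marking program might then apply the key to candidate responses:
# key = fetch_marking_key("ITEM-0042")   # e.g. {"correct_option": "C"}
```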
Assessment
An instrument
(e.g. an on-screen examination) used to make a judgement about learning, skills
acquisition or educational readiness or need against a set of pre-determined
criteria. Assessment instruments include examinations and tests, but also
portfolio and observation-based approaches to judgement.
Assessment Delivery System
A combination
of software, hardware and communications components, as well as human processes
that manage the end-to-end process of administering e-assessments. Note: this
includes delivering the test package to centres, supporting the examination
administration functions, delivering the assessment to the candidate,
recovering their response, managing the marking and verification processes,
setting standards and providing assessment outcomes. It generally does not
include the writing and production of the assessment.
Assessment Engine
The software
application that delivers an electronic test on a computer. The engine usually
consists of generic components (such as a web server) and proprietary software
(such as a specific e-assessment program). An e-assessment system typically
consists of an assessment engine, an item bank and an item and test creation
system.
Audio Capture Question Type
The audio question type enables the capture of sound created by the candidate.
This could be spoken sentences, such as in a language test, or music from a
performance. Audio question types can sometimes start automatically as part of
a question workflow, or be controlled by the candidate as part of evidence
capture. If being used as part of language testing, it would be appropriate to
combine the audio capture question with either on-screen text or sound files.
Visual feedback on-screen is important so that the candidate is clear that the
capture has started and is recording audio. Capture can be open-ended; however,
this generates larger files, and consideration of file size is important, as
language testing may be conducted in a location with poor internet or power
stability. Use of an on-screen avatar can sometimes help to improve the quality
of the conversation: if the candidate feels they are speaking to a real person,
the quality of the audio is less likely to suffer. Giving the candidate the
opportunity to play back the recording can also be useful. Audio quality can be
affected by the confidence of the candidate, the quality of the microphone or
background noise from other candidates. Simple techniques, such as placing
candidates completing an audio capture section of a test next to candidates
undertaking a written component, can reduce background interference. The more
practice a candidate has with this type of question, the better they will
perform. Classed as an Advanced Question Type, audio capture questions are not
typically auto-marked.
Authoring Tool
Software used
to create items and tests for e-assessment. Note: the term is used more widely
to cover tools used for creating any content for delivery on-screen (e.g.
e-learning, general web pages, etc.).
Classical Test Theory (CTT)
CTT is a set
of statistical measures relating to the performance of items within a test, and
the test itself, and used to provide evidence about the quality of a test and
the items within it. CTT’s most commonly used measures include facility (the
difficulty of a question), discrimination (the extent to which candidates’
performance on a question mirrors their performance on the test as a whole) and
Internal Reliability (the extent to which the test assesses a single
construct). CTT is relatively easy to implement and understand, but it cannot
distinguish between facets of the candidates and the questions (e.g. whether a
question is hard or a candidate cohort is weak); for this reason Modern Test
Theory approaches, including Latent Trait Theory, Rasch analysis and Item
Response Theory, are increasingly widely used.
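As an illustration, the two most commonly quoted CTT measures can be computed
directly from a matrix of dichotomous (0/1) item scores. This sketch uses an
invented five-candidate, four-item response set and the simple (uncorrected)
point-biserial correlation for discrimination.

```python
# Sketch of two common CTT item statistics over a small dichotomous (0/1)
# response matrix: rows are candidates, columns are items. Data is invented.
from statistics import mean, pstdev

responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
]

def facility(item: int) -> float:
    """Proportion of candidates answering the item correctly."""
    return mean(row[item] for row in responses)

def discrimination(item: int) -> float:
    """Uncorrected point-biserial: correlation of item score with total score."""
    scores = [row[item] for row in responses]
    totals = [sum(row) for row in responses]
    m_s, m_t = mean(scores), mean(totals)
    cov = mean((s - m_s) * (t - m_t) for s, t in zip(scores, totals))
    return cov / (pstdev(scores) * pstdev(totals))

print(facility(0), round(discrimination(0), 2))   # 0.6 and approx. 0.48
```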
Competency-Based Assessment
An assessment
process based on the collection of evidence on which judgements are made
concerning progress towards satisfaction of fixed performance criteria which
describe the competency. Note: the competency-based assessment of an individual
takes no account of the performance of others in the wider group being assessed
(as is the case in norm-referenced assessment), and is typically limited to a
pass/fail grading (also called mastery/non-mastery). Competency testing is
typically used for licence to practice assessments (for example for doctors,
pilots, etc.).
Computer Adaptive Test
A CBA test in
which successive items in the test are selected for presentation by a computer
algorithm drawing primarily on the properties and content of the items, and the
test taker’s response to previous items. Adaptive tests are typically used in
summative tests where a more accurate test result (compared to a linear test)
can be determined for a given test duration (or a shorter test can be
provided). For diagnostic purposes, adaptive tests allow a more detailed
exploration of strong and weak areas in a test taker’s knowledge within a given
time for a test. Two key features of adaptive tests are that (a) test takers
cannot return to questions once they have moved on and (b) the test taker’s
responses to items must be computer-marked.
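The selection logic can be sketched very simply. Real adaptive engines select
items using IRT information functions and far richer rules (content balancing,
exposure control); the item pool and fixed-step ability update below are purely
illustrative.

```python
# Highly simplified sketch of adaptive item selection: the next item is the
# unused one whose difficulty is closest to the current ability estimate, and
# the estimate moves up or down after each response. Item data is invented.
item_difficulties = {"Q1": -1.0, "Q2": -0.5, "Q3": 0.0, "Q4": 0.5, "Q5": 1.0}

def run_adaptive_test(answer_fn, n_items: int = 3) -> float:
    ability, step = 0.0, 0.5
    unused = dict(item_difficulties)
    for _ in range(n_items):
        item = min(unused, key=lambda q: abs(unused[q] - ability))
        correct = answer_fn(item)     # deliver the item, capture the response
        del unused[item]              # candidates cannot return to an item
        ability += step if correct else -step
        step /= 2                     # narrow the estimate as evidence grows
    return ability

print(run_adaptive_test(lambda item: True))   # candidate answering all correctly
```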
Computer Based Training
A term from
the 1980s describing learning which is mediated by a computer (typically a
stand-alone computer). Note: the modern term ‘e-learning’ differs in that there
is an expectation with e-learning that more of modern computing’s capabilities
(rich media, interactivity, etc.) will be used, as well as the computer’s
connection to internal networks and the internet.
Computer-Assisted Assessment
Computer-aided
assessment (or “computer-assisted assessment”) describes assessments delivered
with the help of computers. This includes assessments delivered to the
candidate on-screen, developed on computer but delivered on paper, marked
on-screen or electronically (e.g. using OMR).
Computer-Based Assessment
A subset of
CAA where the candidate is presented with the question on-screen and responds
to the question using the computer.
Content Management System
A general web
term for a software tool which manages and maintains the technical and
editorial content of a website. Note: in an e-assessment setting, it describes
a software tool for managing assessment content (items and tests) and will
generally comprise an Authoring Tool/System and an Item Bank.
Delivery Method
Delivery method refers to the way in which the test is taken by the candidate;
this can be online, offline, secure locked-down or open book.
Delivery Platform
The platform being used on the client device: for example, a PC running Windows
or Linux (e.g. Ubuntu), a smartphone running Android, or an Apple Mac running
macOS or Windows.
Diagnostic Testing
Non-accredited assessment used as part of a learning programme to identify a
learner’s
strengths and weaknesses with a view to providing an appropriate learning
programme. Generally undertaken at the start of a programme, diagnostic
assessment therefore needs to evaluate learners’ existing levels of attainment
across the range of relevant knowledge, skills and understanding, so as to
inform personalised learning.
Digital Logbooks
An online
record of practice experience and skills that are assessed. The logbook often
complements formal examinations.
Drag-And-Drop Question Type
The Drag and
Drop question type requires candidates to drag answer options into the relevant
drop zones; this can be useful for a number of question scenarios. Drag and drop
items can be text, video or image-based. Classed as a Standard Question type,
auto-marking is possible with this question.
e-Assessment
E-assessment
describes a range of activities where technology is used to enhance some or all
elements of an educational assessment process.
Common activities are on-screen tests (the candidate reads and answers
the question on-screen), electronic marking (the marker marks on screen –
either a scanned candidate script or an on-screen response), remote proctoring
(candidates taking on-screen tests are invigilated remotely), item banking
(test questions are stored in a database and assembled into tests electronically),
and e-portfolio (candidates assemble digital evidence of their learning for
skills assessments).
E-portfolio
An
e-portfolio contains digital items – ideas, evidence, reflections, feedback
etc., captured on the web which someone can present to a selected audience as
evidence of their learning and/or ability.
E-portfolio Management System
An
e-portfolio assessment system enables the learning and skills of a number of
individuals to be captured and assessed online.
Either/Or Question Type
The Either/Or
question type requires candidates to select the correct answer from two
different answer options and is most often used for questions where the options
are true/false or yes/no. Classed as a Standard Question type, auto-marking is
possible with this question.
End Point Assessment
The final assessment in the UK’s reformed apprenticeships, which takes place
after the student’s employer has signed the student off as ready for
assessment.
Typically the assessment is graded and designed to show that the student is
competent across the range of skills and knowledge that have been covered
during their apprenticeship.
Equation Entry Question Type
The Equation
Entry question type provides candidates with an equation creation tool, which
allows them to enter complex equations into the answer box. Classed as a
Standard Question type, auto-marking depends on the complexity of the equation
and the capabilities of the technology.
Essay Style Question Type
An Essay
question requires the candidate to write an extended response in answer to a
question. Typically, this question type will provide the candidate with text
formatting features, and sometimes the ability to insert symbolic information
such as equations or symbols. Due to the length of time a candidate may spend
on this question, it is important that their responses are securely stored at
regular intervals to prevent loss of data due to computer error. Options
around the permitted response length can often be set to control the amount of
information the candidate provides. It is important that the candidate is able
to view all information provided either through an expanding window or
scrolling. Classed as a Standard Question type, auto-marking is not possible
with this question and human-marking is required.
Extended Matching Question Type
The Extended
Matching question type allows item authors to create questions where candidates
are required to respond by making correct links between two lists of options.
There are a number of linking options available within this question type,
including one-to-one, one-to-many and many-to-many relationships. Classed as a
Standard Question type, auto-marking is possible with this question.
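One way such links might be auto-marked is sketched below, with invented
content: representing the key as a mapping from each left-hand option to the
set of right-hand options it should link to covers the one-to-one, one-to-many
and many-to-many cases uniformly.

```python
# Sketch of extended-matching auto-marking. The key maps each left-hand
# option to the set of right-hand options it should be linked to; sets cover
# one-to-one, one-to-many and many-to-many relationships. Content is invented.
key = {
    "Mammal": {"Whale", "Bat"},     # one-to-many
    "Bird": {"Penguin"},            # one-to-one
}

def mark_matching(response: dict) -> int:
    """Award one mark per left-hand option whose links exactly match the key."""
    return sum(1 for left, links in key.items() if response.get(left) == links)

print(mark_matching({"Mammal": {"Whale", "Bat"}, "Bird": {"Ostrich"}}))   # 1
```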
Familiarisation Test
A test used
by either the candidate or the centre. The candidate would use the test to
simulate the real assessment so that they can become familiar with the delivery
interface, question types and test structure. A centre may use the test to
highlight any technical or process problems ahead of the exam day.
File Attach Question Type
The File
Attach question type allows test providers to present candidates with a file
which they can change or edit before uploading back into the test for marking.
This question type is well suited to any assessment that requires candidates to
use external software, such as Microsoft Word or Microsoft Excel. The files can
be pre-populated with information or can be left blank. The files can be
launched securely to prevent the candidate accessing additional files or other
applications. Typically both the candidate’s device and the marker’s device
will require a licence to the file type’s software, otherwise access to view or
edit the file might not be possible. Classed as a Standard Question type,
auto-marking is not normally possible with this question type and therefore
requires human-marking.
Fill in the Blank Question Type
The Fill in
the Blank question type requires candidates to fill in blank spaces in a
passage of text. Multiple answer options can be set for multiple blank spaces
within a passage of text. Fill in the Blank can be used as an alternative to
using the Select from a List question type. The space provided for the response
typically expands to the answer given so as not to provide the candidate with a
clue to the response required. Classed as a Standard Question type,
auto-marking is possible with this question.
Hotspot Question Type
A Hotspot
question requires the candidate to select one or more areas within an image.
Variations of this include providing a single point answer with options for
marking tolerance around the point. Multiple points can also be used along with
area mapping. In most cases it would be unreasonable to expect the candidate to
pick an exact pixel, therefore marking tolerance is normally recommended.
Classed as a Standard Question type, auto-marking is possible with this
question.
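Marking with tolerance reduces to a distance check, as in this minimal sketch
(the coordinates and the 10-pixel radius are invented for illustration).

```python
# Sketch of single-point hotspot marking with a tolerance radius: the click
# is correct if it falls within `tolerance` pixels of the key point.
import math

def mark_hotspot(click, key, tolerance: float = 10.0) -> bool:
    """True if the candidate's click lies within tolerance of the key point."""
    return math.dist(click, key) <= tolerance

print(mark_hotspot(click=(148, 203), key=(150, 200)))   # True: ~3.6 px away
```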
Interoperability
A feature of computer system components which allows the components to
interact and exchange information according to technical standards that define
functionality useful to the user. The IMS QTI specification is an example of an
interoperability specification
within the e-assessment domain. IMS QTI allows the transfer of test questions
(and their associated metadata) from one system to another.
Item Response Theory
IRT is a
commonly used Modern Test Theory statistical approach to measuring the performance
of candidates and test questions. Part of a wider group of Latent Trait Theory
approaches (including Rasch analysis), it is based on the idea that the
probability of a correct response to a question is a mathematical function of
both the candidate’s ability and the item’s difficulty (in contrast to CTT
where the difficulty of a question is fixed). IRT analysis is more complex to
implement and interpret but provides analysts with information to distinguish
between the ability of a candidate and the difficulty of an item, thereby
allowing comparisons of different candidate groups and/or different tests.
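For example, under the widely used two-parameter logistic (2PL) IRT model, the
probability of a candidate with ability θ answering item i correctly is:

```latex
% 2PL model: \theta = candidate ability, b_i = item difficulty,
% a_i = item discrimination.
P(X_i = 1 \mid \theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}
```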
Likert Scale Question Type
Typically associated with surveys, the Likert Scale question type provides the
candidate with the means to answer according to a scale (for example “Strongly
Agree, Agree, Neutral, Disagree, Strongly Disagree”). The scale may have fixed
points, making it effectively a horizontal multiple choice question, or
variable points, which require a degree of marking tolerance. Classed as a
Standard Question type, auto-marking is possible with this question.
Locked Down
Locked Down
refers to the delivery method used during the assessment. An assessment in
Locked Down mode should prevent the candidate gaining access to applications or
internet sites not permitted during the assessment.
Manual Test Generation
The manual
(human) process by which a test instance (a test form) is generated from a bank
of items (according to a formal or informal set of rules which may involve a
selection algorithm and randomisation). Note: this is widely used in
e-assessment settings where the candidates do not all take the test at the same
time, hence a variety of tests are required (to reduce the likelihood of test
and question exposure) but where known comparability is required between the
tests so that fair results can be given. Also see Automated Test Generation,
which is increasingly used where large numbers of test instances are required.
Metadata
Reference
data about a piece of information (e.g. an assessment item) that enables it to
be systematically stored in and retrieved from a database (e.g. an item bank)
according to a variety of selection criteria. In the context of assessment,
metadata might typically refer to aspects such as qualification or test
specifications, curriculum content and performance statistics. Note: metadata
is most useful when it conforms to an open standard, e.g. IMS LOM or QTI.
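A sketch of how item metadata might be stored and queried in an item bank
follows; the field names here are illustrative, not those defined by IMS LOM
or QTI.

```python
# Sketch of item metadata enabling retrieval by selection criteria.
# Field names are invented for illustration (not IMS LOM/QTI element names).
item_metadata = {
    "item_id": "ITEM-0042",
    "qualification": "GCSE Mathematics",
    "curriculum_topic": "Algebra",
    "question_type": "Multiple Choice",
    "facility": 0.62,            # performance statistics from live use
    "discrimination": 0.41,
}

def matches(meta: dict, **criteria) -> bool:
    """True if the item satisfies every selection criterion."""
    return all(meta.get(k) == v for k, v in criteria.items())

print(matches(item_metadata, curriculum_topic="Algebra"))   # True
```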
Mobile Learning
A type of
e-learning where the learning is undertaken using a mobile ICT device (e.g. a
PDA, mobile phone or smartphone, handheld computer, etc.). The availability and
popularity of handheld portable devices has led to some research into the use
of m-learning techniques for e-assessment.
Multiple Choice Question Type
A Multiple Choice question (MCQ) gives candidates a number of answer options to
choose from, with only one correct answer (the “key”); the incorrect options
are known as “distractors”. These are the most common on-screen question types
in use within Computer-Based Testing. Text is typically used in the answer
options, but images or equations may also be used. In some versions of MCQ
tests, candidates may be asked to
provide their confidence level that the selected answer is correct, which can
then be used as part of the scoring calculation. Typically answer options are
randomised to reduce assessment malpractice, however care has to be taken not
to use punctuation on the last item only, or to use terms like ‘All of the
above’ if the intention is to randomise the answer order. Due to the correct
answer being presented on-screen, consideration is required when authoring to
prevent the correct answer from standing out, e.g. candidates can sometimes
guess at the ‘longest answer option’ using the logic that the author has had to
be explicit in the answer. This question type may also use weighted marking,
where the award for one option may be greater than another. Classed as a
Standard Question type, auto-marking is possible with this question.
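Two of the behaviours described above, randomised option order and weighted
marking, can be sketched as follows; the item content and weights are invented.

```python
# Sketch of MCQ delivery and auto-marking: options are shuffled before
# display, and weighted marking awards per-option marks. Content is invented.
import random

item = {
    "stem": "Which CTT statistic describes how hard a question is?",
    "options": {"A": "Facility", "B": "Discrimination", "C": "Reliability"},
    "weights": {"A": 1, "B": 0, "C": 0},   # "A" is the key
}

def display_order(item: dict) -> list:
    """Return option identifiers in a randomised display order."""
    order = list(item["options"])
    random.shuffle(order)
    return order

def mark(item: dict, selected: str) -> int:
    """Auto-mark by looking up the weight of the selected option."""
    return item["weights"][selected]

print(display_order(item), mark(item, "A"))
```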
Multiple Response Question Type
A Multiple
Response question is similar to a Multiple Choice question, except more than
one answer option is correct and candidates may be asked to respond by
selecting all of the correct options. This type of question can sometimes use
images or equations. Variations may also include combination responses and
controls on how many responses can be given. Classed as a Standard Question
type, auto-marking is possible with this question.
Navigation
In an
e-assessment context, the on-screen buttons and other controls that move candidates
from screen to screen in an on-screen assessment (typically, from question to
question), and provide access to other non-question specific features such as
on-screen help, print functions, exit, etc.). They are generally visually
separate from controls that relate to the specific question.
Non-question Items
Non-question
items include: introduction pages, seen at the start of a test; information
pages, which may be seen by candidates during a test; and finish pages, seen
by the candidate after they have completed their tests and submitted their
responses. Non-question items will not have a mark assigned to them.
Typically non-question items may be visible outside of a timed section of a
test.
Non-scored Questions
A non-scored
item is a question (of any type) which has been included in a test to gather
performance data, but does not impact the score or result of the test for the
candidate. For example, it is a common practice to pilot new questions embedded
within a live test, but where the scores on the pilot items do not count toward
the candidate’s score.
Numeric Entry Question Type
A Numeric Entry question allows candidates to enter only numbers as the
response to a question. This can either be an exact value, or a value within a
range set by the item author. Thought should be given to any symbols expected
with the answer, such as $, £ or €, and, for international delivery, to the
use of commas or decimal points. Classed as a Standard Question type,
auto-marking is possible with this question.
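Range marking with basic symbol and separator handling might be sketched as
below; the cleaning rules are deliberately simplistic, and locale handling in
real systems is more involved.

```python
# Sketch of numeric-entry auto-marking: strip currency symbols and thousands
# separators, then accept any value within the author-set range.
def mark_numeric(raw: str, low: float, high: float) -> bool:
    cleaned = raw.strip().replace(",", "").lstrip("$£€")
    try:
        value = float(cleaned)
    except ValueError:
        return False                 # non-numeric response scores no marks
    return low <= value <= high

print(mark_numeric("£1,250.00", low=1200, high=1300))   # True
```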
Objective Question
A test item
in which the response is evaluated against objective criteria. This might be a
single simple response, such as in a multiple choice item where the criterion
is whether the student has made a correct selection. It might be that the
student’s response has certain properties which can be established objectively.
In CAA this is usually done automatically.
Offline Assessment
An on-screen
assessment which is conducted without using an internet connection during the
test (although an internet connection may well be used to deliver the test to
the client computer prior to the test starting, and to upload the candidate
responses once the test has completed).
On Programme Assessment
Assessment in the UK’s reformed apprenticeships, which takes place during the
apprenticeship learning programme, and prior to the student being signed off
by the employer for the End Point Assessment.
On-Demand Assessment
Used in public examinations: assessments where there is a high degree of
flexibility in the date and time that tests can be offered, to suit the student
or their learning programme (although this may not necessarily include all
days, times and dates). This is in contrast to many traditional assessments,
which are provided at a fixed date and time (or a limited range of dates and
times).
On-Screen Assessment
An assessment
delivered to the candidate on a computer screen, and where the candidate
provides their response on-screen (for example by typing, or clicking on the
correct response).
Online Assessment
An on-screen
assessment which relies on an internet connection during the test to download
subsequent questions and upload candidate responses. Sometimes termed
“conducting a test live over the internet”.
Online/Offline assessment application
The ability
to capture and assess evidence of learning and skills to an e-portfolio when an
internet connection is unavailable and for this evidence and assessments to be
added to the e-portfolio once an internet connection is established.
Open Source E-portfolio
This is a personal learning environment combined with social networking, which
allows an
individual to collect, reflect on and share their achievements and development
online in a space they control.
Open Standards
Shared,
freely available and internationally agreed standards for computer-based
systems, designed to enable communication and interoperability.
Optical Character Recognition
Optical
Character Recognition (OCR) is a means by which a computer can recognise text
and other marks in handwritten responses on paper that have been scanned, and
convert these to digital format. OCR is often used in assessment to
electronically mark paper responses to multiple choice tests (for example, on
bubble sheets; see also Optical Mark Reader).
Optical Mark Reader
A device that
scans paper-based tests and converts marks made by the student using pen or
pencil into digital data.
Parameterised Item Type
An item where
parts of the question are generated according to a formula embedded within the
question. This randomisation is usually undertaken dynamically, using the
formulae, as the test is generated or delivered to the student, in contrast to
cloning where item variants are generated during authoring by use of the
randomisation parameters. The term parameter refers to the variables that are
used by the formula to create the item instances. Many different types of
questions (e.g. MCQ, gap fill, etc.) may use parameterisation.
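A minimal sketch of dynamic parameterisation: each delivery draws fresh
parameter values and derives the marking key from the same formula (the
arithmetic item here is invented).

```python
# Sketch of a parameterised item: the stem and marking key are generated from
# an embedded formula each time the item is delivered. Content is invented.
import random

def generate_instance() -> dict:
    a, b = random.randint(2, 9), random.randint(2, 9)   # the parameters
    return {
        "stem": f"What is {a} x {b}?",
        "key": a * b,                                   # formula-derived key
    }

print(generate_instance())   # e.g. {'stem': 'What is 3 x 7?', 'key': 21}
```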
Personalisation
The
configuring of an IT system by students to suit their personal requirements
(e.g. selecting preferred font sizes and colours, volume levels for audio,
etc.). Also refers to more complex customisations of the user experience to
meet
personal learning needs.
Polytomous Item
An item
having more than two response categories. For example, a 5-point Likert-type
scale where items can be scored 0, 1, 2, 3 or 4, or a partial credit question
where candidates can score between 0 and 4 marks. Polytomous is a term
typically associated with IRT and other forms of latent trait analysis.
Portfolio Assessment
An assessment
where a student’s portfolio of assembled work is assessed. This type of
assessment is distinct from a test (which is administered on a single
occasion).
Practicability
The
feasibility of an assessment in terms of operational efficiency and viability.
A valid and/or reliable assessment may not be practical due to the cost or time
required to carry it out. High quality assessments are valid, reliable and
practicable, although there are typically trade-offs to be made between these
three key elements of test quality.
Proprietary Software
Software
requiring a licence (for which a charge is usually made) from a particular
company.
Psychometric Test
A test that provides one or more measures of a candidate’s personality traits.
It is most commonly used
as part of the selection process for employment, or to support careers
guidance. Typically it seeks to place candidates on a number of scales
according to their preferred behaviours or aptitudes for aspects such as
working with others, managing pressure, preferred working environment and
thinking style.
QTI Lite
A
simpler-to-implement version of the QTI technical interoperability
specification for tests and items which allows tests developed in one system to
be delivered to candidates on other systems (currently at version 1.2). See
www.imsglobal.org/question/index.cfm#version1.2lite
Question and Test Interoperability
A technical
specification for tests and items which allows tests and test items to be
authored and delivered on multiple systems interchangeably. It specifically
relates to content providers (that is, question and test authors and
publishers), developers of authoring and content management tools, assessment
delivery systems and learning systems. It is designed to facilitate
interoperability of assessment content between systems.
Rich Feedback
Feedback
which goes beyond providing the correct or model answer to an item, and a
simple explanation of why the student’s selected response was wrong. Rich
feedback is usually personalised to the candidate’s response and designed to
deal with the underlying misconception.
Select from a List Question Type
The Select
from a List question type requires candidates to select missing word(s) from a
passage of text by choosing from a drop-down list of answer options. Select
from a List items can often be used multiple times within a passage of text.
Random ordering of the answer options may be enabled to help prevent
malpractice opportunities during the assessment. Classed as a Standard Question
type, auto-marking is possible with this question.
Short Answer Question Type
A Short
Answer question will allow candidates to respond to a question with a word or
short phrase, as specified by the test author. Multiple answer variations can
be provided, and settings such as case sensitivity can be applied. Classed as a
Standard Question type, auto-marking is possible with this question, although
some responses may need to be human-marked.
Source Materials
Source
materials are files such as images, PDFs or editable material that are made
available to a candidate during an assessment (for example containing reading
material, formulae, etc.). It is useful for the candidate to be able to
interact with this material by making notes or highlighting sections.
Spreadsheet Question
The Spreadsheet question type is common in accountancy assessments. By
embedding
a spreadsheet within a question, the candidate does not need to access a third
party application. The question can be pre-populated with data to be statically
visible to the candidate or enabled for editing by the candidate. The question
can also be provided to the candidate as blank so that they may present their
data in a form deemed suitable. The spreadsheet question type allows the
formatting of text and data, and also supports the inclusion of formulae and
other common spreadsheet features. The spreadsheet question may also provide
candidates with the ability to justify their responses with annotation. Classed
as an Advanced Question type, auto-marking can be achieved by logic-based
marking rules, which can mark a fully correct answer or an answer where the
calculation is correct but the source data is incorrect.
Standard Question Types
Standard question types are typically QTI-supported items; however, definitions
vary, and in some contexts question types such as Drag and Drop are considered
Advanced. A preferred way to define a standard question type is one whose
authoring and marking do not require complex coding, and which can typically be
auto-marked by the computer, or marked by a human without the need for complex
marking tools.
Table Question
Tables are
commonly used for the presentation of data or to provide a means by which a
candidate can record their responses in table form. The table question type can
either be created by the test author and then presented as static information
for the candidate, or provided as a partially completed table for editing by
the candidate. The question type could also be provided as a response option
alongside other options such as graphs and charts. The table question may also
feature formatting of text and alignment of cells. Classed as a Standard
Question type, auto-marking is possible with this question type.
Test Package
The electronic package of files in an e-assessment system which includes the
test
content as well as any embedded applications, resources, and sometimes the test
player, which is delivered to the Client PC for the student to undertake. The
test package may have travelled from a central server at the Awarding
Body/Certification Agency, to a local server in the test centre and then to the
Client PC, and will be subject to considerable file and channel security in the
case of a high stakes assessment.
Test Player or Test Driver
A piece of
software which resides on the client PC and “runs” the test content package,
i.e. displays the questions on-screen, collects the student responses, controls
the exam clock, and access to other resources, etc. Some test players are
considerably more complex than others. For example, a test player for MCQ items
may be little more than some scripts embedded in standard HTML pages, whereas
the test player for innovative items or a sophisticated test made up of
simulations and embedded applications would be a substantial computer
application in itself. Sometimes test players are pre-installed on client PCs
and sometimes they are delivered to the client PC along with the test content
itself.
Video Capture Question Type
The video
capture question type enables the capture of audio and video. Associated more
with evidence-based assessment, files can either be recorded directly into the
assessment or attached as evidence. Classed as an Advanced Question Type,
marking would typically be undertaken on-screen by a human marker.
Web-Based Assessment
An assessment
delivered from a server via the Internet or an Intranet (such as a centre or
education authority Intranet) and where candidates access the assessments using
a standard browser. If Internet-based, the assessments could be online with
candidate responses delivered to the server in real time for automatic marking
and immediate feedback.