Company name: S.A.S
Contact name: diego.tafur@pearson.com
Contact number: 3158520718
1) Photocopy of the citizenship card (cédula de ciudadanía) of the Legal Representative
(the document which proves the legal status of the bidding entity or individual)
2) Photocopy of RUT (Registro Único Tributario)
3) Certificado de Existencia y Representación Legal (issued within the last 30 days of the
submission date)
4) Financial statements: Balance Sheet and Income Statement as of 31st December 2020

To provide proof of the technical capacity, the applicant must submit supporting
documentation, appending the following:

5) List of work team members who would be assigned to the service for this contract and
their role
6) Brief CV with the information directly associated to the service to be provided
7) Three (3) business references, preferably related to services like the ones specified in
this ITT
___________________________________
Instructions
1) Provide Company Name and Contact details above.
2) Complete Part 1 (Supplier Response) ensuring all answers are inserted in the space below
each section of the British Council requirement / question. Note: Any alteration to a
question will invalidate your response to that question and a mark of zero will be applied.
1.1 Responses will be scored according to the methodology as set out in Evaluation Criteria section of the
tender document.
SV01 10% Supplier Note: Please refer to Procurement Policy Note (PPN) 06/20 before completing this
criterion. PPN 06/20 Social Value (Maximum word count 750 Words)
Pearson has increased access to learning for underserved groups through new and existing
products and partnerships, identifying strategies to overcome barriers. These groups include, but
are not limited to, women, racial minorities, low-income groups and people with disabilities.
We have strengthened existing processes and created new processes, Editorial Policy, and
partnerships to eliminate bias and represent the consumers we serve, including on the basis of
race, ethnicity and gender, in our products and through our content providers. In Pearson
LATAM we have materials such as Our Stories, a series developed around Diversity and
Inclusion, whose characters represent all ethnic groups and physical abilities.
We believe in the power of learning and want to help everyone get access to quality lifelong
learning. That is why we are contributing to UN Sustainable Development Goal (SDG) 4, which
calls for inclusive education and lifelong opportunities for all by 2030. Our Stories is developed
with dyslexic students in mind, in both content and design, so that dyslexic students can use
the product. We also provide Wizard books in braille and have platforms that are prepared to
deliver content for people with disabilities.
• How can this contract be an opportunity for your organisation to contribute to the
improvement of educational or any other conditions of underage students in
vulnerable communities?
We believe it’s our responsibility to help people overcome barriers to learning, whether they
are held back by health challenges or facing socio-economic hurdles. We focus on
under-represented groups including women, minorities, low-income families, and people with
disabilities. These are important social issues for us, and we know that many people within
these groups would benefit from additional learning and employability skills.
The best way for us to leave a lasting positive impact is to leverage the strengths of our
organization to help underserved people access opportunities to learn. We have many
products and services that help people access quality education and develop essential skills for
employment.
Supplier Response:
1. Test Features
Introduction
Level Test (Entry Test): An overview
The Pearson English Level Test is an online adaptive test of English language proficiency
developed to provide teachers with information that will help them stream students and
place them in the right class.
The test measures a student’s English language competency and reports a CEFR score
from pre-A1 to C2, together with a GSE score range between 10 and 90. It also presents a
high-level view of a student’s performance across multiple skills (Reading, Writing,
Listening, Speaking). There are two versions of the test – one that tests writing, reading,
and listening (together with the enabling skills Grammar and Vocabulary) and another that
tests the same skills plus Speaking.
The score report provides the overall test result, with high-level skill ratings and
performance summaries. Reporting is designed for teachers to place prospective students
into classes, and results can be reported at both the group and individual level.
The test is delivered through the proficiency assessment portal, Pearson English Test Hub,
which also stores and displays the results of the test.
Who is it for?
The test is designed for students who are 14 or older. It is not Junior or Primary focussed or
designed to assess English for specific purposes (e.g., business). It can be used before any
adult or upper secondary course.
Some of the questions are integrated skills questions, which test more than one skill at the
same time. Using integrated skills questions means that Level Test is a better test of a learner’s English. In
real life and in the classroom, learners use more than one skill to complete communicative
tasks. To order something in a restaurant we need to listen and speak, to take notes in a
classroom we need to listen and write. Integrated skills questions test how well learners can use
the skills they have learnt and practised in the classroom and used in real life.
Test design
Level Test is a computer-based test normally taken on school premises; however, it can be
taken at home if the test taker has access to a computer with headset and microphone (if
taking the 4 skills version). Each administration takes 20-30 minutes, and the results are
available within minutes.
Level Test employs an adaptive method to select questions to present to the student. The
test uses an adaptive algorithm which takes a student’s answers to a previous question to
select the most suitable question to present next. It selects these items from a large item
bank making each student’s experience different. The adaptive nature of the test allows
Level Test to estimate a student’s English proficiency quickly and accurately.
The content of the test is very closely linked to the Pearson GSE scale, which guides
content production at every level. To aid with this linking, each question is tagged with a
GSE score and a GSE can-do statement.
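In outline, the adaptive selection loop described above can be sketched as follows. This is a simplified illustration only: the actual Pearson algorithm and item bank are proprietary, and the item tags, step size and update rule here are assumptions.

```python
# Hypothetical item bank: each item tagged with a GSE difficulty (10-90),
# mirroring the GSE score tag each Level Test question carries.
ITEM_BANK = [{"id": i, "gse": gse} for i, gse in enumerate(range(10, 91, 2))]

def next_item(ability, answered):
    """Select the unanswered item whose GSE difficulty is closest to the
    current ability estimate (a stand-in for the adaptive algorithm)."""
    candidates = [it for it in ITEM_BANK if it["id"] not in answered]
    return min(candidates, key=lambda it: abs(it["gse"] - ability))

def update_ability(ability, correct, step=8):
    """Move the estimate up after a correct answer, down after an
    incorrect one, clamped to the 10-90 GSE scale."""
    return min(90, max(10, ability + (step if correct else -step)))
```

Each answer moves the ability estimate and changes which question is presented next, which is why different students see different questions drawn from the same bank.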
Test development
The questions in Level Test are based on those developed for the Benchmark Test and
have been developed by international teams of experienced writers. Teams were based in
the UK, Australia, the USA, and Hong Kong.
Once written, all questions are reviewed by the teams in the different countries. Comments
and suggestions for improvement are stored with the test questions on a secure database.
The questions then go through a further review by an expert panel and decisions are made
on the quality of the questions, which to keep and which to reject. All questions are then
thoroughly checked by Pearson staff and images and high-quality recordings are added to
complete the questions before they go forward to be calibrated in a large-scale field test.
After the field testing, further checks are made on item quality based on the measurement
characteristics of the questions. Questions are eliminated from the item pool if they are too
easy or too difficult, if weaker learners get them right but stronger learners get them wrong,
or if they show any bias. These checks then result in a bank of the best quality questions.
Questions are selected from this bank to go into the final tests.
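As an illustration, the elimination criteria above (items that are too easy, too difficult, or that weaker learners get right while stronger learners get wrong) might be expressed as a filter like this. The field names and thresholds are assumptions, not Pearson's actual values.

```python
def screen_items(items, p_min=0.2, p_max=0.9, disc_min=0.2):
    """Keep only items whose proportion-correct 'p' is neither too low
    nor too high, and whose item-total discrimination 'disc' is positive
    enough (weaker learners should not outscore stronger ones)."""
    return [it for it in items
            if p_min <= it["p"] <= p_max and it["disc"] >= disc_min]

# Example: the second item is too easy, the third discriminates negatively.
pool = [
    {"id": "q1", "p": 0.55, "disc": 0.45},
    {"id": "q2", "p": 0.97, "disc": 0.40},
    {"id": "q3", "p": 0.50, "disc": -0.10},
]
```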
In the following table, we define how the Global Scale of English is related to the CEFR
levels. To give an impression of what the levels mean, i.e., what learners at particular levels
can do, we use the summary descriptors published in the CEFR (Council of Europe, 2001,
p. 24) where they exist.
Global Scale of English | CEFR band | Description

GSE 10-21 | <A1
This level of proficiency is likened to a tourist who may know some individual words but does
not have enough control of the language to produce full sentences and mostly communicates
with words or very basic phrases. The words they do know may carry a lot of communicative
meaning or be effective when used with hand gestures or when the context is very clear (e.g.,
pointing to an object in a shop).

GSE 22-29 | A1
Can understand and use familiar everyday expressions and very basic phrases aimed at the
satisfaction of needs of a concrete type. Can introduce him/herself and others and can ask
and answer questions about personal details such as where he/she lives, people he/she
knows and things he/she has. Can interact in a simple way provided the other person talks
slowly and clearly and is prepared to help.

GSE 30-35 and 36-42 | A2 and A2+
Can understand sentences and frequently used expressions related to areas of most
immediate relevance (e.g., very basic personal and family information, shopping, local
geography, employment). Can communicate in simple and routine tasks requiring a simple
and direct exchange of information on familiar and routine matters. Can describe in simple
terms aspects of his/her background, immediate environment and matters in areas of
immediate need.

GSE 43-50 and 51-58 | B1 and B1+
Can understand the main points of clear standard input on familiar matters regularly
encountered in work, school, leisure, etc. Can deal with most situations likely to arise whilst
travelling in an area where the language is spoken. Can produce simple connected text on
topics which are familiar or of personal interest. Can describe experiences and events,
dreams, hopes, and ambitions and briefly give reasons and explanations for opinions and
plans.

GSE 59-66 and 67-75 | B2 and B2+
Can understand the main ideas of complex text on both concrete and abstract topics,
including technical discussions in his/her field of specialisation. Can interact with a degree of
fluency and spontaneity that makes regular interaction with native speakers quite possible
without strain for either party. Can produce clear, detailed text on a wide range of subjects
and explain a viewpoint on a topical issue giving the advantages and disadvantages of
various options.

GSE 76-84 | C1
Can understand a wide range of demanding, longer texts, and recognise implicit meaning.
Can express him/herself fluently and spontaneously without much obvious searching for
expressions.
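The table can be read directly as a lookup from GSE score to CEFR band. A minimal sketch follows; the band boundaries are those of the published GSE alignment, and the C2 band (85-90) is not shown in the table above but follows from the 10-90 scale described earlier.

```python
# GSE-to-CEFR band boundaries, following the alignment table above.
GSE_BANDS = [
    (10, 21, "<A1"), (22, 29, "A1"), (30, 35, "A2"), (36, 42, "A2+"),
    (43, 50, "B1"), (51, 58, "B1+"), (59, 66, "B2"), (67, 75, "B2+"),
    (76, 84, "C1"), (85, 90, "C2"),
]

def cefr_band(gse_score):
    """Return the CEFR band for a GSE score on the 10-90 scale."""
    for low, high, band in GSE_BANDS:
        if low <= gse_score <= high:
            return band
    raise ValueError(f"GSE score {gse_score} is outside the 10-90 scale")
```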
Test Questions
What kinds of questions are in the test and what do they measure?
The test has several different question types. This gives learners a chance to demonstrate
their English skills in different ways. There are questions where learners choose the correct
option or where they write the answer to an open question. There are questions where the
learner repeats or copies what has been said as well as questions where learners describe
something.
Because Level Test is adaptive, different students will see different questions and may not
be presented with all the question types described below.
The 3-skill version of the test takes 20 minutes to complete, and the 4-skill version takes
30 minutes.
Reporting
The test reports half CEFR bands from <A1 to C2 with the corresponding GSE range.
Sub-skills are reported using a short performance summary and a 3-step rating system (above,
at, or below level) to provide a high-level view of performance and indicate stronger or
weaker skill areas. This skills profile can be used by teachers to help tailor the course
content and focus on the needs of their students.
Skill scores (listening, reading, speaking, writing) are based on test items that assess those
skills, either as a single skill or integrated skill tasks.
The Level Test result can also be used to assign the appropriate Benchmark Test once
students have been assigned to a class.
Level Test results can be provided at the individual or group level, depending on the needs
of the school and teacher.
The Pearson English Benchmark Test is a course agnostic online assessment of English
language proficiency for individuals and groups of students. It can be used as a one-off
test to assess Reading, Writing, Speaking and Listening skills or to measure progress over
time. The test measures a student’s English language competency and reports a CEFR
score from pre-A1 to C2, together with a GSE score between 10 and 90.
The score report provides the overall test result, with skill scores, performance summaries,
and recommendations for future study. Results can be reported at both the group and
individual level. The assessment is delivered digitally through the Pearson English Test
Hub, which also stores and displays the results of the test.
This information allows the teacher to measure student progression and make decisions
about adapting learning material to suit the level of learners, as well as providing extension
activities where needed. It also allows the teacher to tailor the learning program to particular
learners, giving extra support and input where required.
Who is it for?
The test is designed for secondary and adult learners who are aged 14 or older. Benchmark
Test can be used alongside any adult or upper secondary course and is intended to be
used with comprehensive integrated skills courses, not short or partial courses.
Several of the questions on the test are integrated skills questions. These questions test more
than one skill at the same time. Using integrated skills questions means that Benchmark
Test is a better test of a learner’s English. In real life and in the classroom, learners use
more than one skill to complete communicative tasks. To order something in a restaurant
we need to listen and speak; to take notes in a classroom we need to listen and write.
Integrated skills questions test how well learners can use the skills they have learnt and
practised in the classroom and used in real life.
Test Design
Benchmark Test is designed specifically to measure progress in language proficiency. The test
construct is based on actionable learner outcomes as embodied in can-do statements in the
Common European Framework of Reference for Languages and the Global Scale of English General learning
objectives. It is built on the body of applied linguistic research of the last 50 years which
prioritises the ability to use language in context rather than just knowledge of the language. To
use language effectively it is assumed that learners require certain knowledge of the systems of
language such as grammar, vocabulary, and phonemic systems.
The test explicitly measures language as a unitary trait. It also provides a breakdown of
sub-scores for the convenience of users to provide some insights into the relative strengths
and weaknesses of students.
The test suite contains 4 tests: Test A, Test B1, Test B2 and Test C. Test A assesses at
CEFR A1 and A2, Tests B1 and B2 at the corresponding B levels, and Test C at C1 and C2.
The tests use fixed, linear forms. The total time of the test is approximately 45 minutes,
although this may vary slightly according to level.
Each test has three parts:
• Part 1 tests Grammar, Vocabulary and Reading. There are 8 item types in this section.
• Part 2 assesses Speaking and Listening. There are 7 item types in this section.
• Part 3 assesses Writing. There are two item types in this section.
Test Development
The questions in Benchmark Test have been developed by international teams of writers
who are very experienced in writing assessment questions. Teams are based in the UK,
Australia, the USA, and Hong Kong. All questions have been tagged with a Global Scale of
English (GSE) level and linked to a ‘can do’ statement.
Once written, all questions are reviewed by the teams in the different countries. Comments
and suggestions for improvement are stored with the test questions on a secure database.
The questions then go through a further review by an expert panel and decisions are made
on the quality of the questions, which to keep and which to reject. All questions are then
thoroughly checked by Pearson staff and images and high-quality recordings are added to
complete the questions before they go forward to be calibrated in a large-scale field test.
After the field testing, further checks are made on item quality based on the measurement
characteristics of the questions. Questions are eliminated from the item pool if they are too
easy or too difficult, if weaker learners get them right but stronger learners get them wrong,
or if they show any bias. These checks then result in a bank of the best quality questions.
Questions are selected from this bank to go into the final tests.
Field Testing
As part of the test development process, a large field test, conducted in two phases, was
carried out to ascertain the appropriateness of the pool of items and to serve as a source
for constructing individual test forms which would allow reliable predictions of students’
ability in English. A portion of the data collected was transcribed and rated which was used
to train automated scoring systems.
Field test forms were created using a linking approach. That is, the forms were linked
together with sets of items that appeared on all forms. Also, during the second phase of
data collection, since most candidates took two tests, the field test forms were also linked
through candidates. Learners and L1 English speakers were recruited to participate in the
field test. A total of 13,073 tests were submitted during the two field test phases. The
demographic for Benchmark is upper secondary and young adult. Most participants were
aged 16 to 35. Participants were from 96 countries. The countries with the largest number
of participants included Saudi Arabia, Poland, Panama, Ecuador, The Netherlands,
Argentina, Brazil, Spain, Guatemala, Japan, and Thailand. As an incentive to participate,
students received a year’s free access to the Longman Dictionary of Contemporary English
Online (LDOCE). L1 English speakers were offered an Amazon voucher.
Validity Evidence
Test Reliability
Reliability is one aspect of validity - if a candidate took a test on multiple occasions, would that
person get a similar score each time? During field testing, many candidates took two tests in a
short period of time. The two tests were made up of different items. Presumably, little or no
learning occurred between these test administrations, so the correlation of the scores from these
two tests should provide a good estimate of test reliability, known as test-retest reliability. The
higher the observed correlation between the two test administrations, the more reliable the test
scores are. In the observed field test data, candidates who either did not answer a sufficient
number of items or who got extreme scores were removed before the correlation between the
two administrations was calculated.
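The test-retest estimate described above is simply the Pearson correlation between the two sittings' scores. A minimal sketch, with candidate scores invented purely for illustration:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two paired score lists, e.g. the same
    candidates' GSE scores on two parallel test administrations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented scores for six candidates across two sittings:
first = [34, 41, 50, 58, 63, 72]
second = [36, 40, 52, 55, 65, 70]
```

The closer the correlation is to 1, the more consistently the two administrations rank the same candidates, which is what test-retest reliability measures.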
Once the automated scoring system was developed, the responses from the validation set were
run through the same psychometric model to produce an Overall and six skill scores for each
candidate. Those human and machine scores were then correlated to compare how similar
those two kinds of scores are for each person. When candidates were identified as having
extreme scores (i.e., well outside the reported score range of the GSE and not well estimated),
or when they had fewer than five responses which were able to be scored in a skill area, their
scores were excluded from the analyses. This reduced the n-count for the Overall score
correlation to 288 candidates. The relationship between machine and human Overall scores was
found to be a very strong one with a correlation of .97 (see Table 1).
Table 1. Correlations between scores using machine and human scoring methods for
Overall and skill reporting areas.
Overall .97
Listening .93
Speaking .83
Reading .90
Writing .99
Grammar .97
Vocabulary .93
Machine scoring produces scores that are nearly identical to those produced by a careful
human rating process for many item types (see Figure 1).
Figure 1. Scatter plot of GSE scaled scores for validation set candidates using human and
machine scoring methods.
The performance summaries, recommendations and recommended activities form part of the
test reporting, together with overall and skill scores. Enabling skills (Vocabulary, Grammar) are
reported on in the performance summaries and recommendations where relevant but are not
given a specific GSE score. Reporting is provided at the individual and group level.
The test and reporting are course agnostic, but specific courseware activities related to the
recommendations are provided for selected ELT courseware.
Test Questions
What kinds of questions are in the test and what do they measure?
The test has several different question types. This gives learners a chance to demonstrate
their English skills in different ways. There are questions where learners choose the correct
option or where they write the answer to an open question. There are questions where the
learner repeats or copies what has been said as well as questions where learners describe
something or write a short essay. The questions are similar to the questions and tasks
learners will have done in the classroom as part of their learning and so should be familiar.
Not all question types appear at all levels.
Vocabulary questions
There are three vocabulary question types. Vocabulary is also tested as part of Describe
Image, Short Essay and Recall a Passage which are integrated skills questions.
Grammar questions
There are two grammar question types. Grammar is also tested as part of Short Essay and
Recall a passage.
Item type: Choose the Right Word or Phrase
What do the learners have to do? The learner chooses the correct word or phrase to
complete several sentences. The sentences are related by a similar theme.
What is being tested? The learner's knowledge of grammar: the range of grammatical
knowledge as well as the accuracy of grammar in a written context.

Item type: Correct the Mistake
What do the learners have to do? The learner chooses the correct word or phrase to replace
a mistake in a sentence.
What is being tested? The learner's knowledge of grammar in a written context and whether
they can choose the right grammatical form in a sentence.
Reading questions
There are three reading question types. Reading is also tested as part of Read Aloud, Recall
a Passage, and Listen and Read, which are all Integrated Skills questions.
Item type: Choose the Right Picture
What do the learners have to do? Learners read a short text and select the best picture to
match with the text.
What is being tested? The global understanding of short messages, notes and short pieces
of writing.

Item type: Short Answer Questions
What do the learners have to do? The learner reads a longer text and answers questions on
the text.
What is being tested? The reading comprehension of the learner, including specific
information included in the text.

Item type: Choose the Right Word or Phrase (gap fill)
What do the learners have to do? Learners read a short text and select the best word or
phrase to complete the text.
What is being tested? The global understanding of short messages, notes and short pieces
of writing.
Listening questions
Listening is tested as part of Listen and Repeat, Story Retell, listen to the conversation,
Listen and Write (Dictation), and Listen and Read (Hotspots) which are all Integrated Skills
questions.
Writing questions
There is one question type which tests only writing. Writing is also tested as part of Listen
and Write and Recall a passage which are Integrated Skills questions.
Item type: Short Essay
What do the learners have to do? The learner writes a short essay in response to a prompt.
For lower levels, test takers write a short description of an image.
What is being tested? Global writing skills: paragraph and sentence structure, the range and
accuracy of the language used, and the ability to structure an argument or discussion in a
written context. It tests grammar and vocabulary as an essential part of writing.
Item type: Read Aloud
What do the learners have to do? The learner reads aloud a sentence or short text.
What is being tested? Accurate pronunciation and how fluent the learner is at speaking. It
tests if the words in the text are understood and repeated accurately.

Item type: Listen and then Write (Dictation)
What do the learners have to do? The learner listens to a sentence or short text and writes
what they have heard.
What is being tested? Listening comprehension at the word and sentence level. It tests the
ability to write accurately and understand sentence structure, word order and connectors.

Item type: Listen and Repeat
What do the learners have to do? The learner listens to a sentence or short text and then
repeats it.
What is being tested? Listening comprehension at the word and sentence level. It tests
pronunciation and fluency, and whether the words heard are understood and repeated
accurately.

Item type: Read and then Write
What do the learners have to do? The learner reads a short story or short piece of factual
text. The text then disappears, and the learner must reconstruct the text.
What is being tested? Reading comprehension. It tests the ability to write accurately and
understand sentence structure, word order and connectors.

Item type: Listen and Read (Hotspots)
What do the learners have to do? The learner reads a text and at the same time listens to
the text. The learner must find the differences between the written text and the spoken text.
What is being tested? Reading and listening comprehension. It tests the ability to recognise
individual words in a text.

Item type: Story Retell
What do the learners have to do? The learner listens to a short narrative and then retells the
narrative using their own words.
What is being tested? Listening and speaking. It assesses understanding of a short
narrative.
Supplier Response Template (annex to RFP/ITT) – 26 February 2019
Item type: Listen to the Conversation
What do the learners have to do? The learner listens to a short conversation and then
answers a question about the conversation.
What is being tested? Listening comprehension. It tests the accuracy of the learner's
listening comprehension.
Sample test
Learners can take the unscored sample test to familiarise themselves with the question
types in the test. There is a sample test for every Benchmark level, as question types and
the level of difficulty vary. The sample test is approximately half the length of the full test
and contains examples of all the item types students will attempt in the scored version of
the test. Teachers should run through the sample test the first time students take a test at a
particular level.
Reporting
Group Report
REPORTS AND SCORES
The group report provides information about the set of students you have selected including
overall performance and each skill. It is generated in the Overview tab on Test Hub. The
default view shows results for all test levels, unless selected otherwise.
Note:
An asterisked overall score means that an individual skill score was flagged as NS or BL.
NA as an overall score means that there were 2 or more BL/NS scores or a combination of
NS and BL scores.
Test Scores
The Benchmark Test assesses the test-taker's performance across each skill and gives
them a level on the GSE General scale. The score means they have demonstrated that
they can perform certain tasks at this level. The number is accompanied by some
information describing what they were able to do (Performance Summary) and areas for
them to work on (Recommended Activities).
In most cases students will receive numerical scores, but in certain circumstances you may see
different results as shown below. In each case advisory notices are provided in the report.
BL - Below level: the student scored below the GSE range of the test, so they have not
received a score for that skill.

NS - Not scorable: the responses could not be scored by the technology. This can occur
because students did not answer enough questions. It is also given if their recorded spoken
answers could not be heard clearly, for example if there was background noise, or the
student mumbled, spoke too softly or loudly, or was speaking in a language other than
English.

Overall score: the overall GSE score for the test-taker, based on their performance in the
test across all the skills.

Asterisked overall score: if test-takers receive a BL or NS for one of the skills, their overall
score is asterisked to highlight that one skill score was not reported and has not been
included in the overall score calculation.

NA - Not applicable: this only applies to an overall score. If test-takers receive two or more
BL/NS scores (or a combination of BL and NS), they will see NA instead of an overall score.
This is because it is inaccurate to calculate a score from a reduced number of skill scores.
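Taken together, these reporting rules might be sketched as follows. The simple averaging of skill scores is an assumption made for illustration; the actual overall-score calculation is Pearson's own.

```python
def overall_result(skill_scores):
    """skill_scores maps each skill name to a GSE number, or to the
    flags "BL" (below level) / "NS" (not scorable)."""
    flagged = [v for v in skill_scores.values() if v in ("BL", "NS")]
    numeric = [v for v in skill_scores.values() if not isinstance(v, str)]
    if len(flagged) >= 2:
        return "NA"  # two or more BL/NS scores: no overall score
    overall = round(sum(numeric) / len(numeric))
    # Exactly one BL/NS skill: report the overall score with an asterisk.
    return f"{overall}*" if flagged else overall
```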
The Performance Summary report summarises the student’s performance relative to the
performance descriptors for each skill/proficiency area articulated in the GSE Assessment
Framework rather than on their performance on specific curriculum learning objectives.
The Suggested GSE Learning Objectives relate to aspects that may not specifically have
been tested but relate to the skill/proficiency area at the same level so may also be
beneficial to work on.
The Benchmark Test assesses a representative cross-section of skills and proficiency traits
to identify average performance and general areas of strength/weakness. It is sufficiently
robust to reliably inform teaching and planning, but if instructors want absolute GSE levels
for their test-takers then they need to enter them for longer tests which have more items at
each level and test every sub-skill.
System Requirements
To successfully use Test Hub, please ensure your devices meet the following
system requirements.
Note: Tests can only be taken on computer devices.
Bandwidth Requirements
The table below shows the recommended bandwidth when users located in the same room
are using the platform at the same time on a shared dedicated internet connection.
To avoid performance issues, check with your school's local Information Technology
(IT) support team if you are unsure of your available bandwidth.
If taking a test that assesses Speaking it is especially important to ensure the headset has a
microphone boom to reduce the impact of background noise such as other test-takers'
voices and air conditioning.
On-Site Testing
If delivered on-site, tests must be administered in a quiet, distraction-free room to help test-
takers focus. Excess noise, or the sound of instructors' and other test-takers' voices, will
spoil recorded answers so they cannot be accurately scored.
To prevent this, test-takers must sit 10ft/3m apart in each direction (minimum distance
of 6ft/2m). Desktop privacy partitions will also reduce distractions.
Avoid gyms, halls, rooms with loud air conditioning, and hard floors or high ceilings as they
can cause echoes which can reduce the clarity of recorded spoken answers. Keep
windows closed if there is loud traffic or building work next to the classroom.
Ideally, test-takers should be tested in small groups to avoid noise interference and
distractions, and to make it easier for instructors to provide them with support.
Remote Testing
If delivered remotely, tests must be taken in a quiet, distraction-free room to help test-takers
focus.
Excess noise (loud music, appliances, ambient sounds, ringtones, pets, construction tools,
etc.) or the sound of other people's voices will spoil recorded answers so they cannot be
accurately scored.
To prevent this, test-takers must choose the most appropriate time and place for the test
(e.g., night hours tend to be quieter than daytime). If at home, test-takers
must ask their relatives and/or roommates to avoid making noises that may disrupt the
testing environment.
Avoid gyms, halls, rooms with loud air conditioning, and hard floors or high ceilings as they
can cause echoes which can reduce the clarity of recorded spoken answers. Keep windows
closed if there is loud traffic or building work near the testing location.
Test-takers must be properly briefed by test administrators so that candidates follow all
good delivery practices for a successful test. This includes having an appropriate internet
connection, software, hardware, etc.
AWARENESS
During the implementation of the test, the users will receive a one-hour
awareness session in which they will receive general information
about:
NOTE: If, when they receive an invitation, it is not clear to
them what to do, they can follow this link for more
information: Receiving an Invitation.
The following diagram will be used to explain the methodology step by step with participants.
As there will be two testing moments, the data analysis will occur as follows:
Entry Test
1. Candidates take the entry test.
2. Individual PDF score reports are downloaded.
3. All candidates' results are downloaded as an Excel worksheet.
4. With all data downloaded, the existing candidate information (first names, last
names, age, gender, grade, school, district, city, etc.) can be unified with their
results (overall level and levels per skill).
5. The aforementioned data makes it possible to identify, compare, and/or contrast the
English level (overall level and levels per skill) of the test-takers alongside their
personal, group, and/or location data, among others. The following table shows
initial overall-level sample data from the entry test, which can later be analysed and
itemised into several categories:
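The unification described in step 4 can be sketched as a simple join between the candidate information and the downloaded results. This is an illustrative assumption only: the field names and the join key (`candidate_id`) are invented, and the real Excel export's columns may differ.

```python
# Hedged sketch: unify candidates' personal info with their test results
# by a shared identifier. All field names and values are illustrative.

candidates = [
    {"candidate_id": 1, "name": "Ana", "grade": "10", "city": "Bogotá"},
    {"candidate_id": 2, "name": "Luis", "grade": "11", "city": "Cali"},
]
results = [
    {"candidate_id": 1, "overall_level": "B1", "speaking": "A2"},
    {"candidate_id": 2, "overall_level": "A2", "speaking": "A2"},
]

# Index the results by candidate id, then merge each candidate record
# with its matching result record.
by_id = {r["candidate_id"]: r for r in results}
unified = [{**c, **by_id[c["candidate_id"]]} for c in candidates]

for row in unified:
    print(row["name"], row["city"], row["overall_level"])
```

In practice the same join could be done with a spreadsheet lookup or a pandas merge on the exported worksheet; the point is only that a shared identifier lets the personal, group, and location fields sit alongside the level data for later itemisation.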
Exit Test
1. Candidates take the exit test.
2. Individual PDF score reports are downloaded.
3. All candidates' results are downloaded as an Excel worksheet.
4. With all data downloaded, the existing candidate information (first names, last
names, age, gender, grade, school, district, city, etc.) can be unified with their
results (overall level/score and levels per skill).
5. The aforementioned data makes it possible to identify, compare, and/or contrast the
English level (overall level and levels per skill) of the test-takers alongside their
personal, group, and/or location data, among others. Also, as this test is more
in-depth and specific to a CEFR band, the exit test result can reveal progress in
comparison with the entry test result; of course, some very important criteria from
GSE theory and research must be met (see How long does it take to learn a
language? Insights from research on language learning,
page 9). The following graphic/table shows a sample data analysis where an entry
test result is compared to an exit test result, which can later be analysed and
itemised into several categories:
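The entry-vs-exit comparison in step 5 can be sketched as a per-candidate difference between the two testing moments, which also makes it easy to rank candidates by improvement. The names and score values below are invented for illustration; they are not real test data.

```python
# Hedged sketch: compare each candidate's entry and exit scores to
# reveal progress between the two testing moments. Data is invented.

entry = {"Ana": 42, "Luis": 38, "Sofía": 55}   # entry-test scores
exit_ = {"Ana": 50, "Luis": 41, "Sofía": 54}   # exit-test scores

# Per-candidate progress (positive = improvement).
progress = {name: exit_[name] - entry[name] for name in entry}

# Rank by improvement, largest gain first.
for name, delta in sorted(progress.items(), key=lambda x: -x[1]):
    print(f"{name}: {delta:+d}")
```

A ranking like this could support decisions such as identifying the students who demonstrate the best improvement between the two tests, subject to the GSE research criteria referenced above.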
Pearson will assign a dedicated team as follows; brief CVs with the associated
information are appended to this document:
Project Coordinator
(Cesar Hortua)
Experience - [30%]
ID % Requirement
EX01 [30%] Recent and relevant experience from both the team and other customers in supporting
and managing platforms for virtual course development, management, online material
usage, application, and testing development.
Supplier Response:
Pearson's team has extensive experience in managing different projects with different
products and tests based on customers' needs. Moreover, our professional staff are
trained in current education trends spanning different platforms, books, online material,
and tests. As we implement tests in many places around the world, our staff's
experience includes the following:
- Teleperformance: English assessment services for more than 50,000 tests per
year.
- Universidad EAN: more than 3,000 tests administered per year to students.
- Educar Futuro: diagnostic tests administered to 3,810 public-school students.
- Universidad Tecnologia de Monterrey: not only level tests but also
international certifications such as the Pearson International Certificate, which we
are offering as extra value in our proposal.
Price – [15%]
ID % Requirement
moments (entry and exit). The project will have one Project Coordinator and two
assistants who will oversee:
- Setting an implementation plan based on the dates and needs of the users.
English International Certificate. This test will be given to the students who demonstrate
the best improvement between the first and the second test.
Characteristics of the test:
Computer-based Testing
PTE General/Pearson English International Certificate
Overview
Background
The Pearson Test of English General (PTE General), soon to be known as Pearson
English International Certificate, is an assessment solution for six levels of proficiency
that assesses and accredits general English ability.
Competitive advantages
Enhanced Security
Faster results
Important Note: Failure to provide all mandatory documentation may result in your submission being
rejected.
Submission Checklist
Document Y/N
1. Confirm acceptance of the Terms and Conditions of this RFP/ITT and of Annex 1,
including any changes made via clarifications during the tender process. Y
I confirm on behalf of the supplier submitting the documents set out in the above checklist that, to the best
of our knowledge and belief, having applied all reasonable diligence and care in the preparation of our
responses, the information contained within our responses is accurate and truthful.