System Pilot
Pilot means: a system for evaluating new procedures for handling data, in which a representative sample of the data to be handled is processed.
A pilot is a way of testing a theoretical model on a small scale, in order to discover potential problems that would otherwise not be detected until full-scale deployment. If these problems are not detected in time, it can cost a great deal of money and time to introduce changes once the solution has been deployed to more schools. Pilots are very different from the first stage of a progressive deployment and demand a specific type of planning.
'Piloting' an ICT project is defined as the implementation of a technology, software package, or related project on a small, controlled scale, to allow its full impact, benefits and weaknesses to be evaluated before implementation on a regional or nationwide basis.
Why are pilots important?
• Before investing in a large-scale project, testing its assumptions on a smaller scale leaves us better equipped to plan and execute the larger-scale deployment.
• We can reduce the risk of propagating mistakes by detecting errors at the pilot stage.
• Pilots can be used to assess the impact of the technology on the schools, the people, the community, and
whether equipment is used effectively by students and teachers, etc.
• It is easier to secure funding for a pilot than a large-scale deployment.
• The project team members can gain more experience before engaging in a more demanding project.
• Pilots can also be used to compare two or more similar solutions, in order to find out which one works best in the field.
Why Did We Conduct a Pilot Study?
Piloting reduces the risk of rolling out a flawed process, procedure or other solution component to broad, multi-project environments.
The idea behind a pilot is to test the solution component within a bounded and controlled environment
before the component is sanctioned for broader use.
During a pilot study, the usability of the solution component is evaluated in a near real-world project
setting.
Experience demonstrates that such a test always exposes improvement opportunities that can be exploited
to hone and refine the solution component before broader dissemination.
Develop Pilot Study Success Criteria
• Pre-analysis: final selection of the sample schools where the pilot will run; analysis of the school infrastructure that is in place and execution of any required adaptations (e.g. buildings, classroom infrastructure, specific furniture, electricity provision).
• Set-up: acquisition, transportation, installation and configuration of the equipment and/or software.
• Project presentation: several informative talks with people directly and indirectly involved like teachers,
headmasters, parents, community members and students about the objectives of the pilot, how long it will
last and how to record the experience, etc.
• User training: training users in the specific tools.
• Execution: day-to-day running of the pilot, monitoring usage and collecting the data needed for evaluation.
• End of pilot: evaluation of the results against the success criteria, and a decision on whether and how to proceed to wider deployment.
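The phases above run in a fixed order, so a team can track them as a simple ordered checklist. A minimal sketch (the phase names follow the list above; the helper function is a hypothetical illustration, not part of any standard tool):

```python
# The six pilot phases, in the order they should be completed.
PHASES = [
    "pre-analysis",
    "set-up",
    "project presentation",
    "user training",
    "execution",
    "end of pilot",
]

def next_phase(completed):
    """Return the first phase not yet completed, or None when all are done."""
    for phase in PHASES:
        if phase not in completed:
            return phase
    return None
```

For example, a team that has finished pre-analysis and set-up would see `next_phase({"pre-analysis", "set-up"})` return `"project presentation"`.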
A KM pilot can be run at several points in a programme:
During the earliest stages (Assessment or Pre-Assessment), to deliver proof of concept. At this stage you are not answering "How does KM work for us?", but rather "Would KM work for us at all?"
During the Selection phase, to test specific KM tools and techniques and answer the question "Would this KM tool or process form part of our KM framework? If so, what modifications would it need?"
To validate and perfect the KM framework. At this stage you are asking "What changes do we need to make to our KM framework before finalising it for roll-out?"
During the planning phase of the deployment project, product management, program management, and
release management teams collaborate to create the pilot plan. The pilot plan defines the scope and
objectives of the pilot and identifies pilot participants and where the pilot will be conducted. It includes a
schedule for deploying and conducting the pilot and plans for training and communicating with pilot
participants, evaluating the pilot, identifying risks and contingencies, and other key activities that occur
during a pilot deployment.
When the pilot plan is ready for review, have project team members, necessary support personnel, and
management representatives read and approve the plan. Be sure that the supervisors of everyone directly
affected by the pilot have a chance to review the plan. For example, if the schedule allots time for a
particular user group to participate in the pilot, have the supervisor of that group review the schedule.
Defining the Pilot Scope and Objectives
The first step in planning your pilot is to define what you want to accomplish (the objectives, or goals) and
what you plan to include and exclude (the scope). Be sure to align the pilot objectives and scope with those
for the deployment project as a whole, as defined in the master project plan. Ensure that the pilot plan
includes an opportunity for the team to evaluate features identified in the project plan as high priority, to
ascertain that they successfully meet all of your business objectives.
Explicitly state the objectives of the pilot. Use the objectives to identify criteria for measuring the success
of your pilot. Many organizations have primary pilot objectives, such as:
Build user support for the Windows Server 2003 deployment project.
Meet the baseline requirements for functionality that were established in testing.
You might need to define objectives that are related to specific technologies, such as Active Directory, or
to your network infrastructure. If you plan to conduct multiple pilots, define the objectives for each.
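Stating objectives as measurable success criteria makes the end-of-pilot decision mechanical rather than subjective. A minimal sketch of the idea (the criterion names and thresholds are invented for illustration, not taken from any real pilot):

```python
# Hypothetical success criteria, each expressed as a measurable threshold.
criteria = {
    "user_satisfaction": 0.80,    # at least 80% of participants satisfied
    "uptime": 0.99,               # service availability during the pilot
    "training_completion": 0.90,  # share of users who completed training
}

def pilot_succeeded(results):
    """The pilot meets its objectives only if every criterion is met."""
    return all(results.get(name, 0.0) >= threshold
               for name, threshold in criteria.items())
```

A results dictionary collected during the pilot can then be checked directly: if any measured value falls below its threshold, the pilot has not met its objectives.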
Define the scope of the pilot by clearly stating which services and features will be included and which will not. When you list the services and features you plan to include in the pilot, also state how you expect them to perform. Describe the areas of functionality that the pilot implementation affects, to what extent, and in which situations.
Do not expect to test every feature or service during the pilot. Focus on processes that present the greatest
risk and events that are most likely to occur. Prioritizing the features to be tested in the pilot is particularly
important if your team plans to conduct multiple pilots. When multiple pilots are planned, start small and
gradually increase the scope of successive pilots.
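Prioritising "greatest risk" and "most likely to occur" together amounts to ranking features by the product of the two. A small sketch of that ranking (the feature names and scores are made-up examples, not from the source):

```python
# Hypothetical candidate features, scored 1-5 for risk and likelihood.
features = [
    {"name": "domain upgrade", "risk": 5, "likelihood": 4},
    {"name": "printing",       "risk": 2, "likelihood": 5},
    {"name": "remote access",  "risk": 4, "likelihood": 2},
]

# Rank so the riskiest, most probable features are covered first.
ranked = sorted(features,
                key=lambda f: f["risk"] * f["likelihood"],
                reverse=True)
top = ranked[0]["name"]
```

With the sample scores above, "domain upgrade" (5 × 4 = 20) would be the first feature to cover in the pilot.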
If certain aspects of your design cannot be covered by the pilot, describe them. For example, if your
organization plans to upgrade your domain using your existing architecture without any restructuring, and
then restructure the architecture later, you might choose to exclude the restructuring process from the first
pilot.
Also specify the duration of the pilot, in terms of either time or the criteria to be met.
Be sure to describe how you expect to proceed after the pilot is complete. If you plan to keep some
functions in place and remove others for the full production rollout, identify the features that will be
removed. For example, if you are redesigning your namespace, you might want the option of changing it
after the pilot concludes. Indicate how to back out features that will be removed.
Pilot selection
A pilot project should be selected around an area of business need. The business need leads the way; the
KM pilot provides one or more possible solutions which can be tested. These are some of the areas where
you might consider suggesting a knowledge management pilot.
If there is a business-critical activity that is new to the organization, then rapid learning will deliver business benefits. If it is new to only one part of the organization, then transferring learning from where it has been done before will give huge benefits.
If there is a repetitive activity, where continuous improvement is needed, then knowledge management can help drive down the learning curve.
If there is an activity carried out in several locations, where performance levels vary, then knowledge management can help transfer knowledge from the good performers to improve the poor performers.
Finally, if there is an area of the business which is stuck due to a lack of knowledge, then knowledge management can help develop the knowledge needed to get unstuck.
Pilot Ranking
If you have a selection of pilot areas, you need some way of ranking these opportunities so that you select the best one. It is very unlikely that you will be constrained by a lack of opportunity; it is far more likely that you will be constrained by a lack of resources to deliver the opportunities. So you need some form of ranking criteria.
These are some suggested criteria which we often use:
o If the project is successful, can we measure the difference, or the value generated?
o Is there strong support from management?
o If we create knowledge, is it purely for the pilot team or can others use it?
o Finally, can we practically do it in the timeframe with the resources available?
You can imagine that, if you can answer yes to all of these questions, you have chosen a good pilot project.
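The four yes/no criteria above can be turned into a simple scoring matrix for comparing candidate pilot areas. A minimal sketch (the candidate names and answers are invented for illustration):

```python
# The four ranking criteria from the list above, as yes/no questions.
CRITERIA = ["measurable_value", "management_support",
            "reusable_knowledge", "feasible_in_timeframe"]

def score(answers):
    """One point per criterion answered 'yes'."""
    return sum(1 for c in CRITERIA if answers.get(c, False))

# Hypothetical candidate pilot areas and their answers.
candidates = {
    "new_market_entry": {"measurable_value": True, "management_support": True,
                         "reusable_knowledge": True, "feasible_in_timeframe": True},
    "legacy_archive":   {"measurable_value": False, "management_support": True,
                         "reusable_knowledge": False, "feasible_in_timeframe": True},
}

best = max(candidates, key=lambda name: score(candidates[name]))
```

A candidate answering yes to all four questions scores 4 and ranks first, matching the guidance that such a candidate makes a good pilot project.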
Organization of the Pilot Project
The pilot project should be owned by the business, with KM providing support rather than leadership.
There are three main roles in the pilot project:
1. A business sponsor, who provides resources and agrees the goals
2. A leader for the pilot project, who should be someone from the relevant business unit or function
3. A knowledge management specialist, who supports the pilot team rather than leading it
Once you have decided on a pilot opportunity, then you need to think about the different phases of the pilot
project.
1. Initially you need to raise awareness in the target area, and may need to do some “selling” of the concept
to get people on board.
2. Then you need to scope the project to determine what time and resources are needed.
3. You need to tailor a local knowledge management system (a combination of roles, technologies, processes, activities and governance) that will fit the working patterns of the project team.
4. You need to embed the knowledge management processes and activities into the process of the business.
Activities related to pilot projects can be divided into three distinct phases:
Preliminary -
Define the purpose, goals, objectives, and scope of the pilot/proof of concept demonstration project
Establish the success criteria for the pilot, with input from all stakeholders, technical staff, records
management staff, and users
Determine whether preliminary decisions and assumptions made regarding hardware and software
performance, as well as service level required by technical staff, were accurate
Assess hardware and software, system and database design, and procedures (for training,
scheduling, system management, and maintenance)
Test product(s) in dissimilar locations (e.g., in terms of RM and IT support delivery) for
functionality, usability, and benefits derived from using ERM
An agency should acknowledge that some prudent risk-taking is necessary when it comes to adopting new technology and changing business processes. To minimize the risks associated with a pilot launch, the project team should:
Involve and continually encourage pilot project participants to use the system
Expand the pilot through incremental rollout to other areas of the agency and inclusion of other
record formats
Ensure that the pilot's requirements are measurable and clearly understood by participants.
Certain critical decisions need to be made and documented before the pilot begins. This can only be
accomplished by reviewing similar projects, determining whether any additional data is required before
proceeding, and considering which performance data need to be collected through the pilot to enable
meaningful evaluation.
A pilot monitoring system that consists of service level requirements for the pilot (e.g., data load, update, refresh) and a problem log to note any disruptions in service that occur during the pilot, along with what was done to address each situation.
Availability of analysts to identify and test potential business process improvements and measure
their impact on the agency as well as budget analysts to accurately assess pilot costs and adjust
predicted estimates for full-scale implementation.
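The problem log described above needs only three things per entry: when the disruption happened, what it was, and what was done about it. A minimal sketch (the function name and log format are assumptions for illustration):

```python
from datetime import datetime

# Each disruption is recorded with a timestamp and its resolution.
problem_log = []

def log_disruption(description, resolution):
    """Append one disruption record to the pilot's problem log."""
    problem_log.append({
        "when": datetime.now().isoformat(timespec="seconds"),
        "description": description,
        "resolution": resolution,
    })

log_disruption("nightly data load failed",
               "re-ran load after freeing disk space")
```

At the end of the pilot, the accumulated log feeds directly into the evaluation: patterns in the disruptions indicate where service level requirements were unrealistic.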
Training is essential for all involved in the pilot project. You may also need to reinforce agency staff's understanding of basic records management.
Evaluation is perhaps the most important part of the pilot project. A carefully constructed pilot project will
make provision for objective analysis of the results and an assessment as to how to proceed with full
deployment. The evaluation team for one agency pilot project identified five categories of performance
measures:
Installation: Time to install on the network, test, and install on user workstations
Training: Ready availability of training; keeping users well informed about training opportunities;
providing assistance in registering for training; conducting well-organized and understandable
training sessions; follow-up after training
Usage: Streamlined procedures and the use of templates; meetings to increase comfort levels of
users and to develop work-specific file plans; application effectiveness/user satisfaction;
privacy/security issues adequately addressed
The mechanisms designed into the project to monitor the progress of the pilot will inform the evaluation.
These include:
Communications/knowledge transfer mechanisms that you have set up for your pilot project,
serving as a source for valuable feedback necessary for adequate analysis.
Minutes of telephone and Web-based conferences with pilot participants, as well as technical team
meetings, providing additional input for the evaluation.
A formal approach to quantitative and qualitative analysis of the pilot project should be built into the pilot
project plan. The methodologies employed can include a mix of surveys and interviews with participants
conducted periodically, including:
An initial baseline analysis will help you to understand the concerns of participants, giving you an
opportunity to address them through pilot trainings and any communications mechanisms you
established for the pilot.
Interim assessments can evaluate the effectiveness of particular aspects of the pilot (e.g., training
workshops). These can gauge changes in usage of the system (increasingly frequent usage with less
time required per session) and user satisfaction (as the pilot team responds to requests from
participants to modify the system/procedures).
A final evaluation that demonstrates the effects on the business process and indicates changes to be made before the project is deployed agency-wide.
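The interim-assessment signal described above (increasingly frequent usage with less time required per session) can be checked directly from session statistics. A sketch with made-up weekly figures (the numbers are illustrative only):

```python
# Hypothetical weekly usage figures collected during the pilot.
weeks = [
    {"sessions": 12, "avg_minutes": 25.0},
    {"sessions": 19, "avg_minutes": 18.5},
    {"sessions": 27, "avg_minutes": 14.0},
]

# Usage should grow week on week while average session length falls.
more_frequent = all(b["sessions"] > a["sessions"]
                    for a, b in zip(weeks, weeks[1:]))
less_time = all(b["avg_minutes"] < a["avg_minutes"]
                for a, b in zip(weeks, weeks[1:]))
adoption_improving = more_frequent and less_time
```

With the sample figures, both trends hold, which is the pattern an interim assessment would take as evidence of growing user adoption.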