Voice Controlled Intelligent Wheelchair: October 2007
Takeshi Saitoh
Kyushu Institute of Technology
Abstract: To assist physically handicapped persons, we developed a voice controlled wheelchair. The user controls the wheelchair with voice commands, such as "susume" ("run forward") in Japanese. A grammar-based recognition parser named "Julian" is used in our system. Three types of commands are provided: the basic reaction command, the short moving reaction command, and the verification command. We tested speech recognition with Julian and obtained successful recognition rates of 98.3% for the movement commands and 97.0% for the verification commands. A running experiment with three persons was carried out in a campus room, and the utility of our system is shown.
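Julian constrains recognition with a hand-written grammar rather than a free dictation model. The paper does not include its grammar files, but a Julian/Julius command grammar is conventionally split into a .grammar file (sentence rules) and a .voca file (words with phoneme sequences). A purely hypothetical fragment for two of the commands might look like this (file names and phoneme spellings are illustrative, not the authors' actual grammar):

```
# command.grammar (hypothetical): silence, one command word, silence
S       : NS_B COMMAND NS_E
COMMAND : MOVE

# command.voca (hypothetical): candidate words with phoneme sequences
% MOVE
susume   s u s u m e
tomare   t o m a r e
% NS_B
<s>      silB
% NS_E
</s>     silE
```

Restricting the search space to such a small command grammar is what makes the high recognition rates reported below plausible even in a noisy environment.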
PR0001/07/0000-0336 ¥400 © 2007 SICE
Table 1 Voice command and reaction.

  no.  command                  reaction
  1    tomare                   stop
  2    susume                   run forward
  3    sagare                   run backward
  4    migi                     turn right / rotate right
  5    hidari                   turn left / rotate left
  6    sukoshi-susume           run forward about 30 cm
  7    sukoshi-sagare           run backward about 30 cm
  8    sukoshi-migi             rotate right about 30 degrees
  9    sukoshi-hidari           rotate left about 30 degrees
  A    OK / yes                 acceptance command
  B    torikeshi / no / cancel  rejection command

[Figure: flow of voice control. START -> input reaction command -> recognition (reaction command) -> show result word. A stop command reacts immediately; otherwise, if there is no input for 3 seconds, or a verification command is recognized and confirmed OK, the reaction is executed; a rejection returns to command input.]
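The confirmation flow shown above (recognize a reaction command, display it, then execute on acceptance, on a 3-second silence, or immediately for the stop command) can be sketched as a small decision routine. This is an illustrative reconstruction, not the authors' code; command names follow Table 1.

```python
# Illustrative sketch of the command/verification flow (not the authors' code).
# A recognized reaction command executes immediately if it is "tomare" (stop);
# otherwise it waits for a verification command or a 3-second silence.

STOP = "tomare"
ACCEPT = {"ok", "yes", None}          # None models "no input for 3 seconds"
REJECT = {"torikeshi", "no", "cancel"}

def decide(reaction_cmd, verification_cmd=None):
    """Return the command to execute, or None if the user cancelled."""
    if reaction_cmd == STOP:
        return reaction_cmd           # stop acts without verification
    if verification_cmd in ACCEPT:
        return reaction_cmd           # accepted explicitly, or 3 s timeout
    return None                       # rejected or unrecognized: back to input

print(decide("tomare", "no"))         # tomare (stop is immediate)
print(decide("susume", None))         # susume (timeout counts as acceptance)
print(decide("migi", "torikeshi"))    # None (cancelled)
```

Treating silence as acceptance keeps hands-free operation possible, while the explicit rejection words give the user a way to recover from misrecognition.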
Fig. 4 Main window.

4. Quit button: When this button is pushed, the control system quits.
5. Control buttons: When one of these buttons is pushed, the wheelchair moves; that is, the user can control the system without voice commands. Furthermore, the present mode is highlighted.

Fig. 5 Mode illustrations: 1 tomare, 2 susume, 3 sagare, 4 migi, 5 hidari, 6 sukoshi-susume, 7 sukoshi-sagare, 8 sukoshi-migi, 9 sukoshi-hidari.

3. EXPERIMENT

3.1 Recognition experiment
To evaluate the recognition performance of Julian, we carried out a speech recognition test with 15 students. The target words were the nine reaction commands and the five verification commands shown in table 1. The experiment took place in a laboratory room, with voices of other people present in the recording environment. As a result, we obtained successful recognition rates of 98.3% for the reaction commands and 97.0% for the verification commands.

3.2 Stopping experiment
An action of this system executes until the next command is given; for example, the wheelchair goes straight until a stop or turn command is input. We therefore carried out the following three experiments to verify the operation of our system. An overview of these experiments is shown in figure 6.

ex.1: While running straight, the user inputs the stop command so that the wheelchair stops at the target position.
ex.2: The user inputs the stop command at the moment the wheelchair reaches the target position while running straight.
ex.3: The user makes the wheelchair go straight and immediately inputs the stop command.

Fig. 6 Overview of stopping test.

The stopping experiment was carried out with five persons, each tested five times, and the average and standard deviation were calculated. The experimental results are shown in figure 7.

For ex.1, the average error was 11.7 cm; moreover, we found that the error decreases as the number of trials increases.

From the results of ex.2 and ex.3, a braking distance of about 2 m occurred between the input of the stop command and the actual stop. The reason is that about two seconds are necessary for the recognition processing and the display of the result; the running speed is 1.8 km/h, and our system runs about 2 m in two seconds. This distance follows from the specifications of the voice controlled wheelchair, though the braking distance can be expected to decrease with a better-performing laptop and a lower running speed. For comparison, the braking distance was about 50 cm when button operation was used instead of voice input.

3.3 Running experiment in the corridor
The next experiment is a running experiment, carried out with the same five persons as the previous experiment. The running place was a corridor on campus about 2 m wide, and one obstacle was placed in the corridor. We set two courses, A and B, as shown in figure 8. The total distances of courses A and B were about 16 m and 13 m, respectively.

Fig. 8 Test courses in the corridor.

Table 2 Experimental results of running experiment in the corridor.
(a) Course A
  person   running time [s]  basic commands  short moving commands
  B        177.43            16              8
  C        158.68            17              6
  D        185.46            9               15
  E        156.33            14              10
  average  168.30            15.6            9.4

The running time, the number of basic reaction commands, and the number of short moving reaction commands are shown in table 2. Because course B requires more short-distance maneuvers than course A, more short moving reaction commands were input on course B. Although person E is one of the authors and controlled the system with practiced operation, every person finished in almost the same time.

3.4 Running experiment in the room
The next experiment was carried out with three persons. The running place was a room on campus. We set a course whose start point is near the door and whose destination is one of the seats. The distance between the desks was about 160 cm, and the width of the wheelchair was about 55 cm, so this was a running experiment on a narrow course. To evaluate the performance of our system, we tested not only voice operation but also key operation on the laptop. The running time, the running distance, and the number of reaction commands are shown in table 3. Running scenes of person C are shown in figure 10.

Unlike the previous experiments in the corridor, a difference in running time between users occurred. Person C is one of the authors, and he controlled the system with practiced operation. With key operation, on the other hand, the running time was almost the same for every person. Though there were individual differences, we confirmed that our system can run even through a narrow space such as the room.
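The "narrow course" claim above can be checked with the paper's own figures: a gap of about 160 cm between desks and a wheelchair about 55 cm wide leave roughly 52 cm of clearance per side when driving down the centre. A minimal sketch of that arithmetic:

```python
# Clearance on the narrow course in Sec. 3.4, using the figures from the paper:
# desks about 160 cm apart, wheelchair about 55 cm wide.
gap_cm = 160
wheelchair_cm = 55
clearance_per_side = (gap_cm - wheelchair_cm) / 2  # if centred in the gap
print(clearance_per_side)  # 52.5
```

With a braking distance of up to 2 m under voice control, a half-metre lateral margin is what makes careful short moving commands (sukoshi-*) necessary in the room.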
Fig. 7 Result errorbars of stopping experiments.

4. CONCLUSION
This paper developed a voice controlled wheelchair. Three types of commands are provided in our system: the basic reaction command, the short moving reaction command, and the verification command. Speech recognition uses the open-source software Julian. We obtained successful recognition rates of 98.3% and 97.0% for the nine reaction commands and the five verification commands, respectively.
[Figure: running courses of persons A, B, and C in the room, from the start point near the door to the goal; axes x [cm] and y [cm].]