Development of a Modular Real-Time Shared Control System for a Smart Wheelchair
https://doi.org/10.1007/s11265-022-01828-6
Abstract
In this paper, we propose a modular navigation system that can be mounted on a regular powered wheelchair to assist disabled children and the elderly with autonomous mobility and shared-control features. The lack of independent mobility drastically affects an individual's mental and physical health, making them feel less self-reliant, especially children with cerebral palsy and limited cognitive skills. To address this problem, we propose a comparatively inexpensive and modular system that uses a stereo camera to perform tasks such as path planning, obstacle avoidance, and collision detection in environments with narrow corridors. We avoid any major changes to the wheelchair hardware, easing installation, by replacing wheel encoders with a stereo camera for visual odometry. An open-source software package, the Real-Time Appearance-Based Mapping (RTAB-Map) package, running on top of the Robot Operating System (ROS), allows us to perform visual SLAM, i.e., to map the environment and localize the wheelchair within it. Path planning is performed by the move_base package provided by ROS, which quickly and efficiently computes the path trajectory for the wheelchair. In this work, we present the design and development of the system along with its significant functionalities. Further, we report experimental results from a Gazebo simulation and real-world scenarios to prove the effectiveness of our proposed system with a compact form factor and a single stereo camera.
Keywords Assistive Technology · Human-machine Systems · SLAM · Rehabilitation · Smart Wheelchairs · Wheeled Mobility
Journal of Signal Processing Systems
of 24 sonar sensors for obstacle detection (1-meter range), a bump sensor (a simple contact switch for detecting collisions), and a laptop that processed the sensor data received through a National Instruments data acquisition card. At first glance, this system seems robust for navigating a wheelchair, but it comes with its own flaws: multiple blind spots were created by interference between the signals received by the sonar sensors, and the entire processing was done on a laptop, which is not only bulky but also consumes a lot of energy.

Kuno et al. [7] implemented a system that detects face and hand gestures to control the wheelchair, a useful feature for people with severe motor impairments. A computer with a real-time image processing board was used along with 16 ultrasonic sensors to navigate through the environment, and video cameras detected the gestures. To avoid false gesture detection, the video camera feed was combined with the ultrasonic sensor data to prevent the wheelchair from turning into an obstacle. Even though the system seems useful for patients with severe motor disabilities, it is very specific to that category of patients. Detecting face gestures has a lower success rate, since human gestures are complex and can be falsely interpreted in multiple scenarios. The whole setup, with ultrasonic sensors, a video camera, and a personal computer, is very bulky.

Comparing our work with the previously mentioned systems, our system uses a Jetson Nano for performing computer vision tasks, which is an inexpensive and low-powered solution. We make use of RealSense depth cameras, thereby eliminating the bulky and expensive Lidar or ultrasonic sensors [7] used for navigation and obstacle detection. Since our system aims to be a modular component of a powered wheelchair, it facilitates easy installation for the user. Our system also provides different modes of autonomy, which are discussed in Section 3.3.

In this work, our proposed system attempts to overcome some of the flaws present in existing systems. Towards this, a vision-based approach is used to perform obstacle detection and path planning. Our design provides shared control and is developed to be used for both pediatric and adult populations.

3 Methods

…and Lidar sensors. This will allow for easy integration and a modular system, thus streamlining the conversion of a regular powered wheelchair to a smart wheelchair.

The main hardware used is the NVIDIA Jetson Nano (https://developer.nvidia.com/embedded/jetson-nano-developer-kit), which has sufficient processing power (with a graphical processing unit) to enable real-time task planning. To eliminate the bulkiness of the system, the various active sensors required for different tasks are replaced with a stereo camera setup, thereby making it compact and inexpensive. Visual odometry plays a crucial role in our system, since mapping and localization are done using visual feedback rather than the bulky and expensive Lidar sensors or wheel encoders.

The Robot Operating System (ROS) is used as the base platform on which the other applications run. ROS is widely used in the robotics community for its flexibility and features. It provides compatibility with various hardware and software, hence providing an easy setup to interface with hardware. The proposed system, as shown in Fig. 3, is based on the ROS architecture with four nodes: (a) the interface system node, (b) the image processing node, (c) the path planning node, and (d) the motor controller system node. Each of these nodes serves a specific purpose, and some nodes rely on other nodes to carry out their tasks. The specific functionalities of each node are presented in the following subsections.
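As a rough illustration of this four-node data flow, the following pure-Python sketch (not actual ROS code; the topic names and message shapes are invented for illustration) mimics how the nodes could hand messages to one another over a publish/subscribe graph:

```python
class Bus:
    """A minimal publish/subscribe bus standing in for the ROS topic graph."""
    def __init__(self):
        self.subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, msg):
        for cb in self.subscribers.get(topic, []):
            cb(msg)

bus = Bus()
commands = []                # velocity commands received by the motor node
state = {"obstacles": []}    # latest obstacle estimate seen by the planner

# (b) Image processing node: turns camera frames into obstacle points.
bus.subscribe("/camera/frame", lambda frame: bus.publish("/obstacles", frame["points"]))

# (c) Path planning node: combines the latest obstacles with the user's goal.
bus.subscribe("/obstacles", lambda pts: state.update(obstacles=pts))
bus.subscribe("/goal", lambda goal: bus.publish(
    "/cmd_vel", {"goal": goal, "obstacles_considered": len(state["obstacles"])}))

# (d) Motor controller node: consumes velocity commands.
bus.subscribe("/cmd_vel", commands.append)

# (a) Interface system node: forwards sensing and the user's destination.
bus.publish("/camera/frame", {"points": [(1.0, 0.5)]})
bus.publish("/goal", (4.0, 2.0))
```

In real ROS, each of these callbacks would live in its own process and the bus would be the ROS master's topic graph; the sketch only shows the direction of data flow between the four nodes.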
Figure 6 The planned path for the given destination point. The blue layer surrounding the walls represents the inflation layer in the costmap to prevent the wheelchair from moving too close to the walls or other obstacles.
4.1 Hardware
Figure 9 A detailed model of the wheelchair. The red circle highlights the RGBD sensor mounted on the wheelchair.

Figure 11 The costmap being updated based on the nearby dynamic obstacle using the point cloud data.
It is powered by a compact 8 volt (V) power module attached to the Nano's base. Nvidia's Jetson platform can handle computer vision and path planning tasks in real time with minimal lag. The setup is equipped with an Intel RealSense stereo camera module that provides the vision data required to map the environment. Two separate 43 A, 24 V DC motor drivers are used to control the right and left 24 V DC motors. These components are mounted on a powered wheelchair.

4.2 Software

During the first startup, the robot is completely blind, since it has no map in which to localize itself. Hence, it is switched to mapping mode, wherein the robot moves around to map the environment. Even when there is no map to give an estimate of nearby obstacles, RTAB-Map provides an API for the obstacle point cloud, using which we can map the environment safely. When the environment is sufficiently mapped, the wheelchair takes approximately the same path back to where it started in order to perform loop closure. This allows tuning out irregularities in the map caused by drift in the IMU sensor data.

Once done, the user can set the system to localization mode, after which the wheelchair can localize itself in the environment using visual odometry.
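RTAB-Map's loop-closure optimization is considerably more sophisticated, but the intuition — returning to the start exposes the accumulated drift, which can then be redistributed back over the trajectory — can be sketched in a few lines (a simplified illustration, not the actual RTAB-Map algorithm):

```python
def close_loop(poses):
    """poses: estimated (x, y) positions along the path; poses[0] is the start
    and poses[-1] is the estimate when the chair physically returns there.
    Returns poses with the end-point error linearly distributed away."""
    n = len(poses) - 1
    ex = poses[-1][0] - poses[0][0]   # accumulated drift in x
    ey = poses[-1][1] - poses[0][1]   # accumulated drift in y
    return [(x - ex * i / n, y - ey * i / n) for i, (x, y) in enumerate(poses)]

# A square-ish path whose estimate has drifted by (0.4, -0.2) on return:
drifted = [(0.0, 0.0), (2.1, 0.0), (2.2, 1.9), (0.3, 2.0), (0.4, -0.2)]
corrected = close_loop(drifted)
# corrected[-1] now coincides with the start pose, closing the loop.
```

The numbers here are invented; the point is only that a detected loop closure gives a measurable end-point residual, which constrains and corrects every intermediate pose.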
Table 2 Simulation trials involving the wheelchair to plan and navigate to goal location 2 with static obstacles.

Trial No.   Distance (m)   Real Time Taken (secs)   Goal reached   Remarks
1           23.85          135                      True           -
2           24.20          133                      True           -
3           23.50          140                      True           -

Table 4 Simulation trials involving the wheelchair to plan and navigate to goal location 1 with dynamic obstacles.

Trial No.   Distance (m)   Real Time Taken (secs)   Goal reached   Remarks
1           29.20          170                      True           -
2           28.50          165                      True           -
3           28.40          250                      False          Collided with obstacle and got stuck
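The per-table averages quoted in the surrounding text follow directly from the trial data; for example, the Table 2 figures can be reproduced as:

```python
# Reproducing the Table 2 averages (goal location 2, static obstacles)
# quoted in the text from the raw trial data.
from statistics import mean

# (distance_m, time_s) for the three Table 2 trials.
table2 = [(23.85, 135), (24.20, 133), (23.50, 140)]

avg_distance = mean(d for d, _ in table2)   # ~23.85 m
avg_time = mean(t for _, t in table2)       # 136 s
```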
5 Experiments
5.1 Simulations
Simulators are very powerful tools to test prototypes in various real-world scenarios that might be difficult to recreate physically due to cost and availability concerns. ROS provides a simulation package called Gazebo that can simulate various environments in real time [9].

The wheelchair system was simulated in a hospital environment and a home-like environment obtained from AWS Robotics' open-source GitHub repository, as shown in Fig. 8, to test its capabilities. The system was able to map and localize in its environment as well as detect nearby obstacles and avoid collisions with them.

A realistic model of the wheelchair was built and imported into the Gazebo simulation environment as shown in Fig. 9. It is driven by a differential motor driver provided by the ROS package. An RGBD camera plugin is mounted in front of the wheelchair, as highlighted in Fig. 9, and aligned so that it does not obstruct the user's view.

It is crucial to test the controller's ability to navigate through dynamic obstacles. Gazebo provides a plugin to create dynamic obstacles and assign them a trajectory. The dynamic obstacles were placed in the environments as shown in Fig. 10, such that they mimic the behavior of a human being doing mundane things.

The path planning performed by the move_base package can be visualized in RViz. The path planning node takes care of integrating user input into the move_base package. Both the local and global costmaps are updated to accommodate changes in the feasible path planning area. The obstacle point clouds provided by the RTAB-Map API are processed to update the costmaps. The costmaps are updated when an obstacle lies within a threshold value, which represents the distance of the obstacle from the wheelchair. The system is also able to clear areas of the costmap that previously held an obstacle but are now free.

5.1.1 Quantitative Measures

To evaluate the effectiveness of the wheelchair's control system, simulation trials were performed with both specific goal points and random goal points. The robot moved with a maximum linear velocity of 0.5 m/s and a maximum angular velocity of 1.5 rad/s. The start and end locations were selected manually according to the following criteria. First, the route must not be a simple straight path: it must turn around corners (Fig. 11) and pass through a narrow region. Second, it must have either a static or a dynamic obstacle along the path. We therefore selected two paths meeting these criteria and ran trials on them. The measured results are shown below (Tables 1, 2, 3, 4, 5, 6).

It took an average time of 150.3 s to reach the goal at an average distance of 28.13 m. For goal location 2 with static obstacles (Table 2), it took an average time of 136 s at an average distance of 23.85 m. For goal location 1 with dynamic obstacles (Table 4), the two successful attempts took an average time of 170 s at an average distance of 28.85 m.
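The costmap marking-and-clearing behavior described above can be sketched as follows (a simplified stand-in for the costmap_2d layers that move_base actually uses; the grid resolution and distance threshold are illustrative values, not the paper's parameters):

```python
# Simplified sketch of costmap marking and clearing from an obstacle cloud.
import math

RESOLUTION = 0.1   # metres per grid cell (illustrative value)
THRESHOLD = 2.0    # obstacles beyond this distance from the chair are ignored

def update_costmap(costmap, obstacle_points, chair_xy):
    """Mark cells containing nearby obstacle points and clear stale cells.

    costmap: a set of occupied (col, row) cells, modified in place.
    obstacle_points / chair_xy: (x, y) positions in metres.
    """
    marked = {
        (round(ox / RESOLUTION), round(oy / RESOLUTION))
        for (ox, oy) in obstacle_points
        if math.dist((ox, oy), chair_xy) <= THRESHOLD
    }
    costmap &= marked   # clear cells whose obstacle has moved away
    costmap |= marked   # mark cells occupied by nearby obstacles
    return costmap

costmap = set()
update_costmap(costmap, [(1.0, 1.0), (4.5, 0.0)], chair_xy=(0.0, 0.0))
# (1.0, 1.0) is within the 2 m threshold and is marked; (4.5, 0.0) is ignored.
update_costmap(costmap, [], chair_xy=(0.0, 0.0))
# the obstacle is gone, so its cell is cleared again.
```

The real local costmap additionally inflates obstacles by the robot footprint and clears space by raytracing through the observed cloud, but the mark/clear cycle per sensor update is the same idea.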
Table 3 Simulation trials involving the wheelchair to plan and navigate to random goal points with static obstacles.

Trial No.   Distance (m)   Real Time Taken (secs)   Goal reached   Remarks
1           23.41          189                      True           -
2           15.39          202                      True           Struggled to avoid colliding with the walls
3           17.41          160                      True           -
4           25.23          192                      True           -
Table 5 Simulation trials involving the wheelchair to plan and navigate to goal location 2 with dynamic obstacles.

Trial No.   Distance (m)   Real Time Taken (secs)   Goal reached   Remarks
1           24.50          150                      True           -
2           24.20          300                      False          Collided with second obstacle
3           23.80          350                      False          Collided with first obstacle and the walls

In this trial set, the wheelchair succeeded in only one trial because of the complexity of the path. Since the path had two dynamic obstacles moving in different directions, the wheelchair was able to avoid one obstacle but collided with the other.

To summarize, the wheelchair achieved a 100% success rate while reaching a goal in a static environment. It did sometimes struggle to align itself in the direction of the path, but it managed to recover from the situation and reach the goal. In the case of dynamic environments, however, the wheelchair was only able to achieve a 70% success rate. This shows that the wheelchair control algorithm is able to detect dynamic obstacles and navigate around them, but it still requires fine tuning to improve the success rate.

5.2 Hardware Implementation

5.2.1 Hardware Setup

Testing the system in the real world was a challenging task. Because of the high risk of damaging the powered wheelchair during the testing process, we decided to build a basic model, shown in Fig. 12, that represents the base of the wheelchair. The motors and the battery setup resemble those of the powered wheelchair, which is rated at 24 V DC. A wooden plank is mounted in front of the battery to hold the system modules.

The hardware setup consists of 24 V DC motors powered by two series-connected 12 V batteries with a maximum current input of 18 A, as highlighted in red and blue in Fig. 13. The motors are equipped with electronic brakes and a lever mechanism to actuate the brakes manually. The brakes can be disengaged by providing 24 V DC power to the brake terminals. A 24 V, 30 A PWM motor driver is used to actuate the motors based on the control input from the Jetson Nano. An Arduino Uno is used to deliver the signal to the motor driver. A 12 V DC power supply powers the Arduino Uno, and the Jetson Nano has a separate 8 V power module mounted at its bottom to provide a compact form factor. The RealSense camera is mounted on the front, making sure it has the best visibility of the environment. The stereo camera data is relayed to the Jetson Nano over a USB-C cable.

5.2.2 Real-Time Mapping and Localization

The mapping and the localization were performed entirely on the Jetson Nano. We were able to map our lab and navigate around it. RTAB-Map processes the RGB and depth images captured by the RealSense camera. Features are extracted from the captured images using the GFTT (Good Features to Track) and ORB (Oriented FAST and Rotated BRIEF) algorithms, as shown in Fig. 14. The extracted features enable the algorithm to compare consecutive images, which are in turn used for localizing the robot, as shown in Fig. 15. The images are captured at a frequency of 5 Hz and are stored in the database along with the camera's intrinsic and extrinsic parameters.

In very close-up and complicated situations, the algorithm finds it difficult to localize due to a lack of features in the RGBD images. The algorithm manages to work around this issue by making use of data from an IMU sensor embedded in the RealSense camera. In case both fail, the algorithm updates its location once the robot moves further and detects enough features in the RGBD image. The green and red axes, shown in Fig. 14, represent the robot's pose and orientation.

In a real-world scenario, there are many constantly moving obstacles. It is necessary to map the environment such that moving objects are removed from the scene and only the static structure is kept.
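As a toy illustration of this idea (not RTAB-Map's actual mechanism; the grid cells and the hit-count threshold are invented), a point can be admitted into the map only if it is re-observed across several frames, so a person briefly crossing the camera's view never persists:

```python
# Keep only map cells that are re-observed in enough frames; transient
# observations (a passing person) are dropped as dynamic.
from collections import Counter

def static_map(frames, min_hits=3):
    """frames: iterable of sets of grid cells observed per frame.
    Returns only the cells seen in at least `min_hits` frames."""
    hits = Counter()
    for frame in frames:
        hits.update(frame)
    return {cell for cell, n in hits.items() if n >= min_hits}

wall = {(0, 0), (0, 1), (0, 2)}           # static structure, visible every frame
walker = [{(2, 1)}, {(3, 1)}, {(4, 1)}]   # a person crossing, one cell per frame
frames = [wall | walker[i] for i in range(3)]
# static_map(frames) keeps the wall cells and drops the transient walker cells.
```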
Table 6 Simulation trials involving the wheelchair to plan and navigate to random goal points with dynamic obstacles.

Trial No.   Distance (m)   Real Time Taken (secs)   Goal reached   Remarks
1           14.39          250                      True           -
2           24.42          210                      True           Moved very close to obstacle
3           18.52          400                      False          Failed to reach the goal because of collision
4           26.31          255                      True           Got stuck near the wall for a few seconds
Figure 12 The basic hardware prototype with the system modules mounted on the front and a 24 V DC motor setup at the back.

Figure 13 An image labeling the different components of the hardware system.

Figure 14 The features extracted from the input RGB-D camera data.

Figure 15 The wheelchair localizes itself in the mapped environment using the stereo camera data, also taking its orientation into account.
Figure 16 A scenario wherein a person walks into the frame while mapping the environment. The RTAB-Map algorithm manages to remove the dynamic obstacle from the map.

3. RGBD cameras provide depth only up to a certain distance (3 m in the case of the Intel RealSense); the depth estimate is often noisy and has a limited field of view.
4. Even though move_base can quickly and safely plan a path, it lacks the ability to navigate the robot in narrow obstacle spaces and often gets stuck. The recovery behavior mentioned in Section 3.3 kicks in, but sometimes the algorithm still struggles to get the wheelchair out of such scenarios.

7 Conclusion

In this paper, we introduced a modular system that can be mounted on an existing powered wheelchair to provide autonomous assistive functionality to a disabled user. The system is designed to be compact and cost-efficient while at the same time delivering the required support for its users. Making full use of machine vision is difficult, but it provides various opportunities and cuts down on the use of bulky and expensive sensors. By using open-source software, we enable developers to create their own applications to improve the product. Our wheelchair system is intended to be used in a dynamic environment, which may be a hospital or an indoor space inside a house. It will be able to assist the user in navigating to their specified destination with ease and safety.

Author Contributions Vaishanth Ramaraj implemented the ROS-based control system. Vaishanth Ramaraj and Atharva Chandrashekhar Paralikar contributed to the system integration and the hardware implementation. Eung-Joo Lee provided technical support on the real-time aspects of the project. Syed Muhammad Anwar provided technical support on the image processing aspects of the project, and Reza Monfaredi provided technical support on system integration and the shared control system.

Funding This project was internally funded by the Sheikh Zayed Institute for Pediatric Surgical Innovations at Children's National Medical Center.

Declarations

Ethics Approval Not applicable.

References

1. Rathore, D. K., Srivastava, P., Pandey, S., & Jaiswal, S. (2014). A novel multipurpose smart wheelchair. In 2014 IEEE Students' Conference on Electrical, Electronics and Computer Science. IEEE.
2. Simpson, R. C., LoPresti, E. F., & Cooper, R. A. (2008). How many people would benefit from a smart wheelchair? Journal of Rehabilitation Research & Development, 45(1).
3. Campos, J. J., & Bertenthal, B. I. (1987). Locomotion and psychological development in infancy. In Childhood Powered Mobility: Developmental, Technical and Clinical Perspectives, Seattle, WA, RESNA Conference Proceedings, 11–42.
4. Fehr, L., Langbein, W., & Skaar, S. (2000). Adequacy of power wheelchair control interfaces for persons with severe disabilities: A clinical survey. Journal of Rehabilitation Research and Development, 37.
5. Tomari, M. R., & Kuno, Y. (2012). Development of smart wheelchair system for a user with severe motor impairment. Procedia Engineering, 41, 538–546.
6. Simpson, R. C., Poirot, D., & Baxter, F. (2002). The Hephaestus smart wheelchair system. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 10(2), 118–122.
7. Kuno, Y., Murashima, T., Shimada, N., & Shirai, Y. (2000). Interactive gesture interface for intelligent wheelchairs. In 2000 IEEE International Conference on Multimedia and Expo (ICME 2000), Vol. 2. IEEE.
8. Data and Statistics for Cerebral Palsy. Available online: https://www.cdc.gov/ncbddd/cp
9. Takaya, K., Asai, T., Kroumov, V., & Smarandache, F. (2016). Simulation environment for mobile robots testing using ROS and Gazebo. In 2016 20th International Conference on System Theory, Control and Computing (ICSTCC). IEEE. https://doi.org/10.1109/ICSTCC.2016.7790647
10. Zheng, K. (2016). ROS Navigation Tuning Guide. https://kaiyuzheng.me/documents/navguide.pdf
11. move_base package summary. http://wiki.ros.org/move_base

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.