Open Autonomous Navigation Platform for Power Wheelchairs
Computer vision has many applications in robotics. This project aims to use computer vision as a feedback loop for an electric wheelchair, allowing it to navigate indoor environments. At this stage, the project provides remote control of the wheelchair over a wireless network, with control available through a web browser.
Control and response signals can be sent to and received from the wheelchair using a GET function written in PHP, which has demonstrated a fast response to commands. Kinect camera streams for both the RGB and depth images are also available for processing via MJPEG streams. This allows computationally intensive image processing and navigation decisions to be made remotely, reducing power consumption at the wheelchair itself.
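The exact endpoints and stream URLs depend on the wheelchair's web interface; the sketch below uses a hypothetical host name, paths, and query parameters (not the project's actual API) to illustrate how a remote Python client might send a GET command and process the Kinect RGB MJPEG stream.

# Minimal sketch of a remote client for the wheelchair interface.
# The host name, endpoint paths, and query parameters are illustrative
# assumptions, not the actual API of this project.
import cv2
import requests

WHEELCHAIR_HOST = "http://wheelchair.local"  # hypothetical address

def send_command(command, speed=50):
    """Send a drive command (e.g. 'forward', 'stop') via HTTP GET."""
    response = requests.get(
        f"{WHEELCHAIR_HOST}/control.php",          # hypothetical PHP endpoint
        params={"command": command, "speed": speed},
        timeout=1.0,
    )
    return response.text  # response signal returned by the PHP script

# Read frames from the Kinect RGB MJPEG stream for remote processing.
stream = cv2.VideoCapture(f"{WHEELCHAIR_HOST}:8080/rgb.mjpeg")  # hypothetical stream URL
while True:
    grabbed, frame = stream.read()
    if not grabbed:
        break
    # ...computationally intensive processing and navigation decisions here...
    cv2.imshow("RGB stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
stream.release()
cv2.destroyAllWindows()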
This project is based largely on interfacing with physical hardware such as motors, relays, etc. As a result, a list of hardware dependencies has also been included.
For manual control via a web browser, the following dependencies are required:
For autonomous navigation, the above dependencies and the following are required:
conda install -c anaconda numpy
conda install -c conda-forge opencv
conda install -c conda-forge matplotlib
conda install pip
pip install imutils
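Once installed, a quick sanity check (a minimal sketch, assuming a standard Anaconda Python 3 environment) confirms the packages import correctly and reports the installed OpenCV version:

# Verify the Python dependencies installed correctly.
import cv2
import numpy as np
import matplotlib
import imutils

print("OpenCV version:", cv2.__version__)
print("NumPy version:", np.__version__)
print("Matplotlib version:", matplotlib.__version__)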
The version of OpenCV used during development was 4.0.0-alpha; however, you can use the most up-to-date version. This tutorial at pyimagesearch will get you started.

This source code is copyright, all rights reserved, Ryan McCartney, 2018.