
This monograph is devoted to the theory and development of autonomous navigation of mobile robots using a computer-vision-based sensing mechanism. Conventional robot navigation systems, which rely on traditional sensors such as ultrasonic, IR, GPS, and laser sensors, suffer from drawbacks arising either from the physical limitations of the sensors or from their high cost. Vision sensing has emerged as a popular alternative: cameras can reduce the overall cost while maintaining a high degree of intelligence, flexibility, and robustness. This book includes a detailed description of several new approaches for real-life vision-based autonomous navigation algorithms and SLAM. It presents the concept of how subgoal-based, goal-driven navigation can be carried out using vision sensing. The development of vision-based robots for path/line tracking using fuzzy logic is presented, as well as how a low-cost robot can be developed in-house in the laboratory with microcontroller-based sensor systems. The book describes the successful integration of low-cost external peripherals with off-the-shelf procured robots. An important highlight of the book is a detailed, step-by-step sample demonstration of how vision-based navigation modules can actually be implemented in real life under a 32-bit Windows environment. The book also discusses the implementation of vision-based SLAM using a two-camera system.
Autonomous robots can replace humans in exploring hostile areas, such as Mars and other inhospitable regions. A fundamental task for an autonomous robot is navigation. Due to the inherent difficulty of understanding natural objects and changing environments, navigation in unstructured environments, such as natural terrain, remains largely unsolved. Ill-structured environments [1], in which roads do not disappear completely, offer a more tractable setting in which to address these difficulties. We develop algorithms for robot navigation on ill-structured roads with monocular vision based on two elements: appearance information and geometric information. The fundamental problem in appearance-based navigation is road representation. We propose a new type of road description, a vision vector space (V2-Space), which is a set of local collision-free directions in image space. We report how the V2-Space is constructed and how it can be used to incorporate vehicle kinematic, dynamic, and time-delay constraints in motion planning. Appearance-based navigation fails in some situations because it lacks geometric information, so we expand the work to incorporate geometry and present a vision-based navigation system that uses it. To compute depth with monocular vision, we use images obtained from different camera perspectives during robot navigation. For any given image pair, the depth error in regions close to the camera baseline can be excessively large. We name this degenerate region the untrusted area; entering it could lead to collisions. We analyze how the untrusted areas are distributed on the road plane and predict them before the robot moves. We propose an algorithm that helps the robot avoid the untrusted area by selecting optimal locations at which to capture frames while navigating.
Experiments show that the algorithm can significantly reduce the depth error and hence reduce the risk of collisions. Although this approach is developed for monocular vision, it can be applied to multiple cameras to control the depth error. The concept of an untrusted area can be applied to 3D reconstruction with a two-view approach.
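The baseline-degeneracy effect described above can be illustrated with a small two-view triangulation simulation (all parameters here are illustrative assumptions, not values from the thesis): a point lying close to the robot's direction of motion produces almost no parallax between the two frames, so pixel noise turns into a large depth error, while a point well off the motion axis is triangulated accurately.

```python
import numpy as np

rng = np.random.default_rng(0)
f = 500.0                        # focal length in pixels (assumed)
c0 = np.array([0.0, 0.0, 0.0])   # camera centre at the first frame
c1 = np.array([0.0, 0.0, 0.3])   # centre after driving 0.3 m forward (assumed)

def project(p, c):
    """Pinhole projection of 3D point p from camera centre c (no rotation)."""
    q = p - c
    return f * q[:2] / q[2]

def triangulate(u0, u1):
    """Midpoint triangulation from two noisy pixel observations."""
    d0 = np.array([u0[0], u0[1], f]); d0 /= np.linalg.norm(d0)
    d1 = np.array([u1[0], u1[1], f]); d1 /= np.linalg.norm(d1)
    # find the closest points on the two back-projected rays
    A = np.stack([d0, -d1], axis=1)
    t, s = np.linalg.lstsq(A, c1 - c0, rcond=None)[0]
    return 0.5 * ((c0 + t * d0) + (c1 + s * d1))

def depth_rmse(point, trials=200, pix_noise=0.5):
    """RMS triangulation error under Gaussian pixel noise."""
    errs = []
    for _ in range(trials):
        u0 = project(point, c0) + rng.normal(0, pix_noise, 2)
        u1 = project(point, c1) + rng.normal(0, pix_noise, 2)
        errs.append(np.linalg.norm(triangulate(u0, u1) - point))
    return float(np.sqrt(np.mean(np.square(errs))))

# A point well off the motion axis vs. one almost on it (both 5 m ahead):
print(depth_rmse(np.array([2.0, 0.0, 5.0])))   # good parallax, small error
print(depth_rmse(np.array([0.1, 0.0, 5.0])))   # near the baseline: untrusted
```

Picking a second capture position that increases the parallax of points of interest, as the abstract proposes, shrinks exactly this error term.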
The two-volume set IFIP AICT 363 and 364 constitutes the refereed proceedings of the 12th International Conference on Engineering Applications of Neural Networks, EANN 2011, and the 7th IFIP WG 12.5 International Conference, AIAI 2011, held jointly in Corfu, Greece, in September 2011. The 52 revised full papers and 28 revised short papers presented together with 31 workshop papers were carefully reviewed and selected from 150 submissions. The first volume includes the papers that were accepted for presentation at the EANN 2011 conference. They are organized in topical sections on computer vision and robotics, self organizing maps, classification/pattern recognition, financial and management applications of AI, fuzzy systems, support vector machines, learning and novel algorithms, reinforcement and radial basis function ANN, machine learning, evolutionary genetic algorithms optimization, Web applications of ANN, spiking ANN, feature extraction minimization, medical applications of AI, environmental and earth applications of AI, multi layer ANN, and bioinformatics. The volume also contains the accepted papers from the Workshop on Applications of Soft Computing to Telecommunication (ASCOTE 2011), the Workshop on Computational Intelligence Applications in Bioinformatics (CIAB 2011), and the Second Workshop on Informatics and Intelligent Systems Applications for Quality of Life Information Services (ISQLIS 2011).
The field of robotic vision has advanced dramatically in recent years with the development of new range sensors. Tremendous progress has been made, with significant impact on areas such as robotic navigation, scene/environment understanding, and visual learning. This edited book provides a solid and diversified reference source for some of the most recent important advancements in the field of robotic vision. The book starts with articles that describe new techniques to understand scenes from 2D/3D data, such as estimation of planar structures, recognition of multiple objects in a scene using different kinds of features and their spatial and semantic relationships, generation of 3D object models, and an approach to recognizing partially occluded objects. Novel techniques are introduced to improve 3D perception accuracy with other sensors such as a gyroscope, to improve positioning accuracy with a visual-servoing-based alignment strategy for microassembly, and to increase object recognition reliability using related manipulation motion models. For autonomous robot navigation, different vision-based localization and tracking strategies and algorithms are discussed. New approaches using probabilistic analysis for robot navigation, online learning of vision-based robot control, and 3D motion estimation via intensity differences from a monocular camera are described. This collection will be beneficial to graduate students, researchers, and professionals working in the area of robotic vision.
The ability to acquire and respond appropriately to targets or obstacles, moving or stationary, while underway, is critical for all unmanned mobile robot applications. This is achieved by most animate systems, but has proven difficult for artificial systems. We propose that efficient and extensible solutions to the target acquisition, discrimination, and maintenance problem may be found when the machine sensor-effector control algorithms emulate the mechanisms employed by biological systems. In nature, visual motion provides the basis for these functions. Because visual motion can be due either to target motion or to platform motion, a method of motion segmentation must be found. We present a solution to this problem that emulates natural strategies, and describe its implementation in an autonomous visually controlled mobile robot.
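To make the segmentation problem concrete, one common strategy (a sketch under assumed values, not necessarily the mechanism used in this work) is to estimate the dominant image motion induced by the platform, subtract it, and flag pixels whose residual motion is large:

```python
import numpy as np

# Hypothetical sketch of motion segmentation by ego-motion compensation:
# model the platform's motion as a dominant global optical flow, estimate
# it robustly, and flag pixels whose residual motion exceeds a threshold.

H, W = 60, 80
flow = np.zeros((H, W, 2))
flow[..., 0] = 1.5                  # global flow from platform motion (assumed)
flow[..., 1] = -0.5
flow[20:30, 40:55] += (2.0, 1.0)    # an independently moving target

# Robust estimate of the ego-motion flow: per-channel median over the image.
ego = np.median(flow.reshape(-1, 2), axis=0)

# Residual flow after compensating for platform motion.
residual = np.linalg.norm(flow - ego, axis=2)
target_mask = residual > 1.0        # threshold in pixels/frame (assumed)

print(target_mask.sum())            # prints 150: only the target pixels
```

The median works here because the independently moving target occupies a minority of the image; real systems replace it with robust model fitting over measured optical flow.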
Many robotics researchers consider high-level vision algorithms computationally too expensive for use in robot guidance. This book introduces the reader to an alternative approach to perception for autonomous, mobile robots. It explores how to apply methods of high-level computer vision and fuzzy logic to the guidance and control of a mobile robot. The book introduces a knowledge-based approach to vision modeling for robot guidance, in which advantage is taken of the constraints of the robot's physical structure, the tasks it performs, and the environments it works in. This makes high-level computer vision algorithms such as object recognition feasible at a speed sufficient for real-time navigation. The text presents algorithms that exploit these constraints at all levels of vision, from image processing to model construction and matching, as well as shape recovery. These algorithms are demonstrated in the navigation of a wheeled mobile robot.
The book is intended for advanced students in physics, mathematics, computer science, electrical engineering, robotics, and mechanical engineering, and for specialists in computer vision and robotics interested in the techniques needed to develop vision-based robot projects. It focuses on autonomous, mobile service robots for indoor work. A basic knowledge of computer science is assumed, and an introductory chapter helps the reader fill any gaps. The practical treatment of the material enables a comprehensive understanding of how to handle specific problems, such as inhomogeneous illumination or occlusion. The reader is expected to be able to develop object-oriented programs and to have a basic mathematical background. Topics such as image processing, navigation, camera types, and camera calibration structure the steps for developing further vision-based robot applications.
The aim of this research work is to design vision-based intelligent navigation techniques that enable an autonomous mobile robot to operate in indoor and outdoor environments. Images from the mobile robot, equipped with a camera, are preprocessed and used to extract the drivable region. A novel weighted matrix algorithm (WMA) segments the drivable region in both structured and unstructured scenarios. Obstacle distance, computed from ultrasonic sensor readings, is used in conjunction with the extracted drivable road to generate motion commands for road-following and obstacle-avoidance behavior. The navigation controller is implemented in MATLAB using fuzzy logic soft computing techniques to produce the desired motion commands. The designed controllers are evaluated in a simulator under varied obstacle and environment conditions prior to real-time implementation on a prototype robot.
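To make the fuzzy-control idea concrete, here is a minimal sketch of a road-following / obstacle-avoidance rule base in plain Python (the membership functions, rule set, and constants are illustrative assumptions, not the thesis's actual MATLAB controller):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a to peak b, falling to c."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steer(obstacle_dist, road_offset):
    """Steering angle (degrees) from a tiny Sugeno-style fuzzy rule base.

    obstacle_dist: metres to the nearest obstacle ahead (assumed units).
    road_offset:   lateral offset from the road centre, negative = left.
    """
    near    = tri(obstacle_dist, 0.0, 0.0, 1.5)   # obstacle is NEAR
    left    = tri(road_offset, -1.0, -1.0, 0.0)   # robot drifted LEFT
    right   = tri(road_offset,  0.0,  1.0, 1.0)   # robot drifted RIGHT
    centred = tri(road_offset, -0.5,  0.0, 0.5)   # roughly on centre line
    # rule firing strengths -> crisp consequents (turn angles, assumed)
    rules = [(near, 40.0), (left, 20.0), (right, -20.0), (centred, 0.0)]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(steer(5.0, 0.0))    # clear road, on centre: 0.0 (drive straight)
print(steer(0.5, 0.0))    # near obstacle pulls the output toward +40
print(steer(5.0, -1.0))   # fully drifted left: 20.0 (steer right)
```

The weighted-average defuzzification blends competing rules smoothly, which is what makes fuzzy controllers attractive for this kind of behavior arbitration between road following and obstacle avoidance.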