Giulio Reina
Role
Researcher
Organization
Università del Salento
Department
Dipartimento di Ingegneria dell'Innovazione
Scientific Area
Area 09 - Industrial and Information Engineering
Scientific Disciplinary Sector
ING-IND/13 - Applied Mechanics of Machines
ERC Sector, Level 1
PE - Physical sciences and engineering
ERC Sector, Level 2
PE7 Systems and Communication Engineering: Electrical, electronic, communication, optical and systems engineering
ERC Sector, Level 3
PE7_10 Robotics
In natural outdoor settings, advanced perception systems and learning strategies are a major requirement for an autonomous vehicle to sense and understand the surrounding environment, recognizing artificial and natural structures, topology, vegetation and drivable paths. Stereo vision has been used extensively for this purpose. However, conventional single-baseline stereo does not scale well to different depths of perception. In this paper, a multi-baseline stereo frame is introduced to perform accurate 3D scene reconstruction from near range up to several meters away from the vehicle. A classifier that segments the scene into navigable and non-navigable areas based on 3D data is also described. It incorporates geometric features within an online self-learning framework to model and identify traversable ground, without any a priori assumption on the terrain characteristics. The ground model is automatically retrained during the robot's motion, thus ensuring adaptation to environmental changes. The proposed strategy is of general applicability for robot perception, and it can be implemented using any range sensor. Here, it is demonstrated for stereo-based data acquired by the multi-baseline device. Experimental tests, carried out in a rural environment with an off-road vehicle, are presented. It is shown that the use of a multi-baseline stereo frame allows for accurate reconstruction and scene segmentation at a wide range of visible distances, thus increasing the overall flexibility and reliability of the perception system.
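Why a single baseline cannot cover all ranges follows directly from stereo triangulation: depth is Z = fB/d, so the depth error caused by a one-pixel disparity error grows as Z^2/(fB). A minimal sketch of this trade-off, with illustrative focal length and baseline values that are not taken from the paper:

```python
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Standard pinhole stereo triangulation: Z = f * B / d (f in pixels, B in meters)."""
    return f_px * baseline_m / disparity_px

def depth_quantization_error(f_px: float, baseline_m: float, z_m: float,
                             disparity_step_px: float = 1.0) -> float:
    """First-order depth error for a one-step disparity error: dZ ~ Z^2 / (f * B) * dd.
    The quadratic growth with range is why a single short baseline degrades far from
    the vehicle, while a single long baseline loses near-field overlap."""
    return z_m ** 2 / (f_px * baseline_m) * disparity_step_px

# Illustrative numbers (hypothetical, not from the paper): an 800 px focal length
# with a short (0.12 m) and a long (0.60 m) baseline, evaluated at 2 m and 20 m.
for B in (0.12, 0.60):
    for Z in (2.0, 20.0):
        print(f"B={B:.2f} m, Z={Z:.0f} m -> depth step ~ "
              f"{depth_quantization_error(800.0, B, Z):.3f} m")
```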
Purpose - This research aims to address the issue of safe navigation for autonomous vehicles in highly challenging outdoor environments. Indeed, robust navigation of autonomous mobile robots over long distances requires advanced perception means for terrain traversability assessment. Design/methodology/approach - The use of visual systems may represent an efficient solution. This paper discusses recent findings in terrain traversability analysis from RGB-D images. In this context, the concept of a point described only by its Cartesian coordinates is reinterpreted in terms of a local description. As a result, a novel descriptor for inferring the traversability of a terrain through its 3D representation, referred to as the unevenness point descriptor (UPD), is conceived. This descriptor features robustness and simplicity. Findings - The UPD-based algorithm shows robust terrain perception capabilities in both indoor and outdoor environments. The algorithm is able to detect obstacles and terrain irregularities. The system performance is validated in field experiments in both indoor and outdoor environments. Research limitations/implications - The UPD enhances the interpretation of the 3D scene to improve the ambient awareness of unmanned vehicles. The larger implications of this method reside in its applicability for path planning purposes. Originality/value - This paper describes a visual algorithm for traversability assessment based on normal vector analysis. The algorithm is simple and efficient, providing fast real-time implementation, since the UPD does not require any data processing or previously generated digital elevation map to classify the scene. Moreover, it defines a local descriptor, which can be of general value for segmentation purposes of 3D point clouds and allows the underlying geometric pattern associated with each single 3D point to be fully captured and difficult scenarios to be correctly handled.
An efficient and reliable onboard perception system is critical for a mobile robot to increase its degree of autonomy toward the accomplishment of the assigned task. In this regard, laser range sensors represent a feasible and promising solution that is rapidly gaining interest in the robotics community. This paper describes recent work of the authors in hardware and algorithm development of a 3-D laser scanner for mobile robot applications, which features low cost, light weight, compactness, and low power consumption. The sensor allows a vehicle to autonomously scan its environment and to generate an internal hazard representation of the world in the form of digital elevation maps. This suggests a general approach to terrain analysis in structured and unstructured environments for safe and collision-free path planning. The proposed sensor system, along with the algorithms for mapping and planning, is validated in indoor laboratory experiments as well as in tests on natural terrain using an all-terrain rover.
This work presents an IR-based system for parking assistance and obstacle detection in the automotive field that employs the Microsoft Kinect camera for fast 3D point cloud reconstruction. In contrast to previous research that attempts to explicitly identify obstacles, the proposed system aims to detect “reachable regions” of the environment, i.e., those regions where the vehicle can drive to from its current position. A user-friendly 2D traversability grid of cells is generated and used as a visual aid for parking assistance. Given a raw 3D point cloud, first each point is mapped into individual cells, then, the elevation information is used within a graph-based algorithm to label a given cell as traversable or non-traversable. Following this rationale, positive and negative obstacles, as well as unknown regions can be implicitly detected. Additionally, no flat-world assumption is required. Experimental results, obtained from the system in typical parking scenarios, are presented showing its effectiveness for scene interpretation and detection of several types of obstacle.
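As a rough sketch of the cell-labeling idea described above: points are binned into a 2D grid and each cell is marked traversable or not from the elevation spread of its points. The paper's graph-based labeling over neighboring cells is simplified here to a per-cell threshold, and all parameter values are assumptions:

```python
import numpy as np

def traversability_grid(points: np.ndarray, cell_size: float = 0.1,
                        max_step: float = 0.05) -> np.ndarray:
    """Map a raw 3D point cloud (N x 3, x/y on the ground plane, z up) to a 2D grid
    and label each cell from the elevation spread of the points it contains.
    Cells with no returns stay 'unknown' (-1); 0 = traversable, 1 = not."""
    ij = np.floor(points[:, :2] / cell_size).astype(int)
    ij -= ij.min(axis=0)                       # shift indices to start at 0
    shape = tuple(ij.max(axis=0) + 1)
    zmin = np.full(shape, np.inf)
    zmax = np.full(shape, -np.inf)
    for (i, j), z in zip(ij, points[:, 2]):
        zmin[i, j] = min(zmin[i, j], z)
        zmax[i, j] = max(zmax[i, j], z)
    grid = np.full(shape, -1, dtype=int)       # -1 = unknown (no returns)
    seen = np.isfinite(zmin)
    grid[seen] = (zmax[seen] - zmin[seen] > max_step).astype(int)
    return grid
```

Because unknown cells are kept distinct from traversable and non-traversable ones, positive and negative obstacles and occluded regions fall out implicitly, without any flat-world assumption.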
This paper presents a novel approach to detect traversable and non-traversable regions of the environment from a depth image, which could enhance the mobility and safety of mobile robots through integration with localization, control and planning methods. The proposed system is based on Principal Component Analysis (PCA). PCA theory provides a powerful means to analyze 3D surfaces and is widely used in computer vision. It can be successfully applied, as well, to increase the degree of perception in autonomous vehicles, as new generations of 3D imaging sensors, including stereo and RGB-D cameras, are increasingly introduced. The approach described in this paper is based on the estimation of the normal vector to a local surface, leading to the definition of a novel descriptor, the so-called Unevenness Point Descriptor. Experimental results, obtained from indoor and outdoor environments, are presented to validate the system. It is demonstrated that the proposed approach can be effectively used for scene segmentation and that it can efficiently handle difficult scenarios, including the presence of terrain slopes.
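A minimal sketch of the underlying PCA step: the eigenvector of the local covariance with the smallest eigenvalue approximates the surface normal, and its deviation from the vertical gives a simple unevenness cue. This is a generic stand-in, not the exact UPD formulation:

```python
import numpy as np

def local_normal_pca(neighborhood: np.ndarray):
    """PCA on a k x 3 point neighborhood: the eigenvector of the covariance with
    the smallest eigenvalue approximates the local surface normal; the ratio of
    that eigenvalue to the total captures local surface roughness."""
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    normal = eigvecs[:, 0]
    if normal[2] < 0:                          # orient normals upward
        normal = -normal
    roughness = eigvals[0] / eigvals.sum()
    return normal, roughness

def unevenness(neighborhood: np.ndarray) -> float:
    """Illustrative unevenness cue: angle between the local normal and gravity.
    Flat ground gives values near 0; slopes and obstacles give larger angles."""
    normal, _ = local_normal_pca(neighborhood)
    up = np.array([0.0, 0.0, 1.0])
    return float(np.arccos(np.clip(normal @ up, -1.0, 1.0)))
```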
This paper describes the development of an electric car prototype aimed at autonomous, energy-efficient driving. Starting with an urban electric car, we describe the mechanical and mechatronic add-ons required to automate its driving. In addition, a variety of exteroceptive and proprioceptive sensors have been installed in order to obtain accurate measurements for datasets aimed at characterizing dynamic models of the vehicle, including the complex problem of wheel-soil slippage. Current and voltage are also monitored at key points of the electric power circuits in order to obtain an accurate model of power consumption, with the goal of allowing predictive path planners to trace routes as a trade-off between path length and overall power consumption. In order to handle the required variety of sensors involved in the vehicle, a MOOS-based software architecture has been developed based on distributed nodes that communicate over an onboard local area network. We provide experimental results describing the current stage of development of this platform, where a number of datasets have already been acquired successfully and initial work on dynamics modeling is being carried out.
Autonomous off-road ground vehicles require advanced perception systems in order to sense and understand the surrounding environment, while ensuring robustness under compromised visibility conditions. In this paper, the use of millimeter wave radar is proposed as a possible solution for all-weather off-road perception. A self-learning ground classifier is developed that segments radar data for scene understanding and autonomous navigation tasks. The proposed system comprises two main stages: an adaptive training stage and a classification stage. During the training stage, the system automatically learns to associate appearance of radar data with class labels. Then, it makes predictions based on past observations. The training set is continuously updated online using the latest radar readings, thus making it feasible to use the system for long range and long duration navigation, over changing environments. Experimental results, obtained with an unmanned ground vehicle operating in a rural environment, are presented to validate this approach. Conclusions are drawn on the utility of millimeter-wave radar as a robotic sensor for persistent and accurate perception in natural scenarios.
Reliable terrain analysis is a key requirement for a mobile robot to operate safely in challenging environments, such as in natural outdoor settings. In these contexts, conventional navigation systems that assume a priori knowledge of the terrain geometric properties, appearance properties, or both, would most likely fail, due to the high variability of the terrain characteristics and environmental conditions. In this paper, a self-learning framework for ground detection and classification is introduced, where the terrain model is automatically initialized at the beginning of the vehicle's operation and progressively updated online. The proposed approach is of general applicability for a robot's perception purposes, and it can be implemented using a single sensor or combining different sensor modalities. In the context of this paper, two ground classification modules are presented: one based on radar data, and one based on monocular vision and supervised by the radar classifier. Both of them rely on online learning strategies to build a statistical feature-based model of the ground, and both implement a Mahalanobis distance classification approach for ground segmentation in their respective fields of view. In detail, the radar classifier analyzes radar observations to obtain an estimate of the ground surface location based on a set of radar features. The output of the radar classifier serves as well to provide training labels to the visual classification module. Once trained, the vision-based classifier is able to discriminate between ground and nonground regions in the entire field of view of the camera. It can also detect multiple terrain components within the broad ground class. Experimental results, obtained with an unmanned ground vehicle operating in a rural environment, are presented to validate the system. It is shown that the proposed approach is effective in detecting the drivable surface, reaching an average classification accuracy of about 80% on the entire video frame, with the additional advantage of not requiring human intervention for training or a priori assumptions on the ground appearance.
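The abstract names a Mahalanobis distance classifier over a statistical feature-based model of the ground; a minimal sketch of that core, with the feature extraction, online label sourcing, and retraining schedule left out:

```python
import numpy as np

class GroundModel:
    """Feature-based statistical ground model with Mahalanobis-distance
    classification. In the paper's scheme the model is initialized at start-up
    and refit online from the latest self-labeled ground samples."""

    def __init__(self, threshold: float = 3.0):
        self.threshold = threshold
        self.mean = None
        self.cov_inv = None

    def fit(self, ground_features: np.ndarray) -> None:
        """ground_features: N x D matrix of features from known-ground regions."""
        self.mean = ground_features.mean(axis=0)
        cov = np.cov(ground_features, rowvar=False)
        # Small diagonal loading keeps the covariance invertible.
        self.cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))

    def is_ground(self, features: np.ndarray) -> np.ndarray:
        """Label each row as ground if its Mahalanobis distance to the model
        falls below the threshold."""
        d = features - self.mean
        dist = np.sqrt(np.einsum("ij,jk,ik->i", d, self.cov_inv, d))
        return dist < self.threshold
```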
Agricultural production must double by 2050 in order to meet the expected food demand due to population growth. Precision agriculture is the key to improve productivity and efficiency in the use of resources, thus helping to achieve this goal under the diverse challenges currently faced by agriculture mainly due to climate changes, land degradation, availability of farmable land, labor force shortage, and increasing costs. To face these challenges, precision agriculture uses and develops sensing methodologies that provide information about crop growth and health indicators. This paper presents a survey of the state-of-the-art in optical visible and near-visible spectrum sensors and techniques to estimate phenotyping variables from intensity, spectral, and volumetric measurements. The sensing methodologies are classified into three areas according to the purpose of the measurements: 1) plant structural characterization; 2) plant/fruit detection; and 3) plant physiology assessment. This paper also discusses the progress in data processing methods and the current open challenges in agricultural tasks in which the development of innovative sensing methodologies is required, such as pruning, fertilizer and pesticide management, crop monitoring, and automated harvesting.
The paper deals with the attenuation of mechanical vibrations in automotive suspensions using active vibration absorbers. The main features and performance of an automotive suspension featuring an active vibration absorber are assessed adopting a two-degree-of-freedom quarter-car model. The active vibration absorber is designed following a linear quadratic regulation (LQR) control law. A comparison of the proposed system with a suspension that uses a purely passive vibration absorber and with a state-of-the-art active suspension is presented. The results of the numerical simulations show that active vibration absorbers could be effective in improving suspension handling and comfort performance. They could also serve as a possible alternative to standard active suspensions in terms of lower power consumption, simplicity and cost.
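For reference, a compact sketch of an LQR design on a two-degree-of-freedom quarter-car. Note the simplification: the control here is a direct active force between the sprung and unsprung masses, not the added vibration absorber the paper studies, and all parameter values are illustrative:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Two-DOF quarter-car parameters (illustrative values, not from the paper).
ms, mu = 300.0, 40.0          # sprung / unsprung mass [kg]
ks, cs = 20000.0, 1200.0      # suspension stiffness [N/m] and damping [Ns/m]
kt = 180000.0                 # tire stiffness [N/m]

# State x = [suspension deflection, sprung velocity, tire deflection, unsprung
# velocity]; control u is an active force between the two masses. The road
# profile input is omitted for the regulator design.
A = np.array([[0, 1, 0, -1],
              [-ks / ms, -cs / ms, 0, cs / ms],
              [0, 0, 0, 1],
              [ks / mu, cs / mu, -kt / mu, -cs / mu]])
B = np.array([[0], [1 / ms], [0], [-1 / mu]])

# LQR weights trading ride comfort (sprung velocity) against deflections.
Q = np.diag([1e4, 1e2, 1e4, 1.0])
R = np.array([[1e-4]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # optimal state feedback, u = -K x
print("LQR gain:", K)
```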
This paper describes recent efforts by the authors in the development of a robotic hand, referred to as the Adam's hand. The end-effector is underactuated through a multiple bevel-gear differential system that is used to operate all five fingers, resulting in 15 degrees of freedom actuated by just 1 degree of actuation. Special focus is devoted to the transmission ratios and gear dimensions of the system, to keep the kinematic behaviour and the dimensions of the prototype as close as possible to those of the human hand.
In this research, adaptive perception for driving automation is discussed so as to enable a vehicle to automatically detect driveable areas and obstacles in the scene. It is especially designed for outdoor contexts, where conventional perception systems that rely on a priori knowledge of the terrain's geometric properties, appearance properties, or both, are prone to fail, due to the variability in the terrain properties and environmental conditions. In contrast, the proposed framework uses a self-learning approach to build a model of the ground class that is continuously adjusted online to reflect the latest ground appearance. The system also features high flexibility, as it can work using a single sensor modality or a multi-sensor combination. In the context of this research, different embodiments have been demonstrated using range data coming from either a radar or a stereo camera, and adopting self-supervised strategies where monocular vision is automatically trained by radar or stereo vision. A comprehensive set of experimental results, obtained with different ground vehicles operating in the field, is presented to validate and assess the performance of the system.
In the last few years, robotic technology has been increasingly employed in agriculture to develop intelligent vehicles that can improve productivity and competitiveness. Accurate and robust environmental perception is a critical requirement to address unsolved issues including safe interaction with field workers and animals, obstacle detection in controlled traffic applications, crop row guidance, surveying for variable rate applications, and situation awareness, in general, towards increased process automation. Given the variety of conditions that may be encountered in the field, no single sensor exists that can guarantee reliable results in every scenario. The development of a multi-sensory perception system to increase the ambient awareness of an agricultural vehicle operating in crop fields is the objective of the Ambient Awareness for Autonomous Agricultural Vehicles (QUAD-AV) project. Different onboard sensor technologies, namely stereovision, LIDAR, radar, and thermography, are considered. Novel methods for their combination are proposed to automatically detect obstacles and discern traversable from non-traversable areas. Experimental results, obtained in agricultural contexts, are presented showing the effectiveness of the proposed methods.
This paper presents an innovative suspension system with variable wheel camber to improve the mobility of robots on rough terrain. The system is optimized for planetary rovers that employ conventional rocker-type suspensions. The main advantage of the proposed system is that each wheel keeps an upright posture as the suspension system adapts to terrain unevenness, maximizing tractive and climbing performance and reducing energy consumption. The synthesis of the variable camber mechanism is described along with details of the mechanical design, showing the feasibility of this solution.
Ground segmentation is critical for a mobile robot to successfully accomplish its tasks in challenging environments. In this paper, we propose a self-supervised radar-vision classification system that allows an autonomous vehicle, operating in natural terrains, to automatically construct online a visual model of the ground and perform accurate ground segmentation. The system features two main phases: the training phase and the classification phase. The training stage relies on radar measurements to drive the selection of ground patches in the camera images, and to learn online the visual appearance of the ground. In the classification stage, the visual model of the ground can be used to perform high-level tasks such as image segmentation and terrain classification, as well as to solve radar ambiguities. The proposed method leads to the following main advantages: (a) a self-supervised training of the visual classifier, where the radar allows the vehicle to automatically acquire a set of ground samples, eliminating the need for time-consuming manual labeling; (b) the ground model can be continuously updated during the operation of the vehicle, thus making the system feasible for long-range and long-duration navigation applications. This paper details the proposed system and presents the results of experimental tests conducted in the field by using an unmanned vehicle.
Mobile robots are increasingly being used in challenging outdoor environments for applications that include construction, mining, agriculture, military and planetary exploration. In order to accomplish the planned task, it is critical that the motion control system ensure accuracy and robustness. The achievement of high performance on rough terrain is tightly connected with the minimization of vehicle-terrain dynamics effects such as slipping and skidding. This paper presents a cross-coupled controller for a 4-wheel-drive/4-wheel-steer robot, which optimizes the wheel motors' control algorithm to reduce synchronization errors that would otherwise result in wheel slip with conventional controllers. Experimental results, obtained with an all-terrain rover operating on agricultural terrain, are presented to validate the system. It is shown that the proposed approach is effective in reducing slippage and vehicle posture errors.
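An illustrative reading of cross-coupling for wheel-speed control: each motor command mixes its own tracking error with a synchronization error measuring how far that wheel's normalized speed departs from the group. The specific coupling rule and the gains below are assumptions, not the paper's controller:

```python
import numpy as np

def cross_coupled_speed_control(w_ref: np.ndarray, w_meas: np.ndarray,
                                kp: float = 2.0, kc: float = 1.5) -> np.ndarray:
    """Cross-coupled wheel-speed control sketch for a 4-wheel-drive robot.
    w_ref, w_meas: reference and measured wheel speeds (length-4 arrays).
    A wheel spinning faster than its reference relative to the other wheels
    (incipient slip) gets a negative synchronization correction."""
    e = w_ref - w_meas                              # individual tracking errors
    ratio = w_meas / np.where(w_ref != 0, w_ref, 1.0)
    e_sync = ratio.mean() - ratio                   # per-wheel synchronization error
    return kp * e + kc * e_sync * np.abs(w_ref)     # combined motor commands
```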
The performance of photovoltaic cells greatly depends on the incident angle of the solar radiation. Their power output can be dramatically enhanced using a solar tracker, which forces sunlight to be incident normally (perpendicularly) to the cells at all times. This paper describes the design of cost-effective solar trackers that are passively activated by the thermal expansion of a low-boiling-point liquid. Three possible embodiments of the solar tracker are presented, and their functional study is discussed in detail.
In this paper, an action planning algorithm is presented for a reconfigurable hybrid leg–wheel mobile robot. Hybrid leg–wheel robots have recently been receiving growing interest from the space community for planetary exploration, as they offer a solution to improve speed and mobility on uneven terrain. One critical issue connected with them is the study of an appropriate strategy to define when to use one locomotion mode over the other, depending on the soil properties and topology. Although this step is crucial to realize the hybrid mechanism's full potential, little attention has been devoted to this topic. Given an elevation map of the environment, we developed an action planner that selects the appropriate locomotion mode along an optimal path toward a point of scientific interest. This tool is helpful for the space mission team to decide the next move of the robot during exploration. First, a candidate path is generated based on topology and specification criteria functions. Then, switching actions are defined along this path based on the robot's performance in each motion mode. Finally, the path is rated based on the energy profile evaluated using a dynamic simulator. The proposed approach is applied to a concept prototype of a reconfigurable hybrid wheel–leg robot for planetary exploration through extensive simulations and real experiments.
In the last few years, driver assistance systems have been increasingly investigated in the automotive field to provide a higher degree of safety and comfort. Lane position determination plays a critical role toward the development of autonomous and computer-aided driving. This paper presents an accurate and robust method for detecting road markings, with applications to autonomous vehicles and driver support. Much like other lane detection systems, ours is based on computer vision and the Hough transform. The proposed approach, however, is unique in that it uses fuzzy reasoning to adaptively combine geometric and intensity information of the scene in order to handle varying driving and environmental conditions. Since our system uses fuzzy logic operations for lane detection and tracking, we call it "FLane." This paper also presents a method for building the initial lane model in real time, during vehicle motion, and without any a priori information. Details of the main components of the FLane system are presented along with experimental results obtained in the field under different lighting and road conditions.
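The Hough-transform backbone such a detector rests on can be sketched in a few lines of OpenCV; the fuzzy-reasoning fusion of geometric and intensity cues that distinguishes FLane is not reproduced here:

```python
import cv2
import numpy as np

def detect_lane_segments(bgr_frame: np.ndarray):
    """Hough backbone of a lane detector: edge extraction followed by a
    probabilistic Hough transform over the lower half of the image, where
    road markings are expected. Thresholds are illustrative."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    edges[: edges.shape[0] // 2, :] = 0        # mask out the sky / far field
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=50, minLineLength=40, maxLineGap=20)
    return [] if segments is None else segments[:, 0]   # rows of (x1, y1, x2, y2)
```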
This paper presents ongoing work toward a novel driving assistance system for a robotic wheelchair, intended for people paralyzed from the neck down. The user's head posture is tracked in order to project, with a pan-tilt-mounted laser, a colored spot on the ground ahead. The laser dot on the ground represents a potential close-range destination the operator wants to reach autonomously. The wheelchair is equipped with a low-cost depth camera (Kinect sensor) that builds a traversability map in order to determine whether the designated destination is reachable by the chair. If reachable, the red laser dot turns green, and the operator can validate the wheelchair destination via an electromyogram (EMG) device that detects the contraction of a specific muscle group. This validating action triggers the calculation of a path toward the laser-pointed target, based on the traversability map. The wheelchair is then controlled to follow this path autonomously. In the future, the stream of 3D point clouds acquired during the process will be used to map the environment and self-localize the wheelchair within it, in order to correct the pose estimate derived from the wheel encoders.
Environment awareness through advanced sensing systems is a major requirement for a mobile robot to operate safely, particularly when the environment is unstructured, as in an outdoor setting. In this paper, a multi-sensory approach is proposed for automatic traversable ground detection using 3D range sensors. Specifically, two classifiers are presented, one based on laser data and one based on stereovision. Both classifiers rely on a self-learning scheme to detect the general class of ground and feature two main stages: an adaptive training stage and a classification stage. In the training stage, the classifier learns to associate geometric appearance of 3D data with class labels. Then, it makes predictions based on past observations. The output obtained from the single-sensor classifiers is statistically combined exploiting their individual advantages in order to reach an overall better performance than could be achieved by using each of them separately. Experimental results, obtained with a test bed platform operating in a rural environment, are presented to validate this approach, showing its effectiveness for autonomous safe navigation.
Reliable assessment of terrain traversability using multi-sensory input is a key issue for driving automation, particularly when the domain is unstructured or semi-structured, as in natural environments. In this paper, a LIDAR-stereo combination is proposed to detect traversable ground in outdoor applications. The system integrates two self-learning classifiers, one based on LIDAR data and one based on stereo data, to detect the broad class of drivable ground. Each single-sensor classifier features two main stages: an adaptive training stage and a classification stage. During the training stage, the classifier automatically learns to associate geometric appearance of 3D data with class labels. Then, it makes predictions based on past observations. The outputs obtained from the single-sensor classifiers are statistically combined in order to exploit their individual strengths and reach an overall better performance than could be achieved by using each of them separately. Experimental results, obtained with a test bed platform operating in rural environments, are presented to validate and assess the performance of this approach, showing its effectiveness and potential applicability to autonomous navigation in outdoor contexts.
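One plausible form of the statistical combination step, assuming each classifier emits a per-cell ground probability and treating the two sensors as conditionally independent given the class (naive Bayes fusion with a uniform prior); the paper's exact rule may differ:

```python
import numpy as np

def fuse_ground_probabilities(p_lidar: np.ndarray, p_stereo: np.ndarray) -> np.ndarray:
    """Naive Bayes fusion of two classifiers' per-cell ground probabilities:
    multiply the odds from each sensor and convert back to a probability.
    Clipping keeps the odds finite when a classifier is overconfident."""
    p1 = np.clip(p_lidar, 1e-6, 1 - 1e-6)
    p2 = np.clip(p_stereo, 1e-6, 1 - 1e-6)
    odds = (p1 / (1 - p1)) * (p2 / (1 - p2))
    return odds / (1 + odds)

# Example: a confident LIDAR vote and a weak stereo vote reinforce each other.
print(fuse_ground_probabilities(np.array([0.9]), np.array([0.6])))  # ~0.93
```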
Future outdoor mobile robots will have to explore larger and larger areas, performing difficult tasks, while preserving, at the same time, their safety. This will primarily require advanced sensing and perception capabilities. Video sensors supply contact-free, precise measurements and are flexible devices that can be easily integrated with multi-sensor robotic platforms. Hence, they represent a potential answer to the need for new and improved perception capabilities for autonomous vehicles. One of the main applications of vision in mobile robotics is localization. For mobile robots operating on rough terrain, conventional dead reckoning techniques are not well suited, since wheel slipping, sinkage, and sensor drift may cause localization errors that accumulate without bound during the vehicle's travel. Conversely, video sensors are exteroceptive devices, that is, they acquire information from the robot's environment; therefore, vision-based motion estimates are independent of the knowledge of terrain properties and wheel-terrain interaction. Admittedly, like dead reckoning, vision can lead to an accumulation of errors; however, it has been shown that, compared to dead reckoning, it allows more accurate results and can be considered a promising solution to the problem of robust robot positioning in high-slip environments. As a consequence, in the last few years, several localization methods using vision have been developed. Among them, visual odometry algorithms, based on the tracking of visual features over subsequent images, have proved particularly effective. Accurate and reliable methods to sense slippage and sinkage are also desirable, since these effects compromise the vehicle's traction performance and energy consumption, and lead to gradual deviation of the robot from the intended path, possibly resulting in large drift and poor results of localization and control systems. For example, conventional dead-reckoning techniques are largely compromised, since they are based on the assumption that wheel revolutions can be translated into corresponding linear displacements. Thus, if one wheel slips, the associated encoder will register revolutions even though these do not correspond to a linear displacement of the wheel. Conversely, if one wheel skids, fewer encoder pulses will be counted. Slippage and sinkage measurements are also valuable for terrain identification according to classical terramechanics theory. This chapter investigates vision-based onboard technology to improve the mobility of robots on natural terrain. A visual odometry algorithm and two methods for online measurement of vehicle slip angle and wheel sinkage, respectively, are discussed. Test results are presented showing the performance of the proposed approaches using an all-terrain rover moving across uneven terrain.
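The visual odometry step described here, tracking features over subsequent images to estimate frame-to-frame motion, can be sketched with standard OpenCV building blocks (in the monocular case translation is recovered only up to scale):

```python
import cv2
import numpy as np

def frame_to_frame_motion(prev_gray: np.ndarray, curr_gray: np.ndarray,
                          K: np.ndarray):
    """One step of a feature-based visual odometry pipeline: track corners with
    pyramidal Lucas-Kanade optical flow, estimate the essential matrix with
    RANSAC, and recover the relative rotation R and unit-scale translation t.
    K is the 3x3 camera intrinsic matrix; metric scale must come from elsewhere
    (e.g. stereo, known camera height, or wheel odometry)."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                 qualityLevel=0.01, minDistance=8)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    good0 = p0[status.ravel() == 1]
    good1 = p1[status.ravel() == 1]
    E, inliers = cv2.findEssentialMat(good0, good1, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good0, good1, K, mask=inliers)
    return R, t
```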
Future mobile robots will have to explore larger and larger areas, performing difficult tasks, while preserving, at the same time, their safety. This will primarily require advanced sensing and perception capabilities. In this respect, laser range sensors represent a feasible and promising solution that is rapidly gaining interest in the robotics community. This paper describes recent work of the authors in hardware and algorithm development of a 3-D laser scanner for mobile robot applications, which features cost effectiveness, light weight, compactness, and low power consumption. The sensor allows an autonomous vehicle to scan its environment and to generate an internal hazard representation of the world in the form of a digital elevation map. Details of the device are presented along with a thorough performance analysis as a function of the relevant operational parameters, such as elevation and nodding angular rate. The generation of elevation models is also investigated, addressing the issues connected with the presence of overhanging objects and occluded areas.
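The core of elevation-map generation with a nodding 2D scanner is the conversion of each range sweep into 3D points. A sketch under assumed axis conventions (x forward, y left, z up, nodding about y; not necessarily the device's own frame), with a sparse max-elevation grid as the map:

```python
import numpy as np

def sweep_to_points(ranges: np.ndarray, beam_angles: np.ndarray,
                    nod_angle: float) -> np.ndarray:
    """Convert one 2D laser sweep, taken at a given nod (pitch) angle of the
    scanner, into 3D points in the sensor frame. ranges and beam_angles are
    per-beam arrays; positive nod_angle pitches the sweep plane downward."""
    x_plane = ranges * np.cos(beam_angles)     # points in the mirror plane
    y_plane = ranges * np.sin(beam_angles)
    cn, sn = np.cos(nod_angle), np.sin(nod_angle)
    return np.column_stack((x_plane * cn, y_plane, -x_plane * sn))

def elevation_map(points: np.ndarray, cell: float = 0.1) -> dict:
    """Digital elevation map as a sparse dict {(i, j): max z} over ground cells.
    Keeping the max per cell is one simple policy; handling overhangs, as the
    paper discusses, requires more than a single height value per cell."""
    dem = {}
    for x, y, z in points:
        key = (int(np.floor(x / cell)), int(np.floor(y / cell)))
        dem[key] = max(dem.get(key, -np.inf), z)
    return dem
```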
This paper introduces a novel method for slip angle estimation based on visually observing the traces produced by the wheels of a robot on soft, deformable terrain. The proposed algorithm uses a robust Hough transform enhanced by fuzzy reasoning to estimate the angle of inclination of the wheel trace with respect to the vehicle reference frame. Any deviation of the wheel track from the planned path of the robot suggests occurrence of sideslip that can be detected and, more interestingly, measured. In turn, the knowledge of the slip angle allows encoder readings affected by wheel slip to be adjusted and the accuracy of the position estimation system to be improved, based on an integrated longitudinal and lateral wheel–terrain slip model. The description of the visual algorithm and the odometry correction method is presented, and a comprehensive set of experimental results is included to validate this approach.
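Once the slip angle beta is available, folding it into the dead-reckoning update is straightforward: the velocity vector is rotated by beta relative to the vehicle heading. A minimal sketch; the paper's integrated longitudinal and lateral wheel-terrain slip model goes further than this:

```python
import numpy as np

def dead_reckoning_step(pose: np.ndarray, v_enc: float, omega: float,
                        beta: float, dt: float) -> np.ndarray:
    """Odometry update that folds in a visually estimated slip angle beta:
    the encoder-derived velocity v_enc is integrated along heading + beta,
    so lateral drift is accumulated instead of being silently ignored.
    pose = [x, y, theta]; omega is the yaw rate."""
    x, y, theta = pose
    x += v_enc * dt * np.cos(theta + beta)
    y += v_enc * dt * np.sin(theta + beta)
    theta += omega * dt
    return np.array([x, y, theta])
```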
Surface irregularity acts as a major excitation source in off-road driving, inducing vibration of the vehicle body through the tire assembly and the suspension system. When ground deformability is added, this excitation is modulated by the soil properties and operating conditions. The underlying mechanisms that govern ground behavior can be explained and modeled drawing on terramechanics. Based on this theory, a comprehensive quarter-car model of an off-road vehicle is presented that takes into account tire/soil interaction. The model can handle the general case of a compliant wheel rolling on compliant ground, and it allows ride and road-holding performance to be evaluated in the time and frequency domains. An extensive set of simulation tests is included to assess the impact of various surface roughness and ground deformability conditions through a parameter study, showing the potential of the proposed model to describe the behavior of off-road vehicles for design and performance optimization purposes.
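The terramechanics ingredient such a model builds on is the classical Bekker pressure-sinkage relation p = (k_c/b + k_phi) z^n. A one-function sketch with order-of-magnitude soil parameters that are illustrative, not the paper's:

```python
import numpy as np

def bekker_pressure(z: np.ndarray, b: float, k_c: float, k_phi: float,
                    n: float) -> np.ndarray:
    """Classical Bekker pressure-sinkage relation p = (k_c / b + k_phi) * z^n,
    with z the sinkage [m], b the smaller dimension of the contact patch [m],
    k_c and k_phi the cohesive and frictional moduli, and n the sinkage exponent."""
    return (k_c / b + k_phi) * np.power(z, n)

# Dry sand-like parameters (textbook order of magnitude, purely illustrative).
p = bekker_pressure(z=np.linspace(0.0, 0.05, 6), b=0.15,
                    k_c=1000.0, k_phi=1.5e6, n=1.1)
print(p)  # contact pressure [Pa] as sinkage grows from 0 to 5 cm
```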
This paper deals with the study of a land-yacht, that is, a ground vehicle propelled by wind energy. There is large interest in exploring alternative sources of energy for propulsion, and wind energy could be a feasible solution, being totally green, available and free. The idea envisaged by a land-yacht is that of using one or several flexible or rigid vertical wing-sails to produce a thrust force, which can eventually generate a higher travel velocity than the prevailing wind. A model of a three-wheel land-yacht is presented, capturing the main dynamic and aerodynamic aspects of the system behaviour. Simulations are included showing how environmental conditions, i.e. wind intensity and direction, influence the vehicle response and performance. In view of a robotic embodiment of the vehicle, a controller of the sail trim angle and front-wheel steer angle is also discussed for autonomous navigation.
Pavement distresses and potholes represent road hazards that can cause accidents and damage to vehicles. The latter may vary from a simple flat tyre to serious failures of the suspension system and, in extreme cases, to collisions with third-party vehicles that may even endanger passengers' lives. The primary scientific aim of this study is to investigate the problem of road hazard detection for driving assistance purposes, towards the final goal of implementing such a technology on future intelligent vehicles. The proposed approach uses a depth sensor to generate an environment representation in terms of a 3D point cloud that is then processed by a normal vector-based analysis and presented to the driver in the form of a traversability grid. Even small irregularities of the road surface can be successfully detected. This information can be used either to implement driver warning systems or to generate, using a cost-to-go planning method, optimal trajectories towards safe regions of the carriageway. The effectiveness of this approach is demonstrated on real road data acquired during an experimental campaign. Normal analysis and path generation are performed in post-processing. This approach has been demonstrated to be promising and may help to drastically reduce fatal traffic casualties, as a high percentage of road accidents are related to pavement distress.
Radar overcomes the shortcomings of laser, stereovision, and sonar because it can operate successfully in dusty, foggy, blizzard-blinding, and poorly lit scenarios. This paper presents a novel method for ground and obstacle segmentation based on radar sensing. The algorithm operates directly in the sensor frame, without the need for a separate synchronised navigation source, calibration parameters describing the location of the radar in the vehicle frame, or the geometric restrictions made in the previous main method in the field. Experimental results are presented in various urban scenarios to validate this approach, showing its potential applicability for advanced driving assistance systems and autonomous vehicle operations.
Autonomous vehicle operations in outdoor environments challenge robotic perception. Construction, mining, agriculture, and planetary exploration environments are examples in which the presence of dust, fog, rain, changing illumination due to low sun angles, and lack of contrast can dramatically degrade conventional stereo and laser sensing. Nonetheless, environment perception can still succeed under compromised visibility through the use of a millimeter-wave radar. Radar also allows for multiple object detection within a single beam, whereas other range sensors are limited to one target return per emission. However, radar has shortcomings as well, such as a large footprint, specularity effects, and limited range resolution, all of which may result in poor environment survey or difficulty in interpretation. This paper presents a novel method for ground segmentation using a millimeter-wave radar mounted on a ground vehicle. Issues relevant to short-range perception in an outdoor environment are described along with field experiments and a quantitative comparison to laser data. The ability to classify the ground is successfully demonstrated in clear and low-visibility conditions, and significant improvement in range accuracy is shown. Finally, conclusions are drawn on the utility of millimeter-wave radar as a robotic sensor for persistent and accurate perception in natural scenarios.
This paper presents a radar-vision classification approach to segment the visual scene into ground and nonground regions. The proposed system features two main phases: a radar-supervised training phase and a visual classification phase. The training stage relies on a radar-based classifier to drive the selection of ground patches in the camera images, and learn online the visual appearance of the ground. In the classification stage, the visual model of the ground is used for image segmentation. Experimental results, obtained with an unmanned ground vehicle operating in a rural environment, are presented to validate the proposed system.
In order to increase the level of driving automation in future cars, it is important to address critical issues, including road monitoring for irregularities and hazard detection. In this regard, the primary scientific aim of this research is to investigate the problem of road surface analysis in urban and extra-urban scenarios for driving assistance purposes, towards the final goal of implementing such technologies on future driverless cars. The proposed approach uses a range sensor to generate an environment representation in terms of a 3D point cloud that is then processed by a normal vector-based analysis. Even small irregularities of the road surface can be successfully detected, using such information to warn the driver or enable an autonomous vehicle to regulate its speed and change its course appropriately.
Autonomous driving is a challenging problem in mobile robotics, particularly when the domain is unstructured, as in an outdoor setting. In addition, field scenarios are often characterized by low visibility, due to changes in lighting conditions, weather phenomena including fog, rain, snow and hail, or the presence of dust clouds and smoke. Thus, advanced perception systems are primarily required for an off-road robot to sense and understand its environment, recognizing artificial and natural structures, topology, vegetation and paths, while ensuring, at the same time, robustness under compromised visibility. In this paper, the use of millimeter-wave radar is proposed as a possible solution for all-weather off-road perception. A self-learning approach is developed to train a classifier for radar image interpretation and autonomous navigation. The proposed classifier features two main stages: an adaptive training stage and a classification stage. During the training stage, the system automatically learns to associate the appearance of radar data with class labels. Then, it makes predictions based on past observations. The training set is continuously updated online using the latest radar readings, thus making it feasible to use the system for long-range and long-duration navigation over changing environments. Experimental results, obtained with an unmanned ground vehicle operating in a rural environment, are presented to validate this approach. A quantitative comparison with laser data is also included, showing good range accuracy and mapping ability. Finally, conclusions are drawn on the utility of millimeter-wave radar as a robotic sensor for persistent and accurate perception in natural scenarios.
A multi-sensor approach for terrain estimation is proposed using a combination of complementary optical sensors that cover the visible (VIS), near infrared (NIR) and infrared (IR) spectrum. The sensor suite includes a stereovision sensor, a VIS-NIR camera and a thermal camera, and it is intended to be mounted on board an agricultural vehicle, pointing downward to scan the portion of the terrain ahead. A method to integrate the different sensor data and create a multi-modal dense 3D terrain map is presented. The stereovision input is used to generate 3D point clouds that incorporate RGB-D information, whereas the VIS-NIR camera and the thermal sensor are employed to extract respectively spectral signatures and temperature information, to characterize the nature of the observed surfaces. Experimental tests carried out by an off-road vehicle are presented, showing the feasibility of the proposed approach.
For mobile robots operating in outdoor environments, perception is a critical task. Construction, mining, agriculture, and planetary exploration are common examples where the presence of dust, smoke, and rain, and the change in lighting conditions can dramatically degrade conventional vision and laser sensing. Nonetheless, environment perception can still succeed under compromised visibility through the use of a millimeter-wave radar. This paper presents a novel method for scene segmentation using a short-range radar mounted on a ground vehicle. Issues relevant to radar perception in an outdoor environment are described along with field experiments and a quantitative comparison to laser data. The ability to classify the scene and a significant improvement in range accuracy are demonstrated, showing the utility of millimeter-wave radar as a robotic sensor for persistent and accurate perception in natural scenarios.
In this paper, a novel approach for online terrain characterisation is presented using a skid-steer vehicle. In the context of this research, terrain characterisation refers to the estimation of physical parameters that affect the terrain's ability to support vehicular motion. These parameters are inferred from the modelling of the kinematic and dynamic behaviour of a skid-steer vehicle, which reveals the underlying relationships governing the vehicle-terrain interaction. The concept of slip track is introduced as a measure of the slippage experienced by the vehicle during turning motion. The proposed terrain estimation system includes common onboard sensors, that is, wheel encoders, electrical current sensors and a yaw rate gyroscope. Using these components, the system can characterise terrain online during normal vehicle operations. Experimental results obtained from different surfaces are presented to validate the system in the field, showing its effectiveness and potential benefits for implementing adaptive driving assistance systems or automatically updating the parameters of onboard control and planning algorithms.
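The slip track idea admits a compact reading from skid-steer kinematics: ideal turning gives yaw rate = (v_right - v_left) / track, so dividing the measured wheel-speed difference by the measured yaw rate yields an effective track whose excess over the geometric one grows with lateral slippage. A sketch under that interpretation:

```python
def slip_track(v_left: float, v_right: float, yaw_rate: float) -> float:
    """Effective (slip-expanded) track width of a skid-steer vehicle.
    Ideal kinematics: yaw_rate = (v_right - v_left) / track, so the ratio
    below recovers an effective track; its excess over the geometric track
    is a terrain-dependent measure of slippage during turning."""
    return (v_right - v_left) / yaw_rate

# Example: geometric track 0.6 m; a slip track of ~0.9 m indicates heavy skidding.
print(slip_track(v_left=0.8, v_right=1.4, yaw_rate=0.667))  # ~0.9 m
```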
Advances in precision agriculture greatly rely on innovative control and sensing technologies that allow service units to increase their level of driving automation while ensuring, at the same time, high safety standards. This paper deals with automatic terrain estimation and classification performed simultaneously by an agricultural vehicle during normal operations. Vehicle mobility and safety, and the successful implementation of important agricultural tasks including seeding, ploughing, fertilising and controlled traffic, depend on, or can be improved by, a correct identification of the terrain that is traversed. The novelty of this research lies in the fact that terrain estimation is performed using not only traditional appearance-based features, that is, colour and geometric properties, but also contact-based features, that is, measurements of physics-based dynamic effects that govern the vehicle-terrain interaction and that greatly affect its mobility. Experimental results obtained from an all-terrain vehicle operating on different surfaces are presented to validate the system in the field. It was shown that a terrain classifier trained with contact features was able to achieve a correct prediction rate of 85.1%, which is comparable to or better than that obtained with approaches using traditional feature sets. To further improve the classification performance, all feature sets were merged in an augmented feature space, reaching, for these tests, 89.1% correct predictions.
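A sketch of the augmented-feature-space idea: appearance and contact feature sets for the same terrain patches are simply concatenated before training a classifier. The choice of an SVM, the feature dimensions, and the synthetic data below are assumptions for illustration, not the paper's setup:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical feature matrices for the same N terrain patches: appearance
# features (colour/geometry) and contact features (vehicle-terrain dynamics).
rng = np.random.default_rng(0)
N = 200
appearance = rng.normal(size=(N, 6))
contact = rng.normal(size=(N, 4))
labels = rng.integers(0, 3, size=N)            # e.g. gravel / grass / asphalt

# Augmented feature space: concatenate the two sets before training.
X = np.hstack([appearance, contact])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```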
Autonomous vehicles are being increasingly adopted in agriculture to improve productivity and efficiency. For an autonomous agricultural vehicle to operate safely, environment perception and interpretation capabilities are fundamental requirements. The Ambient Awareness for Autonomous Agricultural Vehicles (QUAD-AV) project explores a multi-sensory approach to provide an autonomous agricultural vehicle with such ambient awareness. The proposed methods and systems aim at increasing the overall level of safety of an autonomous agricultural vehicle with respect to itself, to people and animals, as well as to property. The "obstacle detection" problem is specifically addressed within the QUAD-AV project. The paper focuses on the different selected technologies (vision/stereovision, thermography, ladar, microwave radar), presenting preliminary results for each.
Autonomous driving is a challenging problem, particularly when the domain is unstructured, as in an outdoor agricultural setting. Thus, advanced perception systems are primarily required to sense and understand the surrounding environment, recognizing artificial and natural structures, topology, vegetation and paths. In this paper, a self-learning framework is proposed to automatically train a ground classifier for scene interpretation and autonomous navigation based on multi-baseline stereovision. The use of rich 3D data is emphasized, where the sensor output includes range and color information of the surrounding environment. Two distinct classifiers are presented, one based on geometric data that can detect the broad class of ground and one based on color data that can further segment ground into subclasses. The geometry-based classifier features two main stages: an adaptive training stage and a classification stage. During the training stage, the system automatically learns to associate geometric appearance of 3D stereo-generated data with class labels. Then, it makes predictions based on past observations. It serves as well to provide training labels to the color-based classifier. Once trained, the color-based classifier is able to recognize similar terrain classes in stereo imagery. The system is continuously updated online using the latest stereo readings, thus making it feasible for long-range and long-duration navigation over changing environments. Experimental results, obtained with a tractor test platform operating in a rural environment, are presented to validate this approach, showing an average classification precision and recall of 91.0% and 77.3%, respectively.
Mobile robots are increasingly being employed in challenging outdoor applications including search and rescue for disaster recovery, construction, mining, agriculture, military and planetary exploration. In this kind of robotic application, the accuracy and robustness of the motion control system are greatly affected by the occurrence of undesired dynamic effects such as wheel slippage. In this paper, a cross-coupled controller is presented that can be integrated with 4-wheel-drive/4-wheel-steer robots to optimize the wheel motors' control algorithm and reduce synchronization errors that would otherwise result in wheel slip with conventional controllers. Experimental results, obtained with an all-terrain rover operating outdoors, are presented to validate this approach, showing its effectiveness in reducing slippage and vehicle posture errors.
In the last few years, many closed-loop control systems have been introduced in the automotive field to increase the level of safety and driving automation. For the integration of such systems, it is critical to estimate motion states and parameters of the vehicle that are not exactly known or that change over time. This paper presents a model-based observer to assess online key motion and mass properties. It uses common onboard sensors, i.e. a gyroscope and an accelerometer, and it aims to work during normal vehicle manoeuvres, such as turning motion and passing. First, basic lateral dynamics of the vehicle is discussed. Then, a parameter estimation framework is presented based on an Extended Kalman filter. Results are included to demonstrate the effectiveness of the estimation approach and its potential benefit towards the implementation of adaptive driving assistance systems or to automatically adjust the parameters of onboard controllers.
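A minimal skeleton of the Extended Kalman filter around which such an observer is built, with unknown parameters treated as extra random-walk states; the paper's vehicle model and measurement set are not reproduced here:

```python
import numpy as np

class EKF:
    """Minimal extended Kalman filter for joint state/parameter estimation:
    unknown parameters (e.g. mass properties) are appended to the state with
    a random-walk model, a common device for online identification."""

    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R

    def predict(self, f, F):
        """f: nonlinear state transition x -> x'; F: its Jacobian at self.x."""
        self.x = f(self.x)
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z, h, H):
        """z: measurement vector; h: measurement model; H: its Jacobian at self.x."""
        y = z - h(self.x)                       # innovation
        S = H @ self.P @ H.T + self.R           # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
```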
Imaging sensors are being increasingly used in autonomous vehicle applications for scene understanding. This paper presents a method that combines radar and monocular vision for ground modelling and scene segmentation by a mobile robot operating in outdoor environments. The proposed system features two main phases: a radar-supervised training phase and a visual classification phase. The training stage relies on radar measurements to drive the selection of ground patches in the camera images, and to learn online the visual appearance of the ground. In the classification stage, the visual model of the ground can be used to perform high-level tasks such as image segmentation and terrain classification, as well as to solve radar ambiguities. This method leads to the following main advantages: (a) self-supervised training of the visual classifier across the portion of the environment where radar overlaps with the camera field of view, which avoids time-consuming manual labelling and enables on-line implementation; (b) the ground model can be continuously updated during the operation of the vehicle, thus making the system feasible for long-range and long-duration applications. This paper details the algorithms and presents experimental tests conducted in the field using an unmanned vehicle.
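One way the "visual model of the ground" could be realized is as a running colour-histogram model, updated from radar-labeled patches and queried by histogram intersection; this concrete choice is an assumption for illustration, not the paper's stated representation:

```python
import numpy as np

class VisualGroundModel:
    """Self-supervised visual ground model sketch: colour histograms of image
    patches that the radar labels as ground are accumulated online; new patches
    are scored by histogram intersection. Radar geometry and patch selection
    are outside this sketch. Call add_ground_patch before ground_score."""

    def __init__(self, bins: int = 16):
        self.bins = bins
        self.model = None

    def _hist(self, patch: np.ndarray) -> np.ndarray:
        """Normalized 3D colour histogram of an H x W x 3 uint8 patch."""
        h, _ = np.histogramdd(patch.reshape(-1, 3), bins=self.bins,
                              range=[(0, 256)] * 3)
        return (h / h.sum()).ravel()

    def add_ground_patch(self, patch: np.ndarray, alpha: float = 0.1) -> None:
        """Exponential running update with the latest radar-labeled patch."""
        h = self._hist(patch)
        self.model = h if self.model is None else (1 - alpha) * self.model + alpha * h

    def ground_score(self, patch: np.ndarray) -> float:
        """Histogram intersection in [0, 1]; higher means more ground-like."""
        return float(np.minimum(self._hist(patch), self.model).sum())
```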
For mobile robots driving across natural terrains, it is critical that the dynamic effects occurring at the wheel–terrain interface be taken into account. One of the most prevalent of these effects is wheel sinkage. Wheels can sink to depths sufficient to prevent further motion, possibly leading to danger of entrapment. This paper presents an algorithm for visual estimation of sinkage. We call it the visual sinkage estimation (VSE) method. It assumes the presence of a monocular camera and an artificial pattern, attached to the wheel side, to determine the terrain contact angle. This paper also introduces an analytical model for wheel sinkage in deformable terrain based on terramechanics. To validate the VSE module, firstly, several tests are performed on a single-wheel test bed, under different operating conditions. Secondly, the effectiveness of the proposed approach is proved in real contexts, employing an all-terrain rover travelling on a sandy beach.
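The geometric relation at the heart of visual sinkage estimation: given the wheel radius and the visually measured terrain contact angle, sinkage follows as z = r(1 - cos(theta)):

```python
import numpy as np

def sinkage_from_contact_angle(wheel_radius_m: float,
                               contact_angle_rad: float) -> float:
    """Wheel sinkage from the visually estimated terrain contact angle:
    z = r * (1 - cos(theta)), with theta measured at the wheel center between
    the vertical and the point where the rim meets the terrain."""
    return wheel_radius_m * (1.0 - np.cos(contact_angle_rad))

# Example: a 10 cm radius wheel with a 40 deg contact angle is sunk ~2.3 cm.
print(sinkage_from_contact_angle(0.10, np.deg2rad(40.0)))
```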
A long-range visual perception system is presented based on a multi-baseline stereo frame. The system is intended to be used onboard an autonomous vehicle operating in natural settings, such as an agricultural environment, to perform 3D scene reconstruction and segmentation tasks. First, the multi-baseline stereo sensor and the associated processing algorithms are described; then, a self-learning ground classifier is applied to segment the scene into ground and non-ground regions, using geometric features, without any a priori assumption on the terrain characteristics. Experimental results obtained with an off-road vehicle operating in an agricultural test field are presented to validate the proposed approach. It is shown that the use of a multi-baseline stereo frame allows for accurate reconstruction and scene segmentation at a wide range of viewing distances, thus increasing the overall flexibility and reliability of the perception system.
Purpose – The purpose of this paper is to evaluate the locomotion performance of all-terrain rovers employing a rocker-type suspension system. Design/methodology/approach – In this paper, a robot with advanced mobility features is presented and its locomotion performance is evaluated, following an analytical approach via extensive simulations. The vehicle features an independently controlled four-wheel-drive/four-wheel-steer architecture and it also employs a passive rocker-type suspension system that improves the ability to traverse uneven terrain. An overview of modeling techniques for rover-like vehicles is introduced. First, a method for formulating a kinematic model of an articulated vehicle is presented. Next, a method for expressing a quasi-static model of forces acting on the robot is described. A modified rocker-type suspension is also proposed that enables wheel camber change, allowing each wheel to keep an upright posture as the suspension conforms to ground unevenness. Findings – The proposed models can be used to assess the locomotion performance of a mobile robot on rough terrain for design, control and path planning purposes. The advantage of the rocker-type suspension over conventional spring-type counterparts is demonstrated. The variable camber suspension is shown to be effective in improving a robot's traction and climbing ability. Research limitations/implications – The paper can be of great value when studying and optimizing the locomotion performance of mobile robots on rough terrain. These models can be used as a basis for advanced design, control and motion planning. Originality/value – The paper describes an analytical approach for the study of the mobility characteristics of vehicles endowed with articulated suspension systems. A variable camber mechanism is also presented.