Annalisa Milella
Role
Level III - Researcher
Organization
Consiglio Nazionale delle Ricerche
Department
Not Available
Scientific Area
AREA 09 - Industrial and Information Engineering
Scientific Disciplinary Sector
ING-INF/05 - Information Processing Systems
ERC Sector, Level 1
PE - PHYSICAL SCIENCES AND ENGINEERING
ERC Sector, Level 2
PE7 Systems and Communication Engineering: Electrical, electronic, communication, optical and systems engineering
ERC Sector, Level 3
PE7_10 Robotics
In natural outdoor settings, advanced perception systems and learning strategies are a major requirement for an autonomous vehicle to sense and understand the surrounding environment, recognizing artificial and natural structures, topology, vegetation and drivable paths. Stereo vision has been used extensively for this purpose. However, conventional single-baseline stereo does not scale well to different depths of perception. In this paper, a multi-baseline stereo frame is introduced to perform accurate 3D scene reconstruction from near range up to several meters away from the vehicle. A classifier that segments the scene into navigable and non-navigable areas based on 3D data is also described. It incorporates geometric features within an online self-learning framework to model and identify traversable ground, without any a priori assumption on the terrain characteristics. The ground model is automatically retrained during robot motion, thus ensuring adaptation to environmental changes. The proposed strategy is of general applicability for robot perception, and it can be implemented using any range sensor. Here, it is demonstrated for stereo-based data acquired by the multi-baseline device. Experimental tests, carried out in a rural environment with an off-road vehicle, are presented. It is shown that the use of a multi-baseline stereo frame allows for accurate reconstruction and scene segmentation at a wide range of visible distances, thus increasing the overall flexibility and reliability of the perception system.
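As background to the multi-baseline design choice, the depth-versus-baseline trade-off follows from standard stereo geometry (not restated in the abstract):

```latex
Z = \frac{f\,B}{d}, \qquad \delta Z \approx \frac{Z^{2}}{f\,B}\,\delta d
```

where Z is the depth of a point, f the focal length in pixels, B the baseline, d the disparity and \delta d the disparity error. Since depth uncertainty grows quadratically with distance and shrinks with the baseline, pairing a short baseline for the near field with longer baselines for the far field keeps the reconstruction accurate over the whole range of interest.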
The field of multi-robot systems is one of the main research topics in robotics, as robot networks offer great advantages in terms of reliability and efficiency in many application domains. This paper focuses on the problem of mutual localization and 3D cooperative environment mapping using a heterogeneous multi-robot team. The proposed algorithm relies on the exchange of local maps and is totally distributed; no assumption of a common reference frame is made. The developed strategy is robust to failures, scalable with the number of robots in the network, and has been validated through an experimental campaign.
Distributed networks of sensors have been recognized to be a powerful tool for developing fully automated systems that monitor environments and human activities. Nevertheless, problems such as active control of heterogeneous sensors for high-level scene interpretation and mission execution are open. This paper presents the authors' ongoing research about design and implementation of a distributed heterogeneous sensor network that includes static cameras and multi-sensor mobile robots. The system is intended to provide robot-assisted monitoring and surveillance of large environments. The proposed solution exploits a distributed control architecture to enable the network to autonomously accomplish general-purpose and complex monitoring tasks. The nodes can both act with some degree of autonomy and cooperate with each other. The paper describes the concepts underlying the designed system architecture and presents the results obtained working on its components, including some simulations performed in a realistic scenario to validate the distributed target tracking algorithm.
A long range visual perception system is presented based on a multi-baseline stereo frame. The system is intended to be used onboard an autonomous vehicle operating in natural settings, such as an agricultural environment, to perform 3D scene reconstruction and segmentation tasks. First, the multi-baseline stereo sensor and the associated processing algorithms are described; then, a self-learning ground classifier is applied to segment the scene into ground and non-ground regions, using geometric features, without any a priori assumption on the terrain characteristics. Experimental results obtained with an off-road vehicle operating in an agricultural test field are presented to validate the proposed approach. It is shown that the use of a multi-baseline stereo frame allows for accurate reconstruction and scene segmentation at a wide range of viewing distances, thus increasing the overall flexibility and reliability of the perception system.
The measurement of the growth state and health status of single plants or even single parts of the plants within a crop to conduct precision farming actions is a difficult task. We address this challenge by adopting a multi-sensor suite, which can be used on several sensor platforms. Based on experimental field studies in relevant agricultural environments, we show how the acquired hyperspectral, LIDAR, stereo and thermal image data can be processed and classified to get a comprehensive understanding of the agricultural acreage.
Accurate soil mapping is critical for a highly-automated agricultural vehicle to successfully accomplish important tasks including seeding, ploughing, fertilising and controlled traffic, with limited human supervision, ensuring at the same time high safety standards. In this research, a multi-sensor ground mapping and characterisation approach is proposed, whereby data coming from heterogeneous but complementary sensors, mounted on-board an unmanned rover, are combined to generate a multi-layer map of the environment and specifically of the supporting ground. The sensor suite comprises both exteroceptive and proprioceptive devices. Exteroceptive sensors include a stereo camera, a visible and near-infrared camera and a thermal imager. Proprioceptive data consist of the vertical acceleration of the vehicle sprung mass as acquired by an inertial measurement unit. The paper details the steps for the integration of the different sensor data into a unique multi-layer map and discusses a set of exteroceptive and proprioceptive features for soil characterisation and change detection. Experimental results obtained with an all-terrain vehicle operating on different ground surfaces are presented. It is shown that the proposed technologies could be potentially used to develop all-terrain self-driving systems in agriculture. In addition, multi-modal soil maps could be useful to feed farm management systems that would present to the user various soil layers incorporating colour, geometric, spectral and mechanical properties.
In the last decades, sensor networks have received significant attention in the field of Ambient Intelligence (AmI) for surveillance and assisted living applications, as they provide a powerful tool to capture relevant information about environments and people activities. Mobile robots hold the promise to enhance the potential of sensor networks, towards the development of intelligent systems that are able not only to detect events, but also to actively intervene on the environment accordingly. This paper presents a Distributed Ambient Intelligence Architecture (DAmIA) aiming at integrating multi-sensor robotic platforms with Wireless Sensor Networks (WSNs). It is based on the Robot Operating System (ROS), and provides a flexible and scalable software infrastructure extendible to different AmI scenarios. The paper describes the proposed architecture and presents experimental tests, showing the feasibility of the system in the context of Ambient Assisted Living (AAL).
Reliable terrain analysis is a key requirement for a mobile robot to operate safely in challenging environments, such as in natural outdoor settings. In these contexts, conventional navigation systems that assume a priori knowledge of the terrain geometric properties, appearance properties, or both, would most likely fail, due to the high variability of the terrain characteristics and environmental conditions. In this paper, a self-learning framework for ground detection and classification is introduced, where the terrain model is automatically initialized at the beginning of the vehicle's operation and progressively updated online. The proposed approach is of general applicability for a robot's perception purposes, and it can be implemented using a single sensor or combining different sensor modalities. In the context of this paper, two ground classification modules are presented: one based on radar data, and one based on monocular vision and supervised by the radar classifier. Both of them rely on online learning strategies to build a statistical feature-based model of the ground, and both implement a Mahalanobis distance classification approach for ground segmentation in their respective fields of view. In detail, the radar classifier analyzes radar observations to obtain an estimate of the ground surface location based on a set of radar features. The output of the radar classifier serves as well to provide training labels to the visual classification module. Once trained, the vision-based classifier is able to discriminate between ground and nonground regions in the entire field of view of the camera. It can also detect multiple terrain components within the broad ground class. Experimental results, obtained with an unmanned ground vehicle operating in a rural environment, are presented to validate the system. It is shown that the proposed approach is effective in detecting drivable surface, reaching an average classification accuracy of about 80% on the entire video frame with the additional advantage of not requiring human intervention for training or a priori assumption on the ground appearance.
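The abstract does not detail the classifier internals; the following minimal sketch only illustrates a Mahalanobis-distance ground model with online retraining of the kind described. The feature extraction, threshold value and update policy are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a Mahalanobis-distance ground classifier with online
# retraining. Features, threshold and update policy are illustrative.
import numpy as np

class GroundModel:
    def __init__(self, threshold=3.0):
        self.mean = None          # mean feature vector of the ground class
        self.cov_inv = None       # inverse covariance of the ground class
        self.threshold = threshold

    def fit(self, ground_features):
        """ground_features: (N, D) array of features from patches labelled as ground."""
        self.mean = ground_features.mean(axis=0)
        cov = np.cov(ground_features, rowvar=False)
        self.cov_inv = np.linalg.pinv(cov)

    def mahalanobis(self, features):
        diff = features - self.mean
        return np.sqrt(np.einsum('ij,jk,ik->i', diff, self.cov_inv, diff))

    def predict(self, features):
        """Return True for patches whose distance to the ground model is small."""
        return self.mahalanobis(features) < self.threshold

# During operation the model can be retrained with the most recent patches
# classified as ground, so it tracks environmental changes:
# model.fit(np.vstack([recent_ground_features, new_ground_features]))
```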
This paper presents a novel intelligent system for the automatic visual inspection of vessels consisting of three processing levels: (a) data acquisition: images are collected using a magnetic climbing robot equipped with a low-cost monocular camera for hull inspection; (b) feature extraction: all the images are characterized by 12 features consisting of color moments in each channel of the HSV space; (c) classification: a novel tool, based on an ensemble of classifiers, is proposed to classify sub-images as rust or non-rust. This paper provides a helpful roadmap to guide future research on the detection of rusting of metals using image processing.
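One plausible reading of the 12-dimensional feature vector is four colour moments per HSV channel; which moments are used is not stated in the abstract, so the sketch below assumes mean, standard deviation, skewness and kurtosis.

```python
# Illustrative extraction of 12 colour-moment features from an HSV sub-image
# (4 assumed moments x 3 channels).
import cv2
import numpy as np
from scipy.stats import skew, kurtosis

def color_moment_features(bgr_patch):
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    features = []
    for c in range(3):                      # H, S, V channels
        values = hsv[:, :, c].ravel().astype(np.float64)
        features += [values.mean(),
                     values.std(),
                     skew(values),
                     kurtosis(values)]
    return np.array(features)               # 12-dimensional feature vector
```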
Mobility and multi-functionality have been recognized as being basic requirements for the development of fully automated surveillance systems in realistic scenarios. Nevertheless, problems such as active control of heterogeneous mobile agents, integration of information from fixed and moving sensors for high-level scene interpretation, and mission execution are open. This paper describes recent and current research of the authors concerning the design and implementation of a multi-agent surveillance system, using static cameras and mobile robots. The proposed solution takes advantage of a distributed control architecture that allows the agents to autonomously handle general-purpose tasks, as well as more complex surveillance issues. The various agents can either take decisions and act with some degree of autonomy, or cooperate with each other. This paper presents an overview of the system architecture and of the algorithms involved in developing such an autonomous, multi-agent surveillance system.
In this research, adaptive perception for driving automation is discussed so as to enable a vehicle to automatically detect driveable areas and obstacles in the scene. It is especially designed for outdoor contexts where conventional perception systems that rely on a priori knowledge of the terrain's geometric properties, appearance properties, or both, are prone to fail, due to the variability in the terrain properties and environmental conditions. In contrast, the proposed framework uses a self-learning approach to build a model of the ground class that is continuously adjusted online to reflect the latest ground appearance. The system also features high flexibility, as it can work using a single sensor modality or a multi-sensor combination. In the context of this research, different embodiments have been demonstrated using range data coming from either a radar or a stereo camera, and adopting self-supervised strategies where monocular vision is automatically trained by radar or stereo vision. A comprehensive set of experimental results, obtained with different ground vehicles operating in the field, is presented to validate and assess the performance of the system.
This paper presents a novel multi-sensor terrain classification approach using visual and proprioceptive data, to support autonomous operations by an agricultural vehicle. The novelty of the proposed method lies in the possibility to identify the terrain type relying not only on classical appearance-based features, such as color and geometric properties, but also on contact-based features, which measure the dynamic effects related to the vehicle-terrain interaction and directly affect the vehicle's mobility. Using methods from the machine learning community, it is shown that it is not only possible to classify various kinds of terrain using either sensor modality, but that these modalities are complementary to each other, and can therefore be combined to improve classification results.
In the last few years, robotic technology has been increasingly employed in agriculture to develop intelligent vehicles that can improve productivity and competitiveness. Accurate and robust environmental perception is a critical requirement to address unsolved issues including safe interaction with field workers and animals, obstacle detection in controlled traffic applications, crop row guidance, surveying for variable rate applications, and situation awareness, in general, towards increased process automation. Given the variety of conditions that may be encountered in the field, no single sensor exists that can guarantee reliable results in every scenario. The development of a multi-sensory perception system to increase the ambient awareness of an agricultural vehicle operating in crop fields is the objective of the Ambient Awareness for Autonomous Agricultural Vehicles (QUAD-AV) project. Different onboard sensor technologies, namely stereovision, LIDAR, radar, and thermography, are considered. Novel methods for their combination are proposed to automatically detect obstacles and discern traversable from non-traversable areas. Experimental results, obtained in agricultural contexts, are presented showing the effectiveness of the proposed methods.
Unmanned aerial vehicles are being increasingly used in challenging applications, such as environmental monitoring and surveying, precision agriculture, and mitigation actions in disaster sites. Visual cameras constitute an important component of a UAV sensor suite, allowing the vehicle to perceive the surrounding environment, and perform tasks such as mapping, 3D reconstruction, and path planning. In this work, a simulator reproducing the output of a visual camera transported onboard a UAV is proposed. The objective of the simulator is to assist the user in defining the most suitable camera configuration before going through field development and testing. Special focus is given to the realistic rendering of the environment. In this paper, first, the main elements of the simulator, including environment modeling, vehicle trajectory and camera modeling are described. Then, the use of the simulator to analyze the influence of some main camera parameters on the output of the imaging process is shown. Finally, the application of the simulator for the generation of aerial mosaics using Google Earth data is presented.
The development of intelligent surveillance systems is an active research area. In this context, mobile and multifunctional robots are generally adopted as means to reduce the environment structuring and the number of devices needed to cover a given area. Nevertheless, the number of different sensors mounted on the robot, and the number of complex tasks related to exploration, monitoring, and surveillance make the design of the overall system extremely challenging. In this paper, we present our autonomous mobile robot for surveillance of indoor environments. Our approach proposes a system to autonomously handle general purpose tasks as well as complex surveillance issues simultaneously. It is shown that the proposed robotic surveillance scheme successfully addresses a number of basic problems related to environment mapping, localization and autonomous navigation, as well as surveillance tasks, like scene processing to detect abandoned or removed objects and people detection and following. The feasibility of the approach is demonstrated through experimental tests using a multisensor platform equipped with a monocular camera, a laser scanner, encoders, and an RFID device. Real world applications of the proposed system include surveillance of wide areas (e.g. airports and museums) and buildings, and monitoring of safety equipment.
Ground segmentation is critical for a mobile robot to successfully accomplish its tasks in challenging environments. In this paper, we propose a self-supervised radar-vision classification system that allows an autonomous vehicle, operating in natural terrains, to automatically construct online a visual model of the ground and perform accurate ground segmentation. The system features two main phases: the training phase and the classification phase. The training stage relies on radar measurements to drive the selection of ground patches in the camera images, and learn online the visual appearance of the ground. In the classification stage, the visual model of the ground can be used to perform high level tasks such as image segmentation and terrain classification, as well as to solve radar ambiguities. The proposed method leads to the following main advantages: (a) a self-supervised training of the visual classifier, where the radar allows the vehicle to automatically acquire a set of ground samples, eliminating the need for time-consuming manual labeling; (b) the ground model can be continuously updated during the operation of the vehicle, thus making it feasible to use the system in long-range and long-duration navigation applications. This paper details the proposed system and presents the results of experimental tests conducted in the field by using an unmanned vehicle.
The Risso's dolphin is a widely distributed species, found in deep temperate and tropical waters. Estimates of its abundance are available in a few regions, details of its distribution are lacking, and its status in the Mediterranean Sea is ranked as Data Deficient by the IUCN Red List. In this paper, a synergy between bio-ecological analysis and innovative strategies has been applied to construct a digital platform, DolFin. It contains a collection of sighting data and geo-referred photos of Grampus griseus, acquired from 2013 to 2016 in the Gulf of Taranto (Northern Ionian Sea, North-eastern Central Mediterranean Sea), and the first automated tool for Smart Photo Identification of the Risso's dolphin (SPIR). This approach provides the capability to collect and analyse significant amounts of data acquired over wide areas and extended periods of time. This effort establishes the baseline for future large-scale studies, essential to providing further information on the distribution of G. griseus. Our data and analysis results corroborate the hypothesis of a resident Risso's dolphin population in the Gulf of Taranto, showing site fidelity in a relatively restricted area characterized by a steep slope to around 800 m in depth, north of the Taranto Valley canyon system.
In the last few years, driver assistance systems have been increasingly investigated in the automotive field to provide a higher degree of safety and comfort. Lane position determination plays a critical role toward the development of autonomous and computer-aided driving. This paper presents an accurate and robust method for detecting road markings with applications to autonomous vehicles and driver support. Much like other lane detection systems, ours is based on computer vision and the Hough transform. The proposed approach, however, is unique in that it uses fuzzy reasoning to adaptively combine geometric and intensity information of the scene in order to handle varying driving and environmental conditions. Since our system uses fuzzy logic operations for lane detection and tracking, we call it "FLane." This paper also presents a method for building the initial lane model in real time, during vehicle motion, and without any a priori information. Details of the main components of the FLane system are presented along with experimental results obtained in the field under different lighting and road conditions.
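The FLane fuzzy rule base is not described in the abstract; the sketch below only illustrates the general idea of extracting lane-marking candidates with the Hough transform and scoring them with a combination of geometric and intensity cues. The fixed weighting shown is an arbitrary placeholder for the fuzzy reasoning, and all parameter values are assumptions.

```python
# Sketch of Hough-based lane-marking candidate detection with a simple
# weighted combination of geometric and intensity cues (placeholder for
# the fuzzy combination used by FLane).
import cv2
import numpy as np

def detect_lane_candidates(gray, expected_angle_deg=75.0):
    edges = cv2.Canny(gray, 80, 200)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=10)
    candidates = []
    if lines is None:
        return candidates
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        # geometric cue: closeness to the expected lane orientation
        geom_score = max(0.0, 1.0 - abs(angle - expected_angle_deg) / 45.0)
        # intensity cue: lane markings are brighter than the road surface
        num = max(int(np.hypot(x2 - x1, y2 - y1)), 2)
        xs = np.linspace(x1, x2, num).astype(int)
        ys = np.linspace(y1, y2, num).astype(int)
        intensity_score = gray[ys, xs].mean() / 255.0
        score = 0.5 * geom_score + 0.5 * intensity_score
        candidates.append(((x1, y1, x2, y2), score))
    return sorted(candidates, key=lambda c: c[1], reverse=True)
```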
This poster presents the mapping and semantic labelling of vineyards from a caterpillar transport system, using a compact sensor suite comprising RGB stereo imaging, thermography, and two hyperspectral cameras.
Nowadays, Internet of Things (IoT) and robotic systems are key drivers of technological innovation trends. Leveraging the advantages of both technologies, IoT-aided robotic systems can disclose a disruptive potential of opportunities. The present contribution provides an experimental analysis of an IoT-aided robotic system for environmental monitoring. To this end, an experimental testbed has been developed. It is composed of: (i) an IoT device connected to (ii) an Unmanned Aerial Vehicle (UAV) which executes a patrolling mission within a specified area where (iii) an IoT network has been deployed to sense environmental data. An extensive experimental campaign has been carried out to assess the pros and cons of the adopted technologies. The key results of our analysis show that: (i) the UAV does not incur any significant overhead due to the on-board IoT equipment, and (ii) the overall Quality of Service (QoS), expressed in terms of network joining time, data retrieval delay and Packet Loss Ratio (PLR), satisfies the mission requirements. These results enable further development in larger-scale environments.
Plant phenotyping, that is, the quantitative assessment of plant traits including growth, morphology, physiology, and yield, is a critical aspect towards efficient and effective crop management. Currently, plant phenotyping is a manually intensive and time-consuming process, which involves human operators making measurements in the field, based on visual estimates or using hand-held devices. In this work, methods for automated grapevine phenotyping are developed, aiming at canopy volume estimation and bunch detection and counting. It is demonstrated that both measurements can be effectively performed in the field using a consumer-grade depth camera mounted on-board an agricultural vehicle. First, a dense 3D map of the grapevine row, augmented with its color appearance, is generated, based on infrared stereo reconstruction. Then, different computational geometry methods are applied and evaluated for plant-by-plant volume estimation. The proposed methods are validated through field tests performed in a commercial vineyard in Switzerland. It is shown that different automatic methods lead to different canopy volume estimates, meaning that new standard methods and procedures need to be defined and established. Four deep learning frameworks, namely AlexNet, VGG16, VGG19 and GoogLeNet, are also implemented and compared to segment visual images acquired by the RGB-D sensor into multiple classes and recognize grape bunches. Field tests are presented showing that, despite the poor quality of the input images, the proposed methods are able to correctly detect fruits, with a maximum accuracy of 91.52%, obtained by the VGG19 deep neural network.
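The computational geometry methods compared in the paper are not listed in the abstract; one plausible example, sketched below, is a per-plant convex-hull volume computed from the reconstructed point cloud.

```python
# Illustrative per-plant canopy volume estimate from a 3D point cloud using
# the convex hull; this is only one possible computational-geometry method.
import numpy as np
from scipy.spatial import ConvexHull

def canopy_volume(points_xyz):
    """points_xyz: (N, 3) array of canopy points for a single plant, in metres."""
    hull = ConvexHull(points_xyz)
    return hull.volume        # volume in cubic metres

# Example with synthetic data:
# points = np.random.rand(500, 3) * [0.8, 0.5, 1.6]   # rough canopy extent
# print(canopy_volume(points))
```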
Environment awareness through advanced sensing systems is a major requirement for a mobile robot to operate safely, particularly when the environment is unstructured, as in an outdoor setting. In this paper, a multi-sensory approach is proposed for automatic traversable ground detection using 3D range sensors. Specifically, two classifiers are presented, one based on laser data and one based on stereovision. Both classifiers rely on a self-learning scheme to detect the general class of ground and feature two main stages: an adaptive training stage and a classification stage. In the training stage, the classifier learns to associate geometric appearance of 3D data with class labels. Then, it makes predictions based on past observations. The output obtained from the single-sensor classifiers is statistically combined exploiting their individual advantages in order to reach an overall better performance than could be achieved by using each of them separately. Experimental results, obtained with a test bed platform operating in a rural environment, are presented to validate this approach, showing its effectiveness for autonomous safe navigation.
Reliable assessment of terrain traversability using multi-sensory input is a key issue for driving automation, particularly when the domain is unstructured or semi-structured, as in natural environments. In this paper, a LIDAR-stereo combination is proposed to detect traversable ground in outdoor applications. The system integrates two self-learning classifiers, one based on LIDAR data and one based on stereo data, to detect the broad class of drivable ground. Each single-sensor classifier features two main stages: an adaptive training stage and a classification stage. During the training stage, the classifier automatically learns to associate geometric appearance of 3D data with class labels. Then, it makes predictions based on past observations. The outputs obtained from the single-sensor classifiers are statistically combined in order to exploit their individual strengths and reach an overall better performance than could be achieved by using each of them separately. Experimental results, obtained with a test bed platform operating in rural environments, are presented to validate and assess the performance of this approach, showing its effectiveness and potential applicability to autonomous navigation in outdoor contexts.
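The abstract does not specify the statistical fusion rule; the sketch below assumes independent per-cell ground probabilities from the two classifiers combined in log-odds (naive-Bayes) form over a common grid, purely for illustration.

```python
# Sketch of a statistical combination of two single-sensor ground classifiers.
# Independent log-odds (naive-Bayes) fusion over a common grid is assumed.
import numpy as np

def fuse_ground_probabilities(p_lidar, p_stereo, prior=0.5, eps=1e-6):
    """p_lidar, p_stereo: arrays of per-cell ground probabilities in [0, 1]."""
    def log_odds(p):
        p = np.clip(p, eps, 1.0 - eps)
        return np.log(p / (1.0 - p))
    fused = log_odds(p_lidar) + log_odds(p_stereo) - log_odds(np.full_like(p_lidar, prior))
    return 1.0 / (1.0 + np.exp(-fused))   # back to probabilities

# Cells with fused probability above 0.5 would be labelled as drivable ground.
```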
In recent years, the study of sensor and robot networks for Ambient Assisted Living applications has gained growing attention. In this work, a multimodal user interface for a distributed ambient intelligence architecture is presented. The user interface has been designed with the specific characteristics of each user in mind, aiming at ease of use even for people without familiarity with technology. The system has been tested with elderly users: the acquired experience allows the analysis of problems such as user acceptance, usability and suitability of these systems. The tests have shown a positive attitude of the users towards the interface, especially in relation to the voice interface.
An autonomous mobile robotic system for surveillance of indoor environments is presented. Applications of the proposed system include surveillance of large environments such as airports, museums and warehouses. A multi-layer decision scheme controls different surveillance tasks. In particular, this paper focuses on two main functions: building an augmented map of the environment and monitoring specific areas of interest to detect unexpected changes based on visual and laser data. The effectiveness of the system is demonstrated through experimental tests. The results are promising, proving the proposed methods to be successful in detecting either new or removed objects in the surveyed scene. It is also shown that the robotic surveillance system is able to address a number of specific problems related to environment mapping, autonomous navigation and scene processing, and could be effectively employed for real-world surveillance applications.
This paper presents a radar-vision classification approach to segment the visual scene into ground and nonground regions. The proposed system features two main phases: a radar-supervised training phase and a visual classification phase. The training stage relies on a radar-based classifier to drive the selection of ground patches in the camera images, and learn online the visual appearance of the ground. In the classification stage, the visual model of the ground is used for image segmentation. Experimental results, obtained with an unmanned ground vehicle operating in a rural environment, are presented to validate the proposed system.
We live in the era of the fourth industrial revolution, where everything - from small objects to entire factories - is smart and connected, and we are also strongly accustomed to comforts and services, but pressing questions are arising. What are the consequences of human activities on terrestrial and aquatic/marine systems? And how does the loss of biodiversity alter the integrity and functioning of ecosystems? It is reasonable to assert that there are correlations between anthropic pressure, the degradation of natural habitats and the loss of biodiversity. In fact, the alteration of ecosystem structure affects ecosystem services and resilience, the level of perturbation that an ecosystem can withstand without shifting to an alternative status providing fewer benefits to humans [1]. In that regard, research studies on cetacean species distribution and conservation status, along with their habitats, can give an idea of the current impact of human pressure on marine biodiversity and its ecosystem services, dolphins and whales both being key species in marine food webs. Although the inherent complexity of food-web dynamics often makes it difficult to investigate and quantify the role of marine mammals in the ecosystem [2], the challenge of investigating their ecological significance is compelling and highly informative when facing human-induced environmental changes from local to global scales. For this reason, dedicated research activities have been performed in recent years to standardize best practices for sampling and collecting scientifically relevant information on the cetaceans in the Gulf of Taranto (Northern Ionian Sea in the Central-Eastern Mediterranean Sea) [3, 4, 5, 6]. Standardized scientific protocols and technological innovations have been introduced by integrating interdisciplinary approaches: a genetic study of the dolphins' social structure, automated photo-identification assisted by intelligent unsupervised algorithms, and the study of acoustic signals. Finally, education and citizen science were applied as fundamental tools to raise awareness of the need for marine environmental protection among the active population, from children to adults.
Purpose - The purpose of this paper is to address the use of passive RFID technology for the development of an autonomous surveillance robot. Passive RFID tags can be used for labelling both valued objects and goal positions that the robot has to reach in order to inspect the surroundings. In addition, the robot can use RFID tags for navigational purposes, such as to keep track of its pose in the environment. Automatic tag position estimation is, therefore, a fundamental task in this context. Design/methodology/approach - The paper proposes a supervised fuzzy inference system to learn the RFID sensor model; the obtained model is then used by the tag localization algorithm. Each tag position is estimated as the most likely among a set of candidate locations.
Periodic inspection of large tonnage vessels is critical to assess integrity and prevent structural failures that could have catastrophic consequences for people and the environment. Currently, inspection operations are undertaken by human surveyors, often in extreme conditions. This paper presents an innovative system for the automatic visual inspection of ship hull surfaces, using a Magnetic Climbing Robot (MARC) equipped with a low-cost monocular camera.
Autonomous driving is a challenging problem in mobile robotics, particularly when the domain is unstructured, as in an outdoor setting. In addition, field scenarios are often characterized by low visibility as well, due to changes in lighting conditions, weather phenomena including fog, rain, snow and hail, or the presence of dust clouds and smoke. Thus, advanced perception systems are primarily required for an off-road robot to sense and understand its environment recognizing artificial and natural structures, topology, vegetation and paths, while ensuring, at the same time, robustness under compromised visibility. In this paper the use of millimeter-wave radar is proposed as a possible solution for all-weather off-road perception. A self-learning approach is developed to train a classifier for radar image interpretation and autonomous navigation. The proposed classifier features two main stages: an adaptive training stage and a classification stage. During the training stage, the system automatically learns to associate the appearance of radar data with class labels. Then, it makes predictions based on past observations. The training set is continuously updated online using the latest radar readings, thus making it feasible to use the system for long range and long duration navigation, over changing environments. Experimental results, obtained with an unmanned ground vehicle operating in a rural environment, are presented to validate this approach. A quantitative comparison with laser data is also included, showing good range accuracy and mapping ability. Finally, conclusions are drawn on the utility of millimeter-wave radar as a robotic sensor for persistent and accurate perception in natural scenarios.
A multi-sensor approach for terrain estimation is proposed using a combination of complementary optical sensors that cover the visible (VIS), near infrared (NIR) and infrared (IR) spectrum. The sensor suite includes a stereovision sensor, a VIS-NIR camera and a thermal camera, and it is intended to be mounted on board an agricultural vehicle, pointing downward to scan the portion of the terrain ahead. A method to integrate the different sensor data and create a multi-modal dense 3D terrain map is presented. The stereovision input is used to generate 3D point clouds that incorporate RGB-D information, whereas the VIS-NIR camera and the thermal sensor are employed to extract respectively spectral signatures and temperature information, to characterize the nature of the observed surfaces. Experimental tests carried out by an off-road vehicle are presented, showing the feasibility of the proposed approach.
RFID sensor modelling has been recognized as a fundamental step towards successful application of RFID technology in mobile robotics tasks, such as localization and environment mapping. In this paper, we propose a novel approach to passive RFID modelling, using fuzzy reasoning. Specifically, the RFID sensor model is defined as a combination of an RSSI model and a Tag Detection Model, both of which are learnt based on an Adaptive Neuro-Fuzzy Inference System (ANFIS). The Fuzzy C-Means (FCM) algorithm is applied to automatically cluster sample data into classes and obtain initial data memberships for ANFIS initialization and training. Experimental results from tests performed in our Mobile Robotics Lab are presented, showing the effectiveness of the proposed method.
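A compact, self-contained Fuzzy C-Means implementation of the kind used to obtain the initial memberships is sketched below; the actual data layout (e.g. RSSI versus tag-antenna geometry) and parameter values are assumptions.

```python
# Compact Fuzzy C-Means clustering, as could be used to obtain initial
# memberships for ANFIS training. Data layout and parameters are assumptions.
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """X: (N, D) data matrix. Returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)                  # rows sum to one
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        U_new = 1.0 / np.sum((dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```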
Advances in precision agriculture greatly rely on innovative control and sensing technologies that allow service units to increase their level of driving automation while ensuring at the same time high safety standards. This paper deals with automatic terrain estimation and classification that is performed simultaneously by an agricultural vehicle during normal operations. Vehicle mobility and safety, and the successful implementation of important agricultural tasks including seeding, plowing, fertilising and controlled traffic, depend on, or can be improved by, a correct identification of the terrain being traversed. The novelty of this research lies in the fact that terrain estimation is performed by using not only traditional appearance-based features, that is, colour and geometric properties, but also contact-based features, that is, measurements of the physics-based dynamic effects that govern the vehicle-terrain interaction and that greatly affect its mobility. Experimental results obtained from an all-terrain vehicle operating on different surfaces are presented to validate the system in the field. It was shown that a terrain classifier trained with contact features was able to achieve a correct prediction rate of 85.1%, which is comparable or better than that obtained with approaches using traditional feature sets. To further improve the classification performance, all feature sets were merged in an augmented feature space, reaching, for these tests, 89.1% of correct predictions.
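The abstract does not name the classifier; the sketch below assumes a support vector machine trained on the merged (augmented) feature space, with feature dimensions and labels as placeholders.

```python
# Sketch of terrain classification from merged appearance-based and
# contact-based features; the SVM choice and feature layout are assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# X_appearance: (N, Da) colour/geometric features per terrain patch
# X_contact:    (N, Dc) vibration/contact features from the IMU
# y:            (N,) terrain labels, e.g. 0=asphalt, 1=gravel, 2=grass
def evaluate_augmented_features(X_appearance, X_contact, y):
    X_augmented = np.hstack([X_appearance, X_contact])
    clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=10.0, gamma='scale'))
    scores = cross_val_score(clf, X_augmented, y, cv=5)
    return scores.mean()      # fraction of correct predictions
```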
A poster about the use of marsupial robots and vehicles for next-generation missions in polar regions.
Autonomous vehicles are being increasingly adopted in agriculture to improve productivity and efficiency. For an autonomous agricultural vehicle to operate safely, environment perception and interpretation capabilities are fundamental requirements. The Ambient Awareness for Autonomous Agricultural Vehicles (QUAD-AV) project explores a multisensory approach to provide an autonomous agricultural vehicle with such ambient awareness. The proposed methods and systems aim at increasing the overall level of safety of an autonomous agricultural vehicle with respect to itself, to people and animals as well as to property. The "obstacle detection" problem is specifically addressed within the QUAD-AV project. The paper focuses on the different selected technologies (vision/stereovision, thermography, ladar, microwave radar) and presents preliminary results.
Autonomous driving is a challenging problem, particularly when the domain is unstructured, as in an outdoor agricultural setting. Thus, advanced perception systems are primarily required to sense and understand the surrounding environment recognizing artificial and natural structures, topology, vegetation and paths. In this paper, a self-learning framework is proposed to automatically train a ground classifier for scene interpretation and autonomous navigation based on multi-baseline stereovision. The use of rich 3D data is emphasized where the sensor output includes range and color information of the surrounding environment. Two distinct classifiers are presented, one based on geometric data that can detect the broad class of ground and one based on color data that can further segment ground into subclasses. The geometry-based classifier features two main stages: an adaptive training stage and a classification stage. During the training stage, the system automatically learns to associate geometric appearance of 3D stereo-generated data with class labels. Then, it makes predictions based on past observations. It serves as well to provide training labels to the color-based classifier. Once trained, the color-based classifier is able to recognize similar terrain classes in stereo imagery. The system is continuously updated online using the latest stereo readings, thus making it feasible for long range and long duration navigation, over changing environments. Experimental results, obtained with a tractor test platform operating in a rural environment, are presented to validate this approach, showing an average classification precision and recall of 91.0% and 77.3%, respectively.
Accurate and robust environmental perception is a critical requirement to address unsolved issues in the context of outdoor navigation, including safe interaction with living beings, obstacle detection, cooperation with other vehicles, mapping, and situation awareness in general. Aim of this paper is the development of perception algorithms to enhance the automatic understanding of the environment and develop advanced driving assistance systems for off-road vehicles. Specifically, the problem of terrain traversability assessment is addressed. Two strategies are presented. One exploits stereo data to segment drivable ground using a self-learning approach, without explicitly dealing with the obstacle detection issue, whereas the other one features a radar-stereo integrated system to detect and characterize obstacles. The paper details both methods and presents experimental results, obtained with a vehicle operating in rural and agricultural contexts.
Imaging sensors are being increasingly used in autonomous vehicle applications for scene understanding. This paper presents a method that combines radar and monocular vision for ground modeling and scene segmentation by a mobile robot operating in outdoor environments. The proposed system features two main phases: a radar-supervised training phase and a visual classification phase. The training stage relies on radar measurements to drive the selection of ground patches in the camera images, and learn online the visual appearance of the ground. In the classification stage, the visual model of the ground can be used to perform high level tasks such as image segmentation and terrain classification, as well as to solve radar ambiguities. This method leads to the following main advantages: (a) self-supervised training of the visual classifier across the portion of the environment where the radar overlaps with the camera field of view, which avoids time-consuming manual labeling and enables on-line implementation; (b) the ground model can be continuously updated during the operation of the vehicle, thus making it feasible to use the system in long-range and long-duration applications. This paper details the algorithms and presents experimental tests conducted in the field using an unmanned vehicle.
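A minimal sketch of the self-supervised idea follows: radar-labelled ground pixels in the overlapping field of view are used to build a colour model of the ground, which then scores every pixel of the camera image. The histogram model and threshold are illustrative assumptions, not the authors' implementation.

```python
# Sketch of radar-supervised learning of a ground colour model and its use
# for full-image segmentation. Model choice and threshold are illustrative.
import numpy as np

def learn_ground_histogram(image, radar_ground_mask, bins=16):
    """image: (H, W, 3) uint8; radar_ground_mask: (H, W) boolean radar labels."""
    samples = image[radar_ground_mask]                       # (K, 3) ground pixels
    hist, edges = np.histogramdd(samples, bins=(bins,) * 3,
                                 range=((0, 256),) * 3, density=True)
    return hist, edges

def segment_ground(image, hist, edges, threshold=1e-6):
    idx = [np.clip(np.digitize(image[..., c], edges[c]) - 1, 0, hist.shape[c] - 1)
           for c in range(3)]
    likelihood = hist[idx[0], idx[1], idx[2]]                # per-pixel ground likelihood
    return likelihood > threshold                            # boolean ground mask
```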
For mobile robots driving across natural terrains, it is critical that the dynamic effects occurring at the wheel-terrain interface be taken into account. One of the most prevalent of these effects is wheel sinkage. Wheels can sink to depths sufficient to prevent further motion, possibly leading to danger of entrapment. This paper presents an algorithm for visual estimation of sinkage. We call it the visual sinkage estimation (VSE) method. It assumes the presence of a monocular camera and an artificial pattern, attached to the wheel side, to determine the terrain contact angle. This paper also introduces an analytical model for wheel sinkage in deformable terrain based on terramechanics. To validate the VSE module, firstly, several tests are performed on a single-wheel test bed, under different operating conditions. Secondly, the effectiveness of the proposed approach is proved in real contexts, employing an all-terrain rover travelling on a sandy beach.
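The analytical sinkage model itself is not given in the abstract; terramechanics-based models of this kind typically build on the classical Bekker pressure-sinkage relation:

```latex
p = \left(\frac{k_c}{b} + k_\phi\right) z^{\,n}
```

where p is the normal ground pressure, z the sinkage, b the smaller dimension of the contact patch (usually the wheel width), n the sinkage exponent, and k_c, k_phi the cohesive and frictional moduli of the soil. The static sinkage under a given vertical load then follows by integrating this pressure over the wheel-terrain contact area; the paper's specific formulation may differ.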