Grazia Cicirelli
Role
Level III - Technologist
Organization
Consiglio Nazionale delle Ricerche
Department
Not available
Scientific Area
AREA 09 - Industrial and Information Engineering
Scientific Disciplinary Sector
ING-INF/05 - Information Processing Systems
ERC Sector, Level 1
PE - PHYSICAL SCIENCES AND ENGINEERING
ERC Sector, Level 2
PE6 Computer Science and Informatics: Informatics and information systems, computer science, scientific computing, intelligent systems
ERC Sector, Level 3
PE6_11 Machine learning, statistical data processing and applications using signal processing (e.g. speech, image, video)
In this paper, a fast and innovative three-dimensional vision system with high surface-reconstruction resolution is discussed. It is based on a triangulation 3D laser scanner with a linear beam shape. The high precision (a few microns) is guaranteed by the very small laser line width, the small camera pixel size, and the optical properties of the telecentric lens. The entire system has been tested on two kinds of sample objects: a 20-cent coin and a set of precision drilling tools. The main purpose of this work is the detection and reconstruction of the 3D surface of tiny objects and the measurement of their surface defects with high accuracy. Furthermore, the occlusion problem is faced and solved by properly handling the camera-laser setup. Experimental tests prove the high precision of the system, which can reach a resolution of 15 µm. © 2013 IEEE.
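As a rough illustration of the triangulation principle behind this scanner, the following Python sketch converts the lateral shift of the laser line on the sensor into a surface height. The linear shift-to-height mapping holds for a telecentric lens (constant magnification); all parameter names and values here are illustrative, not taken from the paper.

```python
import numpy as np

def laser_triangulation_height(pixel_shift, pixel_size_um, magnification, laser_angle_deg):
    # With a telecentric lens the magnification is constant over the field
    # of view, so a lateral shift of the laser line on the sensor maps
    # linearly to a height on the object:
    #   shift_on_object = pixel_shift * pixel_size / magnification
    #   height          = shift_on_object / tan(laser_angle)
    shift_obj_um = pixel_shift * pixel_size_um / magnification
    return shift_obj_um / np.tan(np.deg2rad(laser_angle_deg))

# Illustrative numbers: 3-pixel shift, 5.5 um pixels, 2x lens, 45 deg laser
print(laser_triangulation_height(3.0, 5.5, 2.0, 45.0))  # ~8.25 um of height
```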
Distributed networks of sensors have been recognized to be a powerful tool for developing fully automated systems that monitor environments and human activities. Nevertheless, problems such as active control of heterogeneous sensors for high-level scene interpretation and mission execution are open. This paper presents the authors' ongoing research about design and implementation of a distributed heterogeneous sensor network that includes static cameras and multi-sensor mobile robots. The system is intended to provide robot-assisted monitoring and surveillance of large environments. The proposed solution exploits a distributed control architecture to enable the network to autonomously accomplish general-purpose and complex monitoring tasks. The nodes can both act with some degree of autonomy and cooperate with each other. The paper describes the concepts underlying the designed system architecture and presents the results obtained working on its components, including some simulations performed in a realistic scenario to validate the distributed target tracking algorithm.
In this paper, we present a gesture recognition system for the development of a human-robot interaction (HRI) interface. Kinect cameras and the OpenNI framework are used to obtain real-time tracking of a human skeleton. Ten different gestures, performed by different persons, are defined. Quaternions of joint angles are first used as robust and significant features. Next, neural network (NN) classifiers are trained to recognize the different gestures. This work deals with several challenging tasks, such as the real-time implementation of a gesture recognition system and the temporal resolution of gestures. The HRI interface developed in this work includes three Kinect cameras placed at different locations in an indoor environment and an autonomous mobile robot that can be remotely controlled by one operator standing in front of one of the Kinects. Moreover, the system is supplied with a people re-identification module which guarantees that only one person at a time has control of the robot. The system's performance is first validated offline, and then online experiments are carried out, proving the real-time operation of the system as required by an HRI interface.
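A minimal sketch of the quaternion feature extraction described above, assuming per-joint rotation matrices are available from the OpenNI skeleton tracker; the joint selection and the downstream NN classifiers are omitted, and all function names are ours.

```python
import numpy as np

def quat_from_matrix(R):
    # Unit quaternion (w, x, y, z) from a 3x3 rotation matrix
    # (standard extraction; assumes R is a proper rotation).
    w = np.sqrt(max(0.0, 1.0 + R[0, 0] + R[1, 1] + R[2, 2])) / 2.0
    x = np.copysign(np.sqrt(max(0.0, 1.0 + R[0, 0] - R[1, 1] - R[2, 2])) / 2.0, R[2, 1] - R[1, 2])
    y = np.copysign(np.sqrt(max(0.0, 1.0 - R[0, 0] + R[1, 1] - R[2, 2])) / 2.0, R[0, 2] - R[2, 0])
    z = np.copysign(np.sqrt(max(0.0, 1.0 - R[0, 0] - R[1, 1] + R[2, 2])) / 2.0, R[1, 0] - R[0, 1])
    return np.array([w, x, y, z])

def frame_features(joint_rotations):
    # One feature vector per skeleton frame: the concatenated quaternions
    # of all tracked joints.
    return np.concatenate([quat_from_matrix(R) for R in joint_rotations])

# Example: two joints at the identity orientation -> [1 0 0 0 1 0 0 0]
print(frame_features([np.eye(3), np.eye(3)]))
```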
Service robots are expected to be used in many households in the near future, provided that proper interfaces are developed for human-robot interaction. Gesture recognition has been recognized as a natural way of communicating, especially for elderly or impaired people. With the development of new technologies and the wide availability of inexpensive depth sensors, real-time gesture recognition has been tackled by using depth information, thereby avoiding the limitations due to complex backgrounds and lighting conditions. In this paper, the Kinect depth camera and the OpenNI framework have been used to obtain real-time tracking of the human skeleton. Then, robust and significant features have been selected to get rid of unrelated features and decrease the computational cost. These features are fed to a set of neural network classifiers that recognize ten different gestures. Several experiments demonstrate that the proposed method works effectively. Real-time tests prove the robustness of the method for the realization of human-robot interfaces. Copyright © 2014 SCITEPRESS.
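To make the classifier stage concrete, here is a hedged sketch of a "set of neural network classifiers" realized as one small one-vs-rest network per gesture, using scikit-learn and placeholder random data; the real features, network sizes, and training protocol in the paper may differ.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder data: 200 samples, 80-dim skeleton features, 10 gesture classes.
# Real features would come from the Kinect/OpenNI tracker.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 80))
y = rng.integers(0, 10, size=200)

# One small feed-forward network per gesture (one-vs-rest).
nets = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y == g)
        for g in range(10)]

def classify(x):
    # The gesture whose network is most confident wins.
    scores = [net.predict_proba(x.reshape(1, -1))[0, 1] for net in nets]
    return int(np.argmax(scores))

print(classify(X[0]))
```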
Mobility and multi-functionality have been recognized as being basic requirements for the development of fully automated surveillance systems in realistic scenarios. Nevertheless, problems such as active control of heterogeneous mobile agents, integration of information from fixed and moving sensors for high-level scene interpretation, and mission execution are open. This paper describes recent and current research of the authors concerning the design and implementation of a multi-agent surveillance system, using static cameras and mobile robots. The proposed solution takes advantage of a distributed control architecture that allows the agents to autonomously handle general-purpose tasks, as well as more complex surveillance issues. The various agents can either take decisions and act with some degree of autonomy, or cooperate with each other. This paper presents an overview of the system architecture and of the algorithms involved in developing such an autonomous, multi-agent surveillance system.
The development of intelligent surveillance systems is an active research area. In this context, mobile and multifunctional robots are generally adopted as a means to reduce the environment structuring and the number of devices needed to cover a given area. Nevertheless, the number of different sensors mounted on the robot and the number of complex tasks related to exploration, monitoring, and surveillance make the design of the overall system extremely challenging. In this paper, we present our autonomous mobile robot for the surveillance of indoor environments. Our approach proposes a system that autonomously handles general-purpose tasks and complex surveillance issues simultaneously. It is shown that the proposed robotic surveillance scheme successfully addresses a number of basic problems related to environment mapping, localization and autonomous navigation, as well as surveillance tasks such as scene processing to detect abandoned or removed objects, and people detection and following. The feasibility of the approach is demonstrated through experimental tests using a multisensor platform equipped with a monocular camera, a laser scanner, encoders, and an RFID device. Real-world applications of the proposed system include the surveillance of wide areas (e.g. airports and museums) and buildings, and the monitoring of safety equipment.
In this paper we present a reliable method to derive the differences between indoor environments by comparing high-resolution range images. Samples belonging to different acquisitions are first reduced, preserving the topology of the scenes, and then registered in the same reference system through an iterative least-squares algorithm, aided by a deletion mask whose purpose is to remove the errors implicit in the different points of view of each orthographic acquisition. Finally, the analysis of the exact range measures returns an intuitive difference map that allows the fast detection of the positions of the altered regions within the scenes. Numerical experiments are presented to prove the capability of the method for the comparison of scenes regardless of the resolution of the sensor and the input noise level of the measurements. © 2013 IEEE.
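The final difference-map step might look like the following sketch, where a validity mask plays the role of the deletion mask described above; the threshold and array shapes are illustrative.

```python
import numpy as np

def difference_map(range_a, range_b, valid_mask, threshold):
    # Per-pixel absolute difference of two registered range images;
    # `valid_mask` acts like the deletion mask above, discarding pixels
    # unreliable in either acquisition (e.g. occluded from one viewpoint).
    diff = np.abs(range_a - range_b)
    diff[~valid_mask] = 0.0
    return diff > threshold  # boolean map of the altered regions

# Toy scene: a 5x5 patch where one region moved by 10 cm
a = np.zeros((5, 5))
b = a.copy()
b[1:3, 1:3] += 0.10
print(difference_map(a, b, np.ones_like(a, dtype=bool), threshold=0.05))
```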
In this paper the problem of distributed target tracking is considered. A network of heterogeneous sensing agents is used to observe a maneuvering target and, at each iteration, all the agents are able to agree on the estimate of the target position, despite the fact that only a small percentage of agents can sense the target at each time instant. Our Consensus-based Distributed Target Tracking (CDTT) is a fully distributed iterative tracking algorithm in which each iteration consists of two phases: an estimation phase and a consensus one. As a result, the estimated trajectories are identical for all the agents at each time instant. Numerical simulations and a comparison with another target tracking algorithm are carried out to show the effectiveness and feasibility of our approach. © 2011 IEEE.
In this paper the problem of distributed target tracking is considered. A network of agents is used to observe a mobile target and, at each iteration, all the agents agree on the estimate of the target position, despite the fact that they only have local interactions and only a small percentage of them can sense the target. The proposed approach, named Consensus-based Distributed Target Tracking (CDTT), is a fully distributed iterative tracking algorithm. At each iteration, the method performs two phases: during the perception phase, the target position is obtained either as a measurement or as a prediction; then, in the consensus phase, a consensus algorithm is applied to let all the agents agree on the target position. As a result, the estimated trajectories are identical for all the agents. Numerical simulations are carried out to show the effectiveness and feasibility of our approach. © 2011 IEEE.
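The consensus phase shared by the two CDTT abstracts above can be illustrated with a plain average-consensus update over the network graph. This generic sketch makes the agents agree on the average of their initial estimates; it does not reproduce the exact CDTT perception and weighting scheme.

```python
import numpy as np

def consensus_step(estimates, adjacency, eps=0.2):
    # One synchronous average-consensus update: every agent moves its
    # estimate toward those of its neighbours (stable for eps < 1/max_degree).
    deg = adjacency.sum(axis=1, keepdims=True)
    return estimates + eps * (adjacency @ estimates - deg * estimates)

# Four agents on a line graph; only agent 0 senses the target at (1, 2),
# the others start from a (0, 0) prediction.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
est = np.zeros((4, 2))
est[0] = [1.0, 2.0]
for _ in range(200):
    est = consensus_step(est, A)
print(est)  # all rows agree on the same value (the average of the initial ones)
```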
A high-resolution vision system for the inspection of drilling tools is presented. A triangulation-based laser scanner is used to extract a three-dimensional model of the target, aimed at the fast detection and characterization of surface defects. The use of two orthogonal calibrated handlings allows precisions of the order of a few microns to be achieved in the whole testing volume and prevents the self-occlusions induced on the undercut surfaces of the tool. Point cloud registration is also derived analytically to increase the strength of the measurement scheme, whereas proper filters are used to delete samples whose quality is below a reference threshold. Experimental tests are performed on calibrated spheres and different-sized tools, proving the capability of the presented setup to entirely reconstruct complex targets with maximum absolute errors, between the estimated distances and the corresponding nominal values, below 12 µm.
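Two of the ingredients mentioned above, quality-based sample filtering and merging of the two calibrated scans, could be sketched as follows; (R, t) are assumed known from calibration, and all names are illustrative.

```python
import numpy as np

def filter_by_quality(points, quality, q_min):
    # Drop 3-D samples whose per-point quality score (e.g. laser peak
    # intensity) falls below the reference threshold.
    return points[quality >= q_min]

def merge_scans(cloud_a, cloud_b, R, t):
    # Bring the second scan into the frame of the first with the rigid
    # transform (R, t) obtained from calibration, then merge the clouds.
    return np.vstack([cloud_a, cloud_b @ R.T + t])
```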
High resolution in distance (range) measurements can be achieved by means of accurate instrumentation and precise analytical models. This paper reports an improvement in the estimation of distance measurements performed by an omnidirectional range sensor already presented in the literature. This sensor exploits the principle of laser triangulation, together with the advantages brought by catadioptric systems, which allow the reduction of the sensor size without decreasing the resolution. Starting from a known analytical model in two dimensions (2D), the paper shows the development of a fully 3D formulation where all initial constraints are removed to gain in measurement accuracy. Specifically, the ray projection problem is solved by considering that both the emitter and the receiver have general poses in a global system of coordinates. Calibration is thus performed to estimate their poses and compensate for any misalignment with respect to the 2D approximation. Results prove an increase in the measurement accuracy due to the more general formulation of the problem, with a remarkable decrease of the uncertainty.
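The core of the fully 3D formulation, cutting each back-projected camera ray with the laser plane when both have general poses in a global frame, reduces to a ray-plane intersection; here is a minimal sketch with illustrative geometry.

```python
import numpy as np

def intersect_ray_plane(origin, direction, plane_point, plane_normal):
    # 3-D point where a back-projected camera ray meets the laser plane,
    # with both expressed in the same global frame (general poses).
    t = np.dot(plane_point - origin, plane_normal) / np.dot(direction, plane_normal)
    return origin + t * direction

# Illustrative geometry: camera ray from the origin, laser plane x = 0.5
p = intersect_ray_plane(np.zeros(3), np.array([0.5, 0.0, 1.0]),
                        np.array([0.5, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
print(p)  # -> [0.5, 0.0, 1.0]
```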
An autonomous mobile robotic system for surveillance of indoor environments is presented. Applications of the proposed system include surveillance of large environments such as airports, museums and warehouses. A multi-layer decision scheme controls different surveillance tasks. In particular, this paper focuses on two main functions: building an augmented map of the environment and monitoring specific areas of interest to detect unexpected changes based on visual and laser data. The effectiveness of the system is demonstrated through experimental tests. The results are promising, proving the proposed methods to be successful in detecting either new or removed objects in the surveyed scene. It is also shown that the robotic surveillance system is able to address a number of specific problems related to environment mapping, autonomous navigation and scene processing, and could be effectively employed for real-world surveillance applications.
People tracking is a central and crucial point for the development of intelligent surveillance systems. When multiple cameras are used, the problem becomes more challenging as people re-identification is needed. Humans can greatly change their appearance according to posture, clothing and lighting conditions, thus defining features that describe people moving in large scenarios is a complex task. In this paper the problem of people re-identification and tracking is reviewed. The most used methodologies are discussed and insight into open problems and future research directions is provided. © 2012 IEEE.
This paper tackles the problem of people re-identification by using soft biometric features. The method works on RGB-D data (color point clouds) to determine the best match among a database of possible users. For each subject under testing, three-dimensional skeletal information is used to regularize the pose and to create a skeleton standard posture (SSP). A partition grid, whose sizes depend on the SSP, groups the samples of the point cloud according to their position. Every group is then studied to build the person signature. The same grid is then used for the other subjects of the database to preserve information about possible shape differences among users. The effectiveness of this novel method has been tested on three public datasets. Numerical experiments demonstrate an improvement over the current state of the art, with recognition rates of 97.84% (on a partition of BIWI RGBD-ID), 61.97% (KinectREID) and 89.71% (RGBD-ID), respectively.
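A hedged sketch of the grid-signature idea: samples of the pose-normalised cloud are binned into a partition grid and the normalised histogram serves as the person signature, matched here by L1 distance. The SSP-derived cell sizes are approximated by a plain bounding box, and bin counts are illustrative.

```python
import numpy as np

def grid_signature(points, bbox_min, bbox_max, bins=(4, 8, 4)):
    # Count the samples of the pose-normalised cloud falling in each cell
    # of the partition grid; the normalised histogram is the signature.
    hist, _ = np.histogramdd(points, bins=bins,
                             range=list(zip(bbox_min, bbox_max)))
    h = hist.ravel()
    return h / max(h.sum(), 1.0)

def best_match(signature, database):
    # database: list of (person_id, signature) pairs; smallest L1 wins.
    return min(database, key=lambda item: np.abs(item[1] - signature).sum())[0]
```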
In this paper we present a natural human-computer interface based on gesture recognition. The principal aim is to study how different personalized gestures, defined by users, can be represented in terms of features and modelled by classification approaches in order to obtain the best performance in gesture recognition. Ten different gestures involving the movement of the left arm are performed by different users. Different classification methodologies (SVM, HMM, NN, and DTW) are compared, and their performances and limitations are discussed. An ensemble of classifiers is proposed to produce more favorable results than those of a single-classifier system. The problems concerning the different lengths of gesture executions, the variability in their representations, and the generalization ability of the classifiers are analyzed, and valuable insights into possible recommendations are provided.
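Of the compared methodologies, DTW is the one that directly handles the different lengths of gesture executions mentioned above; a minimal sketch follows (sequences are frame-by-feature arrays, and the example data are illustrative).

```python
import numpy as np

def dtw_distance(a, b):
    # Dynamic Time Warping between two gesture sequences of possibly
    # different lengths (rows = frames, columns = features).
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two executions of the "same" 1-D gesture at different speeds
slow = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
fast = np.linspace(0.0, 1.0, 10).reshape(-1, 1)
print(dtw_distance(slow, fast))  # small despite the length mismatch
```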
This paper analyzes, from a new perspective, the recent state-of-the-art in gesture recognition approaches that exploit both RGB and depth data (RGB-D images). The most relevant papers have been analyzed to point out which features and classifiers work best with depth data, whether these are specifically designed to process RGB-D images and, above all, how depth information can improve gesture recognition beyond the limits of standard approaches based solely on color images. Papers have been reviewed in depth to find the relation between gesture complexity and the suitability of features and methodologies. Different types of gestures are discussed, focusing attention on the kind of datasets (public or private) used to compare results, in order to understand whether they provide a good representation of actual challenging problems, such as gesture segmentation, idle gesture recognition, and gesture length invariance. Finally, the paper discusses the current open problems and highlights future directions of research in the field of RGB-D data processing for gesture recognition.
Purpose - The purpose of this paper is to address the use of passive RFID technology for the development of an autonomous surveillance robot. Passive RFID tags can be used for labelling both valued objects and goal positions that the robot has to reach in order to inspect the surroundings. In addition, the robot can use RFID tags for navigational purposes, such as keeping track of its pose in the environment. Automatic tag position estimation is, therefore, a fundamental task in this context. Design/methodology/approach - The paper proposes a supervised fuzzy inference system to learn the RFID sensor model; the obtained model is then used by the tag localization algorithm. Each tag position is estimated as the most likely among a set of candidate locations.
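The "most likely among a set of candidate locations" step could be sketched as below; the learnt fuzzy sensor model is abstracted into a pluggable likelihood callable, and the toy log-distance RSSI model in the example is purely illustrative.

```python
import math
import numpy as np

def localize_tag(candidates, readings, likelihood):
    # Pick the candidate location that best explains the readings; in the
    # paper the likelihood comes from the learnt fuzzy sensor model, here
    # it is just a pluggable callable.
    scores = [sum(math.log(likelihood(c, r) + 1e-12) for r in readings)
              for c in candidates]
    return candidates[int(np.argmax(scores))]

# Toy example with a purely illustrative log-distance RSSI model
def lik(cand, reading):
    reader_pos, rssi = reading
    expected = -40.0 - 20.0 * math.log10(max(np.linalg.norm(cand - reader_pos), 0.1))
    return math.exp(-0.5 * ((rssi - expected) / 6.0) ** 2)

cands = [np.array([0.0, 0.0]), np.array([3.0, 1.0])]
reads = [(np.array([1.0, 0.0]), -50.0)]
print(localize_tag(cands, reads, lik))  # -> the candidate nearer the expected RSSI
```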
RFID sensor modelling has been recognized as a fundamental step towards the successful application of RFID technology in mobile robotics tasks, such as localization and environment mapping. In this paper, we propose a novel approach to passive RFID modelling, using fuzzy reasoning. Specifically, the RFID sensor model is defined as a combination of an RSSI model and a tag detection model, both of which are learnt by an Adaptive Neuro-Fuzzy Inference System (ANFIS). The Fuzzy C-Means (FCM) algorithm is applied to automatically cluster the sample data into classes and obtain the initial data memberships for ANFIS initialization and training. Experimental results from tests performed in our Mobile Robotics Lab are presented, showing the effectiveness of the proposed method.
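A minimal, self-contained sketch of the FCM step used for initialization; it shows only the clustering (centres and memberships), not the ANFIS training, and the toy RSSI/distance samples are illustrative.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    # Minimal FCM: returns cluster centres and the membership matrix U
    # (n_samples x c) that, in the approach above, initialises ANFIS.
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

# Toy (distance, RSSI) samples grouped into two fuzzy classes
X = np.array([[0.1, -40.0], [0.2, -42.0], [1.9, -70.0], [2.1, -72.0]])
centres, U = fuzzy_c_means(X, c=2)
print(np.round(U, 2))  # memberships close to {0, 1} for this easy toy set
```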