Berardina De Carolis
Role
Researcher
Organization
Università degli Studi di Bari Aldo Moro
Department
Department of Computer Science
Scientific Area
AREA 01 - Mathematical and Computer Sciences
Scientific Disciplinary Sector
INF/01 - Computer Science
ERC Sector, 1st level
Not available
ERC Sector, 2nd level
Not available
ERC Sector, 3rd level
Not available
This paper illustrates our work on the development of a layered architecture for deciding the situation-aware behavior of a Smart Home Environment (SHE). In the proposed approach, the surface level is directly embedded in the environment, while deeper levels represent the control software and perform progressively more abstract and conceptual activities, whose results can be fed back to the outside world (environment, user, supervisor). In particular, the reasoning layer is in charge of interpreting the data collected through the sensors of the smart environment and transforming them into high-level knowledge about the situation. The learning layer, based on Inductive Logic Programming, in turn exploits the user's interaction with the system to refine the user model and improve its future behavior. Finally, we describe a typical scenario in which the proposed architecture might operate, along with a practical example of how the system might work.
Adapting the behavior of a smart environment means tailoring its functioning to both the context and the users' needs and preferences. In this paper we propose an agent-based approach for controlling the behavior of a Smart Environment that, based on the recognized situation and user goal, selects a suitable workflow for combining the services of the environment. We use the metaphor of a butler agent that employs user and context modeling to support proactive adaptation of the interaction with the environment. The interaction is adapted to each specific situation the user is in thanks to a class of agents called Interactor Agents.
This paper proposes an agent-based approach for proactively adapting the behavior of a Smart Environment that, based on the recognized situation and user goal, selects a suitable workflow for combining services of the environment. To this aim we have developed a multiagent infrastructure composed of different classes of agents specialized in reasoning and learning about the user and the context at different abstraction levels.
This chapter deals with adaptation of background information and advertisements, displayed in an environment, to the interests of the group of people present. According to research on computational advertising, it is important to develop methods for finding the “best match” between user interests in a given context and available advertisements. Accordingly, after providing an overview of the most popular group recommender approaches, this chapter looks at new issues that arise when considering group modeling in pervasive advertising conveyed through digital displays. The chapter first discusses general issues concerning group recommender systems, with particular emphasis on the acquisition of user preferences and interests. A system called GAIN (Group Adaptive Information and News) is then presented. This was developed with the aim of tailoring the display of background information and advertisements to groups of people.
In this paper we propose an agent-based approach for controlling the behavior of a Smart Home Environment that, based on the recognized situation and user goal, selects a suitable workflow for combining services of the environment. To this aim we have developed a butler agent that employs user and context modeling to support proactive adaptation of the interaction with the house. The user can interact with the proposed services by accepting, declining or changing them. Such feedback is exploited by the learning component of the butler to refine the user model and improve its future behavior accordingly. To describe how the system might work, a practical example is shown.
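The select-then-refine loop described in this abstract can be illustrated with a minimal sketch. This is not the paper's actual architecture (which involves user and context models and learning components far richer than a score table); the class, method, and workflow names below are invented for illustration, and the "learning" is reduced to a simple preference update driven by accept/decline feedback.

```python
# Illustrative sketch only: a butler agent that selects a workflow for a
# recognized situation and refines its (hypothetical) user model from feedback.

class ButlerAgent:
    def __init__(self):
        # toy user model: a preference score per (situation, workflow) pair
        self.preferences = {}

    def select_workflow(self, situation, candidates):
        # propose the candidate workflow with the highest learned preference
        return max(candidates,
                   key=lambda w: self.preferences.get((situation, w), 0.0))

    def feedback(self, situation, workflow, accepted):
        # accepting a proposal reinforces it; declining penalizes it
        key = (situation, workflow)
        delta = 1.0 if accepted else -1.0
        self.preferences[key] = self.preferences.get(key, 0.0) + delta

butler = ButlerAgent()
butler.feedback("evening_at_home", "movie_night", accepted=True)
butler.feedback("evening_at_home", "wake_up_routine", accepted=False)
print(butler.select_workflow("evening_at_home",
                             ["wake_up_routine", "movie_night"]))  # movie_night
```

Over repeated interactions the preference table drifts toward the workflows the user actually accepts, which is the intuition behind exploiting feedback to "improve its future behavior".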
In this paper, we investigate users' reactions to suggestions received from an Embodied Conversational Agent playing the role of an artificial therapist in the healthy-eating domain. Specifically, we analyse the behaviour of people who voluntarily requested information from the agent, and we compare it with the results of a previous evaluation experiment in which subjects were not genuinely motivated to interact with the agent, because they had been selected to evaluate the system. This study is part of ongoing research aimed at developing an intelligent virtual agent that applies natural argumentation techniques to persuade users to improve their eating habits.
In this paper we present CoRSAR, a mobile Augmented Reality recommender system for the tourism domain. It allows users to explore and visit a city and provides recommendations of Points of Interest (POIs) by combining collaborative filtering and context-awareness. Besides describing the system, we present the results of a study aimed at evaluating whether users were more satisfied with the system's recommendations when context features were taken into account. Results show that users evaluated the system more positively when the context-aware approach was adopted rather than simple collaborative filtering.
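One common way to combine collaborative filtering with context-awareness, as this abstract describes, is contextual pre-filtering: restrict the candidate POIs to those compatible with the current context, then rank the survivors by a standard user-based collaborative-filtering score. The sketch below illustrates that idea under invented data; it is not CoRSAR's actual algorithm, and all names and ratings are assumptions.

```python
# Illustrative sketch: context pre-filtering + user-based collaborative
# filtering over toy data (not the CoRSAR system itself).
import math

ratings = {  # user -> {poi: rating on a 1-5 scale}
    "alice": {"museum": 5, "park": 3, "cafe": 4},
    "bob":   {"museum": 4, "park": 2, "cafe": 5},
    "carol": {"museum": 1, "park": 5, "cafe": 2},
    "dave":  {"museum": 5},
}

# hypothetical context tags per POI
poi_context = {"museum": {"indoor"}, "park": {"outdoor"}, "cafe": {"indoor"}}

def similarity(u, v):
    # cosine similarity over co-rated POIs
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    num = sum(ratings[u][p] * ratings[v][p] for p in common)
    den = (math.sqrt(sum(ratings[u][p] ** 2 for p in common)) *
           math.sqrt(sum(ratings[v][p] ** 2 for p in common)))
    return num / den

def recommend(user, context):
    # pre-filter: only unvisited POIs compatible with the current context
    candidates = [p for p in poi_context
                  if context in poi_context[p] and p not in ratings[user]]
    scores = {}
    for p in candidates:
        sims = [(similarity(user, v), ratings[v][p])
                for v in ratings if v != user and p in ratings[v]]
        if sims and sum(s for s, _ in sims) > 0:
            scores[p] = sum(s * r for s, r in sims) / sum(s for s, _ in sims)
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("dave", "indoor"))   # ['cafe']
print(recommend("dave", "outdoor"))  # ['park']
```

The study's finding (higher satisfaction with the context-aware variant) corresponds to the difference between `recommend` as written and the same function with the context filter removed.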
Personalized systems have traditionally used the traces of user interactions to learn the user model, which was then used by sophisticated algorithms to choose the appropriate content for the user and the situation. Recently, new types of user models have started to emerge, which take into account more user-centric information, such as emotions and personality. Initially, these models were conceptually interesting but of little practical value, as emotions and personality were difficult to acquire. However, with the recent advancement in unobtrusive ...
Endowing HCI technology with sensitivity to human preferences and behaviour justifies the attempt to implement emotional and social intelligence that goes beyond the mere ability to help the user. In this paper we present an Embodied Conversational Agent's (ECA's) architecture and methods useful to interpret the user's affective attitude during her dialog with an ECA and to behave 'believably' in turn. In particular, we present an agent architecture that is general enough to be applied in several application domains and that can employ several ECA bodies according to the context requirements.
The current tools for creating OWL-S annotations have been designed from the knowledge engineer's point of view. Unfortunately, the formalisms underlying Semantic Web languages are often incomprehensible to the developers of Web services. To bridge this gap, developers should be provided with suitable tools that do not necessarily require knowledge of these languages in order to create annotations on Web services. With reference to some characteristics of the involved technologies, this work addresses these issues, proposing guidelines that can improve the annotation activity of Web service developers. Following these guidelines, we also designed a tool that allows Web service developers to annotate Web services without requiring a deep knowledge of Semantic Web languages. A prototype of such a tool is presented and discussed in this paper.
The availability of automatic support may sometimes determine the successful accomplishment of a process. Such support can be provided if a model of the intended process is available. Many real-world process models are very complex. Additionally, their components may be associated with conditions that determine whether or not they are to be carried out. These conditions may in turn be very complex, involving sequential relationships that take into account the past history of the current process execution. In this landscape, writing and setting up process models and conditions manually may be infeasible, and even standard Machine Learning approaches may be unable to infer them. This paper presents a First-Order Logic-based approach to learning complex process models extended with conditions. It combines two powerful Inductive Logic Programming systems. The overall system was exploited to learn the daily routines of the user of a smart environment, predicting his needs and comparing the actual situation with the expected one. In addition to proving the efficiency and effectiveness of the system, the outcomes show that complex, human-readable and interesting preconditions can be learned for the tasks involved in the process.
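The papers above use full First-Order Logic ILP systems, which are far more expressive than anything that fits in a few lines. Still, the core idea of inducing a task precondition from observed traces can be illustrated propositionally: treat as a candidate precondition the set of tasks that precede the target task in every routine containing it. The traces below are invented, and this intersection-based rule is a crude stand-in for real ILP.

```python
# Naive propositional illustration of precondition learning from traces.
# The actual systems learn relational, human-readable First-Order rules;
# the trace data here is invented for the example.

def learn_precondition(task, traces):
    """Return the set of tasks occurring before `task` in every trace
    that contains it -- a crude stand-in for an induced precondition."""
    precondition = None
    for trace in traces:
        if task not in trace:
            continue
        before = set(trace[:trace.index(task)])
        precondition = before if precondition is None else precondition & before
    return precondition or set()

daily_routines = [
    ["wake_up", "shower", "breakfast", "leave_home"],
    ["wake_up", "breakfast", "shower", "leave_home"],
    ["wake_up", "shower", "leave_home"],
]
print(sorted(learn_precondition("leave_home", daily_routines)))
# ['shower', 'wake_up']
```

Here `breakfast` is dropped because one routine omits it, while `wake_up` and `shower` survive as necessary predecessors of `leave_home` — the same kind of sequential condition the abstract describes, minus the first-order expressiveness.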
Understanding what the user is doing in a Smart Environment is important not only for adapting the environment's behavior, e.g. by providing the most appropriate combination of services for the recognized situation, but also for identifying situations that could be problematic for the user. Manually building models of the user's processes is a complex, costly and error-prone engineering task; hence the interest in automatically learning them from examples of actual procedures. Incremental adaptation of the models, and the ability to express and learn complex conditions on the involved tasks, are also desirable. First-order logic provides a single comprehensive and powerful framework for supporting all of the above. This paper presents a First-Order Logic incremental method for inferring process models, and shows its application to the user's daily routines, for predicting his needs and comparing the actual situation with the expected one. Promising results have been obtained both with controlled experiments that proved its efficiency and effectiveness, and with a domain-specific dataset.
Pedagogical Conversational Agents (PCAs) have the advantage of offering students not only task-oriented support but also the possibility to interact with the computer medium at a social level. This form of intelligence is particularly important when the character is employed in an educational setting. This paper reports our initial results on the recognition of users' social response to a pedagogical agent from the linguistic, acoustic and gestural analysis of the student's communicative act.
Ambient Intelligence systems require a natural and personalized experience in interacting with the services provided by the environment. In this view, the interaction may happen either in a pervasive way, through a combination of devices embedded in the environment, or using a conversational interface acting as an environment concierge. In the latter case, the interface can be embodied in a conversational agent able to involve users in a human-like conversation and to establish a social relation with them. Developing such an Ambient Conversational System (ACS) requires a model of the user that considers not only the cognitive ingredients of his mental state, but also extra-rational factors such as affect, engagement, and attitudes. This paper describes a multimodal framework for recognizing the social attitude of users during the interaction with an embodied agent in a smart environment. In particular, we started from the analysis and annotation of advisory dialogs between humans, and then used the annotated corpus to build a framework for recognizing the social attitude in multimodal dialogs with an ACS. Results of the study show an acceptable performance of the framework in recognizing and monitoring the social attitude during the dialog with an ACS. We also compared the results of the analysis of human-human interactions with those of human-ACS interaction: even though the level of initiative of subjects during the dialog was lower in the latter modality, the difference in the average number of social moves was not significant, suggesting that subjects were inclined to establish a social relation with the conversational agent.
Ambient Intelligence aims at promoting an effective, natural and personalized interaction with the environment services. In order to provide the most appropriate answer to the user requests, an Ambient Intelligence system should model the user by considering not only the cognitive ingredients of his mental state, but also extra-rational factors such as affect, engagement, attitude, and so on. This paper describes a study aimed at building a multimodal framework for recognizing the social response of users during interaction with embodied agents in the context of ambient intelligence. In particular, we describe how we extended a model for recognizing the social attitude in text-based dialogs by adding two additional knowledge sources: speech and gestures. Results of the study show that these additional knowledge sources may help in improving the recognition of the users' attitude during interaction.
When used as an interface in the context of Ambient Assisted Living (AAL), a social robot should not just provide task-oriented support. It should also try to establish a social, empathic relation with the user. To this aim, it is crucial to endow the robot with the capability of recognizing the user's affective state and reasoning on it to trigger the most appropriate communicative behavior. In this paper we describe how such affective reasoning has been implemented in the NAO robot for simulating empathic behaviors in the context of AAL. In particular, the robot is able to recognize the emotion of the user by analyzing communicative signals extracted from speech and facial expressions. The recognized emotion allows triggering the robot's affective state and, consequently, the most appropriate empathic behavior. The robot's empathic behaviors have been evaluated both by experts in communication and through a user study aimed at assessing the perception and interpretation of empathy by elderly users. Results are quite satisfactory and encourage us to further extend the social and affective capabilities of the robot.
Conversational agents have been widely used in pedagogical contexts. They have the advantage of offering users not only task-oriented support, but also the possibility to relate with the system at a social level. Therefore, besides endowing the conversational agent with the knowledge necessary to fulfill pedagogical goals, it is important to provide the agent with social intelligence. To do so, the agent should be able to recognize the social attitude of the user during the interaction in order to adapt its conversational strategy. In this paper we illustrate how we defined and applied a model for recognizing the social attitude of the student in natural interaction with a Pedagogical Conversational Agent (PCA), starting from the linguistic, acoustic and gestural analysis of the communicative act.
In recent years, society has given knowledge a key role in economic and social development. Against this background lies the success of communities of practice and, more recently, of the Complex Learning Community model, whose strength is the ability to provide students, educators, and professionals with a common creative space in which to develop not only knowledge and expertise, but also ideas, synergies, and opportunities. In both cases, particular emphasis is given to the interactions between users as a "place" where knowledge emerges, is built, and is delivered. Our former paper was dedicated to communities of practice and, in particular, to the use of an intelligent agent to control and lead the interaction among users; our present research concerns environments for collaborative learning, where the role of the animator is fundamental, but we consider a blended animator, supported by specific tools for managing the community. Such tools necessarily rely on the comprehension and elaboration of natural language, which highlights the appropriate knowledge elements to be used. In this paper we analyse the available techniques in order to identify those most suitable for designing the supporting tools required by the blended animator. Particular attention is given to the domain ontology.
The problem of implementing socially intelligent agents has been widely investigated in the field of both Embodied Conversational Agents (ECAs) and Social Robots that have the advantage of offering to people the possibility to relate with computer media at a social level. We focus our study on the recognition of the social response of users to embodied agents in the context of ambient intelligence. In this paper we describe how we extended a model for recognizing the social attitude in natural conversation from text by adding two additional knowledge sources: speech and gestures.
Ambient Intelligence solutions may provide a great opportunity for elderly people to live longer at home. When assistance and care are delegated to the intelligence embedded in the environment, besides considering task-oriented responses to the user's needs, it is necessary to take into account the establishment of social relations. To this aim, it becomes crucial to model both the rational and the affective components of the user's state of mind. In this chapter we mainly focus on the problem of modeling the cognitive and affective variables involved in the definition of a user model suitable for this domain. After providing an overview of the state of the art, we report on our experience in designing NICA (from the name of the project, Natural Interaction with a Caring Agent), a social agent acting as a virtual caregiver, able to assist elderly people in a smart environment and to take care of both the physical and the mental state of the users.