Danilo Caivano
Role
Researcher
Organization
Università degli Studi di Bari Aldo Moro
Department
Department of Computer Science (Dipartimento di Informatica)
Scientific Area
AREA 01 - Mathematical and Computer Sciences
Scientific Disciplinary Sector
INF/01 - Computer Science
ERC Sector, 1st level
Not available
ERC Sector, 2nd level
Not available
ERC Sector, 3rd level
Not available
The adoption of Open Source software in industrial applications has increased in recent years. In this context, the need arises to guarantee high levels of maintainability; it is therefore critical to select the Open Source components to be integrated in a software system according to their maintenance characteristics. This work presents a Metric Model, and a related Decision Model, for Open Source governance and in particular for selecting Open Source components according to their maintenance level. The Metric Model was obtained by identifying a set of automatically calculable measures from a group of projects available on the Web. The measures were validated on several Open Source components used in industrial projects. The results are of interest and encourage future research.
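The abstract does not detail the measures behind the Metric Model; as a purely illustrative sketch of the kind of automatically calculable repository measures it refers to, one might combine activity indicators into a single maintenance score. All metric names, weights and thresholds below are assumptions, not the paper's model.

```python
# Hypothetical sketch: scoring an OSS component's maintenance level from
# automatically collectable repository measures. Metric names, weights and
# normalization thresholds are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class RepoMeasures:
    commits_last_year: int        # development activity
    open_to_closed_issues: float  # backlog ratio (lower is better)
    active_committers: int        # community size
    days_since_last_release: int  # release freshness

def maintenance_score(m: RepoMeasures) -> float:
    """Combine normalized measures into a single score in [0, 1]."""
    activity = min(m.commits_last_year / 500.0, 1.0)
    backlog = 1.0 - min(m.open_to_closed_issues, 1.0)
    community = min(m.active_committers / 20.0, 1.0)
    freshness = max(0.0, 1.0 - m.days_since_last_release / 365.0)
    # Equal weights for illustration; a real model would calibrate them.
    return (activity + backlog + community + freshness) / 4.0

if __name__ == "__main__":
    candidate = RepoMeasures(320, 0.4, 12, 90)
    print(f"maintenance score: {maintenance_score(candidate):.2f}")
```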
Globalization is pushing companies towards continuous improvement. Quality frameworks addressing SPI practices can be classified into those describing "what" should be done (ISO 9001, CMMI) and those describing "how" it should be done (Six Sigma, GQM). When organizations adopt improvement initiatives, many models may be involved, each leveraging best practices for addressing improvement challenges. This may generate confusion, extra effort and cost, and increase the risk of inefficiencies and redundancies. It is therefore important to harmonize quality frameworks, i.e. to identify their intersections and overlaps and create a multi-model improvement solution. Our aim is to propose a harmonization process supporting organizations interested in introducing or improving SPI practices. We present a what/what combination of the ISO 9001 and CMMI-DEV v1.2 models in the ISO-to-CMMI direction, and detail the what/how perspective by showing how GQM is used to define operational goals that address ISO 9001 statements and are reusable in CMMI appraisals. The harmonization process has been applied to an SME certified ISO 9001:2000.
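For readers unfamiliar with GQM, the sketch below shows how an operational goal addressing an ISO 9001-style statement can be structured with the standard Goal/Question/Metric template. The concrete goal, questions and metrics are hypothetical examples, not those defined in the paper.

```python
# Illustrative GQM structure operationalizing an ISO 9001-style statement
# (e.g., control of nonconforming product). The goal follows the standard
# GQM goal template; all content is an invented example.

gqm_goal = {
    "goal": ("Analyze the defect-handling process for the purpose of "
             "evaluation with respect to conformity to the quality policy, "
             "from the viewpoint of the quality manager"),
    "questions": {
        "Q1: Are nonconformities detected before release?": [
            "M1: defects found in internal review / total defects",
        ],
        "Q2: Are corrective actions timely?": [
            "M2: median days from defect report to closure",
            "M3: % of corrective actions closed within the planned deadline",
        ],
    },
}

for question, metrics in gqm_goal["questions"].items():
    print(question)
    for metric in metrics:
        print("  ", metric)
```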
Context: Although various success stories of model-based approaches are reported in the literature, there is still significant resistance to model-based development in many software organizations, because the UML is perceived to be expensive and not necessarily cost-effective. It is therefore important to gather empirical evidence on the contexts and conditions in which the UML makes, or does not make, a practical difference. Objective: Our objective is to provide empirical evidence as to which UML diagrams are more helpful during software maintenance: Forward Designed (FD) UML diagrams or Reverse Engineered (RE) UML diagrams. Method: We carried out a family of experiments consisting of one experiment and two replications with a total of 169 Computer Science undergraduate students. Results: The individual data analyses and the meta-analysis conducted on the whole family show a tendency in favor of FD diagrams, with significant differences in the effectiveness and efficiency of the subjects who played the role of maintainers. The analysis of the qualitative data, collected through a post-experiment survey, reveals that the subjects did not consider RE diagrams helpful. Conclusions: Our findings show that there are objective results (descriptive statistics and statistical tests) on maintenance effectiveness and efficiency in favor of the use of FD UML diagrams during software maintenance. Subjective opinions also lead us to recommend the use of UML diagrams (especially class diagrams) created during the design phase for software maintenance, because they improve the understanding of the system in comparison with RE diagrams. Nevertheless, we can only assume that these results are valid in the context of Computer Science undergraduate students working with small systems in well-known domains; other contexts should be explored, in particular through replications with professionals, to confirm the results in an industrial setting.
Context: Conventional wisdom states that stereotypes are used to clarify or extend the meaning of model elements and should consequently help in comprehending diagram semantics. Objective: The main goal of this work is to present a family of experiments that we carried out to investigate whether the use of stereotypes improves the comprehension of UML sequence diagrams. Method: The family consists of an experiment and two replications, carried out with 78, 29 and 36 undergraduate Computer Science students, respectively. The comprehension of UML sequence diagrams with and without stereotypes was analyzed from three perspectives borrowed from the Cognitive Theory of Multimedia Learning (CTML): semantic comprehension, retention and transfer. In addition, we carried out a meta-analysis to integrate the different data samples. Results: The statistical analysis and the meta-analysis of the data from each experiment indicate that the use of the proposed stereotypes helps improve the comprehension of the diagrams, especially when the subjects are not familiar with the domain. Conclusions: The set of stereotypes presented in this work seems to help the comprehension of UML sequence diagrams, especially for less familiar domains. Although further research is necessary to strengthen these results, introducing these stereotypes in both academia and industry could be an interesting way to check their validity.
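Both studies above pool the results of a family of experiments by meta-analysis. A minimal sketch of the usual fixed-effect (inverse-variance) pooling follows; the effect sizes and variances are placeholders, not the studies' data.

```python
# Minimal fixed-effect meta-analysis sketch (inverse-variance weighting),
# the standard way results from a family of experiments are pooled.
# All numbers below are made-up placeholders.

import math

def fixed_effect(effects, variances):
    """Pooled effect size and its standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# One original experiment and two replications (placeholder values).
d = [0.45, 0.30, 0.52]      # per-experiment standardized effect sizes
v = [0.055, 0.140, 0.110]   # their sampling variances

pooled, se = fixed_effect(d, v)
print(f"pooled d = {pooled:.2f}, 95% CI half-width = {1.96 * se:.2f}")
```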
Anemia is diagnosed by measuring the blood concentration of hemoglobin (Hb). Many studies in the literature have aimed to diagnose anemia with non-invasive methods, for example by estimating the pallor of the conjunctiva from digital images. Along this line, this paper aims to identify a procedure for the automatic segmentation and optimization of conjunctiva sections. Image analysis algorithms were applied to optimize the area of interest in terms of correlation with the Hb value estimated by blood sampling. Optimization was also pursued by studying the influence of image brightness on the correct estimation of Hb from digital images of the conjunctiva. Promising experimental results are reported.
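As a rough illustration of the kind of pipeline the abstract describes, the sketch below computes a simple pallor feature over a segmented conjunctiva region, filters images by brightness, and correlates the feature with lab-measured Hb. The feature choice, brightness band and data are assumptions, not the paper's procedure.

```python
# Hedged sketch: color feature over a conjunctiva ROI, brightness filtering,
# and correlation with lab Hb. Synthetic data stands in for real images.

import numpy as np

def roi_feature(rgb_roi: np.ndarray) -> float:
    """Mean red-to-green ratio over the ROI, a simple pallor proxy."""
    r = rgb_roi[..., 0].astype(float)
    g = rgb_roi[..., 1].astype(float) + 1e-6
    return float(np.mean(r / g))

def brightness(rgb_roi: np.ndarray) -> float:
    """Mean luma; used to discard over/under-exposed images."""
    r, g, b = (rgb_roi[..., i].astype(float) for i in range(3))
    return float(np.mean(0.299 * r + 0.587 * g + 0.114 * b))

# Synthetic stand-ins for segmented conjunctiva ROIs and lab Hb values.
rng = np.random.default_rng(0)
rois = [rng.integers(40, 220, size=(32, 32, 3)) for _ in range(30)]
hb = rng.uniform(8.0, 16.0, size=30)

# Keep only images in an acceptable brightness band, then correlate.
keep = [i for i, roi in enumerate(rois) if 60 <= brightness(roi) <= 180]
features = [roi_feature(rois[i]) for i in keep]
r = np.corrcoef(features, hb[keep])[0, 1]
print(f"kept {len(keep)}/30 images, Pearson r = {r:.2f}")
```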
Context: Software development time has been reduced with new development tools and paradigms, testing
Cloud computing is increasingly adopted as an infrastructure for providing service-oriented solutions. Such solutions are especially critical when software and hardware resources are remotely distributed. In this paper we illustrate our experience in designing the architecture of a community cloud infrastructure in an industrial project on integrated logistics (LOGIN) for Made in Italy brand products. The cloud infrastructure was designed with particular attention to aspects such as virtualization, server consolidation and business continuity.
A major challenge faced by organizations is to capture business strategies in products and services at an ever-increasing pace, as the business environment constantly evolves. We propose a novel methodology based on a Business Process Line (BPL) engineering approach to inject flexibility into the process modeling phase and promote reuse and flexibility by selection. Moreover, we suggest a decision-table (DT) formalism for eliciting, tracking and managing the relationships among business needs, environmental changes and process tasks. In a real case study we applied the proposed methodology by leveraging the synergy of feature models, variability mechanisms and decision tables. The application of the DT-based BPL engineering approach shows that the Business Process Line benefits from fundamental concepts such as composition, reusability and adaptability, and satisfies the requirements for flexible process definition.
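To make the decision-table formalism concrete, here is a minimal hypothetical sketch in which rules map business conditions to the process tasks of a variant; the conditions and tasks are invented for illustration and do not come from the case study.

```python
# Toy decision table: rules map business conditions to the tasks that a
# process variant must include. All conditions and tasks are hypothetical.

DECISION_TABLE = [
    # (conditions, tasks to include in the process variant)
    ({"order_type": "export", "volume": "high"},
     ["customs_clearance", "bulk_shipping", "insurance"]),
    ({"order_type": "export", "volume": "low"},
     ["customs_clearance", "courier_shipping"]),
    ({"order_type": "domestic", "volume": "high"},
     ["bulk_shipping"]),
]

def resolve_variant(context: dict) -> list:
    """Return the tasks of the first rule whose conditions all match."""
    for conditions, tasks in DECISION_TABLE:
        if all(context.get(k) == v for k, v in conditions.items()):
            return tasks
    return []  # no matching rule: fall back to the core process only

print(resolve_variant({"order_type": "export", "volume": "low"}))
```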
Legacy enterprise systems mainly consist of two kinds of artefacts: source code and databases. Typically, these artefacts are maintained through re-engineering processes carried out in isolation. For a more effective maintenance of the whole system, however, both should be analysed and evolved jointly, following the ADM (Architecture-Driven Modernization) approach; the ROI and the lifespan of the legacy system are then expected to improve. In this vein, this paper proposes a schema elicitation technique for recovering the minimal relational database schema actually used by the source code. The technique analyses the database queries embedded in the legacy source code in order to remove the dead parts of the database schema. The proposal has been validated through a real-life case study.
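A minimal sketch of the schema elicitation idea, under the simplifying assumption that embedded SQL can be found with a regular expression (real ADM tooling works on models and parses queries properly): scan the legacy source for referenced tables and report the unused part of the schema as dead.

```python
# Sketch: collect the tables referenced by SQL embedded in legacy code and
# diff them against the declared schema. The regex is a deliberate
# simplification; schema and source are invented examples.

import re

SCHEMA_TABLES = {"customers", "orders", "order_lines", "audit_log", "tmp_import"}

LEGACY_SOURCE = """
    stmt = "SELECT name FROM customers WHERE id = ?";
    stmt = "INSERT INTO orders (id, customer_id) VALUES (?, ?)";
    stmt = "UPDATE order_lines SET qty = ? WHERE order_id = ?";
"""

# Tables mentioned after FROM / JOIN / INTO / UPDATE in embedded queries.
used = set(re.findall(r"\b(?:FROM|JOIN|INTO|UPDATE)\s+([a-z_]+)",
                      LEGACY_SOURCE, flags=re.IGNORECASE))

print("used tables:", sorted(used & SCHEMA_TABLES))
print("candidate dead schema:", sorted(SCHEMA_TABLES - used))
```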
Today's organizations are increasingly distributed in space, time and capabilities, and are driven to leverage synergies by integrating their business processes in order to produce new value-added products and services. Hence the importance of integrating whole processes rather than simply integrating databases or software applications. Given the duality between products and processes, we propose to exploit the flexibility provided by the product-line engineering approach to model business processes as a Business Process Line (BPL), in order to capture process variability, promote reuse and integration, and provide the capacity to anticipate process changes. To support process evolution and consistency, we suggest the use of decision tables to elicit, track and manage all the decision points emerging during business process modeling, with the purpose of maintaining the relationships among business needs, environmental changes and process tasks. In a real case study we applied the proposed methodology by leveraging the synergy of feature models, variability mechanisms and decision tables. The results show that the BPL satisfies the requirements for business process flexibility.
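Complementing the decision-table sketch above, the following toy example illustrates the feature-model side of a BPL: optional process fragments guarded by features, plus a simple XOR constraint checked before composing a variant. Feature names and constraints are illustrative assumptions.

```python
# Toy Business Process Line: a core process, optional fragments keyed by
# features, and an XOR constraint on payment features. All names invented.

CORE = ["receive_order", "deliver"]
OPTIONAL = {
    "payment_online": ["charge_card"],
    "payment_invoice": ["issue_invoice", "track_payment"],
    "gift_wrap": ["wrap_items"],
}
# XOR constraint: exactly one payment feature must be selected.
XOR_GROUPS = [{"payment_online", "payment_invoice"}]

def compose(selected: set) -> list:
    for group in XOR_GROUPS:
        if len(selected & group) != 1:
            raise ValueError(f"select exactly one of {sorted(group)}")
    tasks = list(CORE)
    for feature in sorted(selected):
        tasks[1:1] = OPTIONAL[feature]  # splice fragments into the flow
    return tasks

print(compose({"payment_invoice", "gift_wrap"}))
```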
Many technological solutions, especially in the fields of computer science and software engineering, are poorly supported by empirical evidence of their effectiveness and by experience of their adoption in different industrial contexts. The lack of empirical evidence makes managers less confident in applying technological solutions proposed by the research community, and the lack of acquisition experience in different industrial contexts makes adopting a technological solution highly risky. These two issues are a barrier to the diffusion of innovative technological solutions. This paper presents a Knowledge Management System (KMS), called PROMETHEUS, consisting of a platform that manages a Knowledge Experience Base (KEB), which collects Knowledge Experience Packages (KEPs). The KMS supports the formalization and packaging of the knowledge and experience of producers and innovation transferors, encouraging the gradual elicitation of the tacit information held by knowledge bearers in order to facilitate its transfer. The KMS enables the cooperative production and evolution of KEPs by different authors and users.
Mutation is a testing technique that, after many years of application in academic and research environments, has recently started to be applied in industry. The main obstacle to its industrial adoption has been the high cost associated with its three stages: mutant generation, execution of test cases against mutants, and result analysis. Likewise, the techniques that researchers have developed to reduce these costs are the main reason for its acceptance. In spite of this, the application of mutation has been limited to testing the internal layers of systems rather than the external ones, such as the GUI. Since current trends in software construction mainly involve the development of web and mobile applications, we have extended the Bacterio tool to web application testing using mutation. This paper deals with the integration of the mutant schema technique in Bacterio as a way to efficiently execute mutation testing of web applications. Moreover, a new component has been included to control the execution of the test cases within the web server.
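The mutant schema technique avoids one build per mutant by compiling all mutants into a single "metamutant" and activating one at a time at run time, which matters when the system under test (e.g., a deployed web application) is expensive to rebuild and redeploy. The toy sketch below conveys the idea with an environment switch; Bacterio's actual instrumentation is of course more involved.

```python
# Toy metamutant: all mutants coexist in one function and an environment
# variable selects the active one, so the program is built only once.
# Mutant ids and the mutated function are invented examples.

import os

def price_with_discount(price: float, qty: int) -> float:
    mutant = os.environ.get("ACTIVE_MUTANT", "none")
    if mutant == "AOR_1":            # arithmetic operator replacement: * -> /
        total = price / qty
    elif mutant == "ROR_1":          # relational operator replacement: >= -> >
        total = price * qty
        return total * 0.9 if qty > 10 else total
    else:                            # original code
        total = price * qty
    return total * 0.9 if qty >= 10 else total

# A test "kills" a mutant when its outcome differs from the original's.
for m in ["none", "AOR_1", "ROR_1"]:
    os.environ["ACTIVE_MUTANT"] = m
    print(m, price_with_discount(5.0, 10))
```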
Business process refactoring techniques have mostly been developed for manually modeled business processes. Unfortunately, few refactoring techniques address business process models obtained by reverse engineering from existing information systems, which are even more in need of refactoring. Hence, there is no strong empirical evidence on how such refactoring techniques affect the understandability of business process models. This paper provides a case study based on two real-life information systems, from which 40 business process models were obtained by reverse engineering. The empirical study attempts to quantify the effect on understandability of the order of refactoring operators as well as of previously applied refactoring actions. The main implication of the results is a set of rules that may be used to optimize understandability through the prioritization and configuration of refactoring techniques specially developed for business process models retrieved by reverse engineering.
Integration of human-centered design in a company’s software development requires a thorough analysis of its current practices by both researchers and practitioners.
Efforts to address user experience (UX) in product development keep growing, as demonstrated by the proliferation of workshops and conferences bringing together academics and practitioners who aim at creating interactive software able to satisfy its users. This special issue focuses on the "Interplay between User Experience Evaluation and Software Development", on the premise that the gap between human-computer interaction and software engineering with regard to usability has been somewhat narrowed. Unfortunately, our experience shows that software development organizations perform few usability engineering activities, or none at all. Several authors acknowledge that, in order to understand the reasons for the limited impact of usability engineering and UX methods, and to try to change this situation, it is fundamental to thoroughly analyze current software development practices, involving practitioners and possibly working from inside the companies. This article contributes to this line of research by reporting an experimental study conducted with software companies. The study confirmed that too many companies still either neglect usability and UX or do not consider them properly. Interesting problems emerged; this article gives suggestions on how they may be addressed, since their solution is the starting point for reducing the gap between the research and practice of usability and UX. It also provides further evidence on the value of the Cooperative Method Development research method, based on the collaboration of researchers and practitioners in carrying out empirical research; it was used in one step of the study and proved instrumental in showing practitioners why and how to improve their development processes.
The CMMI-ACQ and the ISO/IEC 12207:2008 are process reference models that address best practices for software product acquisition. With the aim of offering information on how the practices described in these two models are related, and considering that mapping is one specific strategy for the harmonization of models, we have carried out a mapping between these two reference models for acquisition, taking into account their latest versions. Furthermore, to carry out this mapping in a systematic way, we defined a process for this purpose. We consider that the mapping presented in this paper supports the understanding and leveraging of the properties of these reference models, which is the first step towards the harmonization of improvement technologies. Moreover, since a great number of organizations currently acquire products and services from suppliers and develop fewer and fewer of these products in-house, this work intends to support organizations interested in introducing or improving their practices for the acquisition of products and services using these models.
This article describes an approach for test case generation in Software Product Lines using Model-Driven Engineering. Our proposal defines a set of metamodels, models and algorithms, all organized and managed in a 5-step process, implemented in a tool specifically developed for this goal, Pralíntool.
Model-Driven Testing (MDT) refers to model-based testing that follows the Model-Driven Engineering paradigm, i.e., test cases are automatically generated, through model transformations, from models extracted from software artifacts. In previous work, we developed a model-to-model transformation that takes UML 2.0 sequence diagrams as input and automatically derives test case scenarios that conform to the UML Testing Profile. In this work, these test case scenarios are automatically transformed using a model-to-text transformation. This transformation, which can be applied to obtain test cases in a variety of programming languages, is implemented with MOFScript, which is also an OMG standard.
In MDE, software products are built through successive transformations of models at different abstraction levels, which in the end are translated into executable code for the specific platform where the system will be deployed and executed. As testing is one of the essential activities in software development, researchers have proposed several techniques to deal with testing in model-based contexts. In previous works, we described a framework to automatically derive UML Testing Profile (UML-TP) test cases from UML 2.0 design models. These transformations are made with the QVT language which, like UML 2.0 and UML-TP, is an OMG standard. We have now extended the framework to derive the source code of the test cases from those in the UML Testing Profile. This transformation, which can be applied to obtain test cases in a variety of programming languages, is implemented with MOFScript, also an OMG standard. Thus, this paper almost closes our cycle of testing automation in MDE environments, always within the limits of OMG standards. Moreover, thanks to this standardization, the development of new tools is not required.
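The MOFScript transformation itself is not reproduced here; as an illustration of the model-to-text step it performs, the following Python sketch walks a simplified test case model and emits a JUnit-like skeleton. The model fields and the output shape are assumptions for illustration, not the framework's actual artifacts.

```python
# Toy model-to-text transformation: a (simplified) test case model is
# traversed and Java-like test source is emitted as text.

TEST_CASE_MODEL = {
    "name": "WithdrawCashScenario",
    "sut": "ATMController",
    "steps": [
        {"call": "insertCard", "args": ["validCard"]},
        {"call": "withdraw", "args": ["100"]},
    ],
    "verdict": "assertEquals(100, account.debited())",
}

def to_java_test(model: dict) -> str:
    lines = ["@Test",
             f"public void {model['name']}() {{",
             f"    {model['sut']} sut = new {model['sut']}();"]
    for step in model["steps"]:
        lines.append(f"    sut.{step['call']}({', '.join(step['args'])});")
    lines.append(f"    {model['verdict']};")
    lines.append("}")
    return "\n".join(lines)

print(to_java_test(TEST_CASE_MODEL))
```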
A self-monitoring method, integrated with hardware and software tools, for the synchronous collection and analysis, anti-falsification and proactive verification, while mobile and in accordance with diversified diagnostic-therapeutic protocols, of physiological parameters useful for monitoring metabolic syndrome.
Reverse engineering of business processes enables them to be discovered and retrieved from existing information systems, which embed many business rules not available anywhere else. These techniques are especially useful when business process models are unavailable, outdated, or misaligned as a result of uncontrolled maintenance. Reverse engineering techniques can recover well-designed business processes, but the models are often retrieved with harmful quality faults as a consequence of the abstraction. Clustering techniques are then applied to reduce these quality faults and improve the understandability and modifiability of business process models. Regrettably, the most challenging concern is how to determine the similarity between two business activities to be clustered. Formal ontologies help to represent the essential concepts and constraints of a universe of discourse and to determine similarity in accordance with the given ontology. This paper shows how to compute and use ontology-based similarity within a clustering algorithm aimed at improving the quality of business process models previously obtained from legacy information systems by reverse engineering. The principal contribution of this paper is an ontology-based similarity function and its application to 43 business process models retrieved from four real-life information systems.
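To illustrate what an ontology-based similarity can look like, the sketch below computes Wu-Palmer similarity over a toy is-a hierarchy of business activity concepts; a clustering step would then merge activities whose similarity exceeds some threshold. The ontology, and Wu-Palmer as the measure, are illustrative choices, not necessarily the paper's actual function or data.

```python
# Wu-Palmer similarity over a toy is-a hierarchy of activity concepts:
# sim(c1, c2) = 2 * depth(LCS) / (depth(c1) + depth(c2)).

PARENT = {  # child -> parent (is-a); the hierarchy is invented
    "activity": None,
    "payment": "activity", "shipping": "activity",
    "card_payment": "payment", "invoice_payment": "payment",
    "courier_shipping": "shipping",
}

def depth(c):
    d = 1
    while PARENT[c] is not None:
        c, d = PARENT[c], d + 1
    return d

def ancestors(c):
    out = []
    while c is not None:
        out.append(c)
        c = PARENT[c]
    return out

def wu_palmer(c1, c2):
    lcs = next(a for a in ancestors(c1) if a in set(ancestors(c2)))
    return 2.0 * depth(lcs) / (depth(c1) + depth(c2))

print(wu_palmer("card_payment", "invoice_payment"))   # same parent: high
print(wu_palmer("card_payment", "courier_shipping"))  # distant: low
```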
Definition: The term "Rapid e-Learning" appeared for the first time in 2004, in a study by Bersin and Associates titled "Rapid E-Learning: What Works. Market Tools and Techniques and Best Practices for Building E-Learning Programs in Weeks" (Bersin and Associates 2004). In a later work, Bersin and Associates defined rapid e-learning as follows: "It is generally defined as Web-based training that can be created in weeks and is typically authored by subject-matter experts (SMEs)" (Bersin 2005). A broader definition would be: "rapid e-learning is the set of methods, tools, and technologies used to build e-learning courses and learning objects quickly."
This article describes a model-driven approach for test case generation in software product lines. It defines a set of metamodels and models, a 5-step process and a tool called Pralíntool that automates the process execution and supports product line engineers in using the approach.
In this paper we propose the Electronic Multimedia Health Fascicle (EMHF), a novel software system that builds on the very large number of available electronic health records. It allows the physician to see at a glance the patient's clinical biometric measurements and biological parameters, so as to link any alarming physical status to the patient's recent medical history. Web-based, accessible from any mobile device, and easy to use by both physicians and patients, the system facilitates patient-physician interaction. Using the system can also promote better adherence to medical guidelines by physicians, and to medical prescriptions and advice by patients.
All projects involve risk; a zero-risk project is not worth pursuing. Furthermore, due to the uniqueness of each software project, uncertainty about the final results will always accompany software development. While risks cannot be removed from software development, software engineers should instead learn to manage them better (Arshad et al., 2009; Batista Webster et al., 2005; Gilliam, 2004). Risk management and planning require organizational experience, as they are strongly centred on the experience and knowledge acquired in former projects. Greater experience improves a project manager's ability to identify risks, estimate their likelihood and impact, and define appropriate risk response plans. Risk knowledge therefore cannot remain an individual asset: it must be made available to the organization, which needs it to learn and to enhance its performance in facing risks. If this does not occur, project managers can inadvertently repeat past mistakes simply because they do not know, or do not remember, the mitigation actions successfully applied in the past, or because they are unable to foresee the risks caused by certain project constraints and characteristics. Risk knowledge has to be packaged and stored throughout project execution for future reuse.

Risk management methodologies are usually based on questionnaires for risk identification and templates for investigating critical issues. These artefacts are often unrelated to each other, so there is usually no documented cause-effect relation between issues, risks and mitigation actions. Furthermore, today's methodologies do not explicitly take into account the need to collect experience systematically in order to reuse it in future projects. To address these problems, this work proposes a framework based on the Experience Factory Organization (EFO) model (Basili et al., 1994; Basili et al., 2007; Schneider & Hunnius, 2003) and on the Quality Improvement Paradigm (QIP) (Basili, 1989). The framework is also specialized within one of the largest firms in the current Italian software market; for privacy reasons, from here on we will refer to it as "FIRM".

Finally, in order to evaluate the proposal quantitatively, two empirical investigations were carried out: a post-mortem analysis and a case study. Both were carried out in the FIRM context and involved legacy system transformation projects; the first involved 7 completed projects, the second 5 ongoing projects. The research questions we ask are: Does the proposed knowledge-based framework lead to more effective risk management than that obtained without using it? Does it lead to more precise risk management than that obtained without using it?

The rest of the paper is organized as follows: Section 2 provides a brief overview of the main research activities in the literature dealing with the same topics; Section 3 presents the proposed framework, and Section 4 its specialization in the FIRM context; Section 5 describes the empirical studies we executed; results are presented and discussed in Section 6. Finally, conclusions are drawn in Section 7.
According to the Project Management Institute (PMI), project management consists of planning, organizing, motivating and controlling resources such as time and cost in order to produce products with acceptable quality levels. Accordingly, project managers must monitor and control project execution, i.e. verify the actual progress and performance of a project with respect to the project plan, and identify in a timely manner where changes must be made to both process and product. Earned Value Management (EVM) is a valuable technique for determining and monitoring the progress of a project, as it indicates performance variances based on measures of work progress, schedule and cost. The technique requires that a set of metrics be systematically collected throughout the entire project; as a consequence, for large and long projects, managers may find it difficult to interpret all the information collected and to use it for decision making. To assist managers in this task, in this paper we classify the EVM metrics into five conceptual classes and present an interpretation model that managers can adopt as a checklist for monitoring EVM values and tracking the project's progress. So far in our research, the model has been applied during an industrial project to monitor progress and guide the project manager's decisions.
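For reference, the core EVM indicators that such an interpretation model builds on follow the standard PMI definitions; the worked example below uses made-up figures.

```python
# Worked example of the standard EVM indicators. Formulas are the usual PMI
# definitions; all input values are invented for illustration.

PV = 100_000   # planned value: budgeted cost of work scheduled to date
EV = 80_000    # earned value: budgeted cost of work actually performed
AC = 95_000    # actual cost of the work performed
BAC = 400_000  # budget at completion

CV, SV = EV - AC, EV - PV     # cost / schedule variance (negative = bad)
CPI, SPI = EV / AC, EV / PV   # performance indices (< 1 = over budget / behind)
EAC = BAC / CPI               # estimate at completion, assuming the current
                              # cost efficiency persists

print(f"CV={CV:+}, SV={SV:+}, CPI={CPI:.2f}, SPI={SPI:.2f}, EAC={EAC:,.0f}")
```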
The importance of usability engineering in software development is acknowledged by an increasing number of software organizations. This paper reports on a survey of the practical impact of usability engineering in software development organizations. The survey was conducted in Southern Italy, replicating one conducted in Northern Denmark three years earlier. The results show that the number of organizations conducting some form of usability activity is nearly the same, but there are important differences in the understanding of usability. In both surveys, the key advantages emphasized by the respondents are product quality, user satisfaction and competitiveness, while the main problems are developer mindset, resource demands and customer participation.