Title: An approach for full scale off-line testing to evaluate the iWIDGET system performance
Other Titles: Proceedings of the 36th IAHR World Congress
Authors: Vieira, P.
Barateiro, J.
Loureiro, D.
Coelho, J.
Mamade, A.
Keywords: Off-line testing; Smart meter; Test case; Water use efficiency
Issue Date: Jun-2015
Publisher: IAHR
Abstract: iWIDGET is an ongoing European Commission FP7 project aiming at improved water efficiency using novel ICT technologies for integrated supply-demand side management, focusing on an integrated approach to water resources management. The project contributes to advancing knowledge about smart metering in order to develop novel, robust and cost-effective ICT tools for both water utilities and consumers. Within the project, a set of relevant applications derived from smart metering real-time data was identified, characterized and implemented in a software prototype. This prototype is a critical asset that must be evaluated to ensure its compliance with the requirements specified during system analysis and design and to verify its acceptance according to the end users’ needs. The prototype evaluation followed software engineering best practices, through a set of software tests carried out in close collaboration between consumers, utility stakeholders and software developers. Once the individual components (data management and analytical) had passed unit testing, integration tests took place. The first phase of integration testing used off-line historical smart metering data, whereas the second phase (on-line testing) uses near real-time data. This paper presents the standardized method designed in the project to carry out the off-line testing. The off-line testing method is based on a test scenario/test case approach and includes functional testing (i.e., tests to verify that functional requirements are met) and non-functional testing (i.e., tests to verify the quality of the software in terms of, for example, usability, security or compatibility). For each evaluated application, test scenarios were designed along with the corresponding test cases (in total, more than 50 test scenarios and 90 test cases).
Success criteria for determining whether an observed behaviour of the system is correct, and key performance indicators (KPI) to assess the achievement of those success criteria, were also defined for each test case step. This paper also presents results of the method’s application, which included collecting historical data from a full-scale case study, feeding this data to the prototype, analysing the results and evaluating the KPI in order to identify corrections and improvements.
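The test scenario/test case structure described in the abstract — test cases made of steps, each with a success criterion, plus a KPI measuring how many criteria are met — can be sketched as follows. This is a minimal illustration only: all names, data, and checks are hypothetical and are not taken from the iWIDGET prototype.

```python
# Hypothetical sketch of an off-line test case: historical data is fed to a
# (stubbed) analytical function, each step's success criterion is checked,
# and a KPI summarizes the outcome. Not the actual iWIDGET implementation.

def kpi_pass_rate(results):
    """KPI: fraction of test-case steps whose observed behaviour was correct."""
    return sum(results) / len(results)

def run_test_case(steps):
    """Evaluate each step's success criterion against its input data."""
    return [step["check"](step["input"]) for step in steps]

def daily_consumption(readings_litres):
    """Stand-in for a prototype analytical component."""
    return sum(readings_litres)

test_case = {
    "scenario": "Daily consumption computed from historical smart-meter data",
    "steps": [
        {   # Success criterion: computed total matches the expected value
            "input": [10.0, 12.5, 7.5],
            "check": lambda r: daily_consumption(r) == 30.0,
        },
        {   # Success criterion: an empty reading set is handled gracefully
            "input": [],
            "check": lambda r: daily_consumption(r) == 0.0,
        },
    ],
}

results = run_test_case(test_case["steps"])
print(kpi_pass_rate(results))  # 1.0 when every success criterion is met
```

In this sketch a KPI of 1.0 signals full compliance for the test case; values below a chosen threshold would flag the corrections and improvements the paper refers to.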
Appears in Collections: DHA/NES - Conference communications and journal articles

Files in This Item:
CIP22.pdf (Main document, 899.66 kB, Adobe PDF)
