This paper describes an extended Kalman filter (EKF) based algorithm for the fusion of monocular vision measurements, inertial rate sensor measurements, and camera motion. The motion of the camera between successive images provides a baseline for range computation by triangulation. The depth estimation accuracy is strongly affected by the relative geometry of the observer and the feature point, by the measurement accuracy of the observer motion parameters, and by the line of sight to the feature point. The simulation study investigates how the estimation accuracy is affected by linear and angular velocity measurement errors, camera noise, and the observer path. These results yield requirements for the instrumentation and the observation scenarios. It was found that, under favorable conditions, the error in distance estimation does not exceed 2% of the distance to the feature point.
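As a minimal illustration of the triangulation step described above, the following Python sketch recovers a feature point from two camera positions and their unit line-of-sight vectors by least squares; the data, function names, and the direct least-squares formulation are illustrative assumptions rather than the paper's EKF implementation.

import numpy as np

def triangulate(p1, d1, p2, d2):
    # Closest point to the two bearing rays p_i + t_i * d_i (least squares).
    A = np.column_stack((d1, -d2))            # 3x2 system in (t1, t2)
    t, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    x1 = p1 + t[0] * d1                       # closest points on each ray
    x2 = p2 + t[1] * d2
    return 0.5 * (x1 + x2)

# Toy example: the camera translates 1 m along x; feature at (2, 0, 5).
feature = np.array([2.0, 0.0, 5.0])
p1, p2 = np.zeros(3), np.array([1.0, 0.0, 0.0])
d1 = (feature - p1) / np.linalg.norm(feature - p1)
d2 = (feature - p2) / np.linalg.norm(feature - p2)
print(triangulate(p1, d1, p2, d2))            # approximately [2, 0, 5]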
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This article investigates the achievable rates of a bidirectional full-duplex radio link between a base station and a mobile user in a cellular network. In particular, we analyze the relationship between accurate self-interference channel estimation, which is required for effective digital interference cancellation, and spectrally efficient simultaneous two-way data transmission and reception, which is the objective of developing the full-duplex technology in the first place. Channel estimation and data transmission are inherently coupled: the former benefits from half-duplex slots, during which there is no distortion from the data signal of interest, while the latter needs full-duplex slots to approach the anticipated ideal-case doubling of spectral efficiency over plain half-duplex operation. The analysis is conducted by deriving expressions for the achievable data rates and calculating the corresponding rate regions with the help of realistic waveform simulations that incorporate transceiver hardware impairments, which leave residual self-interference despite effective cancellation. The findings indicate that increased flexibility, in the form of half-duplex communication periods of adjustable length, results in increased overall throughput. Hybrid half/full-duplex operation is thus useful not only for improving the performance of digital self-interference cancellation but also for supporting varying, unbalanced downlink-uplink traffic ratios.
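For intuition only, the sketch below evaluates a crude sum-rate model in which a fraction of the frame is full-duplex, with residual self-interference modelled as additional noise, and the remainder is half-duplex; the Shannon-capacity formula and all numerical values are assumptions made for illustration and do not reproduce the waveform simulations used in the paper.

import numpy as np

def sum_rate(alpha, snr_db=20.0, residual_si_db=10.0):
    # Sum spectral efficiency [bit/s/Hz] when a fraction alpha of the frame
    # is full-duplex and (1 - alpha) is half-duplex (shared by DL and UL).
    snr = 10 ** (snr_db / 10)
    si = 10 ** (residual_si_db / 10)          # residual SI power over noise
    r_fd = 2 * np.log2(1 + snr / (1 + si))    # both directions active
    r_hd = np.log2(1 + snr)                   # one direction at a time
    return alpha * r_fd + (1 - alpha) * r_hd

for alpha in (0.0, 0.5, 1.0):
    print(alpha, round(sum_rate(alpha), 2))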
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
With an increasing number of service providers in the cloud market, the competition among them is also increasing. Each provider attempts to attract customers by providing a high-quality service at the lowest possible cost while still making a profit. Often, cloud resources are advertised and brokered in a spot-market style, i.e., traded for immediate delivery. This paper proposes an architecture for a brokerage model specifically for multi-cloud resource spot markets that integrates the resource brokerage function across several cloud providers. We use a tuple space architecture to facilitate coordination. This architecture specifically supports multiple cloud providers selling unused resources in the spot market. To find the best match between customer requirements and provider offers, offers are matched with regard to the lowest cost available to the customer in the market at the time of the request. The key role of this architecture is to provide the coordination techniques, built on a tuple space, adapted to the cloud spot market.
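As a hedged sketch of the lowest-cost matching described above, the snippet below selects the cheapest offer that satisfies a customer's requirements; the plain Python list stands in for the tuple space, and the field names and figures are hypothetical.

# Hypothetical offer tuples: (provider, cpu_cores, memory_gb, price_per_hour)
offers = [
    ("providerA", 4, 16, 0.20),
    ("providerB", 8, 32, 0.35),
    ("providerC", 4, 16, 0.17),
]

def match(offers, min_cores, min_mem):
    # Return the cheapest offer satisfying the customer requirements, if any.
    feasible = [o for o in offers if o[1] >= min_cores and o[2] >= min_mem]
    return min(feasible, key=lambda o: o[3]) if feasible else None

print(match(offers, min_cores=4, min_mem=16))   # -> ("providerC", 4, 16, 0.17)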
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Today's dominant design for the Internet of Things (IoT) is a Cloud-based system, where devices transfer their data to a back-end and in return receive instructions on how to act. This view is challenged when delays caused by communication with the back-end become an obstacle for IoT applications with, for example, stringent timing constraints. In contrast, Fog Computing approaches, where devices communicate and orchestrate their operations collectively and closer to the origin of data, lack adequate tools for programming secure interactions between humans and their proximate devices at the network edge. This paper fills the gap by applying the Action-Oriented Programming (AcOP) model to this task. While the AcOP model was originally proposed for Cloud-based infrastructures, it is here redesigned around the notions of coalescence and disintegration, which enable devices to collectively and autonomously execute their operations in the Fog by serving humans in a peer-to-peer fashion. The Cloud's role has been minimized: it is leveraged as a development and deployment platform.
EXT="Mäkitalo, Niko"
EXT="Mikkonen, Tommi"
Research output: Contribution to journal › Article › Scientific › peer-review
This paper proposes, for the first time without using any linearization or order reduction, an adaptive and model-based discharge pressure control design for variable displacement axial piston pumps (VDAPPs), whose dynamical behavior is highly nonlinear and can be described by a fourth-order differential equation. A rigorous stability proof, with asymptotic convergence, is given for the entire system. In the proposed novel controller design method, specifically designed stabilizing terms constitute the essential core that cancels out all the stability-preventing terms. The experimental results reveal that rapid parameter adaptation significantly improves feedback signal tracking precision compared to a known-parameter controller design. In the comparative experiments, the adaptive controller design demonstrates state-of-the-art discharge pressure control performance, opening up the possibility of energy consumption reductions in hydraulic systems driven with VDAPPs.
Research output: Contribution to journal › Article › Scientific › peer-review
Context: Several companies, particularly Small and Medium-sized Enterprises (SMEs), often face software maintenance issues due to the lack of Software Quality Assurance (SQA). SQA is a complex task that requires substantial effort and expertise, often not available in SMEs. Several SQA models, including maintenance prediction models, have been defined in research papers. However, these models are commonly defined as "one-size-fits-all" and are mainly targeted at big industry, which can afford software quality experts who undertake the data interpretation tasks. Objective: In this work, we propose an approach to continuously monitor the software operated by end users, automatically collecting issues and recommending possible fixes to developers. The continuous exception monitoring system will also serve as a knowledge base to suggest a set of quality practices to avoid (re)introducing bugs into the code. Method: First, we identify a set of SQA practices applicable to SMEs, based on their main constraints. Then, we identify a set of prediction techniques, including regressions and machine learning, keeping track of bugs and exceptions raised by the released software. Finally, we provide each company with a tailored SQA model, automatically obtained from the company's bug/issue history. Developers are then provided with the quality models through a set of plug-ins for integrated development environments. These suggest a set of SQA actions that should be undertaken in order to maintain a certain quality level and to remove the most severe issues with the lowest possible effort. Conclusion: The collected measures will be made available as a public dataset, so that researchers can also benefit from the project's results. This work is developed in collaboration with local SMEs and existing Open Source projects and communities.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
High-level synthesis (HLS) tools aim to produce hardware designs from software descriptions, with the goal of lowering the bar for FPGA usage by software engineers. Despite their recent progress, however, HLS tools still require FPGA-target-specific pragmas and other modifications to the originally processor-targeting source code. Customized soft-core-based overlay architectures provide a software-programmable layer on top of the FPGA fabric. The benefit of this approach is that a platform-independent compiler target is presented to the programs, which lowers the porting burden, and repurposing the same configuration online is natural: one simply switches the executed program. The main drawback, as with any overlay architecture, is the additional overhead the overlay imposes on resource consumption and maximum operating frequency. In this paper we show how, by utilizing the efficient structure of Transport-Triggered Architectures (TTAs), soft cores can be customized automatically to benefit from the flexible FPGA fabric while still presenting a comfortable software layer to the users. Compared to previously published non-specialized TTA soft cores, the results indicate equal or better execution times, while the program image size is reduced by up to 49% and overall resource utilization improves by 10% to 60%.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Real-life target recognition often requires appropriate processing of unknown targets, i.e., targets that the automatic target recognition system has not been trained to identify. These targets may nevertheless be of interest and should therefore be analyzed further. In this paper, we propose a novel framework for analyzing radar measurements of unknown targets in order to incorporate them into a hierarchical target class taxonomy for target recognition. Besides the preliminary information, a vital part of the analysis of a radar measurement is the comparison between the measured signature and the signatures of the known target types and categories. We use the results of such analysis to indicate potential spots in the class taxonomy where the unknown target could be added. The framework allows unknown target types that have been previously observed to be identified when they are encountered again. We demonstrate the proposed framework through an experiment using real data from a multi-radar system. In the experiments, we show the feasibility of our approach by examining target recognition in two cases: with and without our framework. We find that the proposed framework enables enhanced processing of unknown targets in radar target recognition.
INT=comp,"Kauhanen, Mikko"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Designing applications for scalability is key to improving their performance in hybrid and cluster computing. Scheduling code to utilize parallelism is difficult, particularly when dealing with data dependencies, memory management, data motion, and processor occupancy. The Hybrid Task Graph Scheduler (HTGS) is an abstract execution model, framework, and API that increases programmer productivity when implementing hybrid workflows for multi-core and multi-GPU systems. HTGS manages dependencies between tasks, represents CPU and GPU memories independently, overlaps computations with disk I/O and memory transfers, keeps multiple GPUs occupied, and uses all available compute resources. Through these abstractions, data motion and memory are explicit; this makes data locality decisions more accessible. To demonstrate the HTGS application program interface (API), we present implementations of two example algorithms: (1) a matrix multiplication that shows how easily task graphs can be used; and (2) a hybrid implementation of microscopy image stitching that reduces code size by ≈43% compared to a manually coded hybrid workflow implementation and showcases the minimal overhead of task graphs in HTGS. Both of the HTGS-based implementations show good performance. In image stitching, the HTGS implementation achieves performance similar to that of the hybrid workflow implementation. Matrix multiplication with HTGS achieves 1.3x and 1.8x speedup over the multi-threaded OpenBLAS library for 16k × 16k and 32k × 32k matrices, respectively.
Research output: Contribution to journal › Article › Scientific › peer-review
ALMARVI is a collaborative European research project funded by Artemis, involving 16 industrial and academic partners across 4 countries, working together to address various computational challenges in image and video processing in 3 application domains: healthcare, surveillance, and mobile. This paper is an editorial for a special issue discussing the integrated system created by the partners to serve as a cross-domain solution for the project. The paper also introduces the partner articles published in this special issue, which discuss the various technological developments achieved within ALMARVI spanning all system layers, from hardware to applications. We illustrate the challenges faced within the project based on use cases from the three targeted application domains, and show how these relate to the 4 main project objectives, which address 4 challenges faced by high-performance image and video processing systems: massive data rates, low power consumption, composability, and robustness. We present a system stack composed of algorithms, design frameworks, and platforms as a solution to these challenges. Finally, the use cases from the three application domains are mapped onto the system stack solution and evaluated based on their performance against each of the 4 ALMARVI objectives.
Research output: Contribution to journal › Article › Scientific › peer-review
In recent years, there has been increasing adoption of various Learning Management Systems (LMS) in higher education in Sub-Saharan countries. Despite the perceived benefits of these systems in addressing the challenges facing the education sector in the region, studies show that the majority of them tend to fail, either partially or totally. This paper presents a model for evaluating LMS deployed in Higher Education Institutions in Sub-Saharan countries by adopting and extending the updated DeLone and McLean information system success model. The proposed model and instrument have been validated through a survey of 200 students enrolled in various courses offered via the Moodle LMS at the University of Dar es Salaam, Tanzania. The findings of this study will help those involved in the implementation of LMS in higher education in Sub-Saharan countries to evaluate their existing systems and/or to prepare corrective measures and strategies to avoid future LMS failures.
Research output: Contribution to journal › Article › Scientific › peer-review
This paper presents an analysis of an efficient parallel implementation of the active-set Newton algorithm (ASNA), which is used to estimate the nonnegative weights of linear combinations of the atoms in a large-scale dictionary to approximate an observation vector by minimizing the Kullback–Leibler divergence between the observation vector and the approximation. The performance of ASNA against other state-of-the-art methods has been demonstrated in previous works. The implementations analysed in this paper have been developed in C, using parallel programming techniques to obtain better performance on multicore architectures than the original MATLAB implementation. A hardware analysis is also performed to check the influence of CPU frequency and the number of CPU cores on the different implementations proposed. The new implementations allow the ASNA algorithm to tackle real-time problems thanks to the reduction in execution time obtained.
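To make the optimization target concrete, the sketch below evaluates the generalized Kullback–Leibler divergence between an observation and its nonnegative dictionary approximation; this is only the objective that ASNA minimizes, written in Python with made-up data, not the active-set Newton solver itself.

import numpy as np

def kl_divergence(v, B, x, eps=1e-12):
    # Generalized KL divergence D(v || Bx) for a nonnegative observation v,
    # dictionary B (atoms as columns) and nonnegative weights x.
    approx = B @ x + eps
    v = v + eps
    return float(np.sum(v * np.log(v / approx) - v + approx))

rng = np.random.default_rng(0)
B = rng.random((100, 500))                      # 500 atoms of dimension 100
x = np.maximum(rng.standard_normal(500), 0.0)   # sparse nonnegative weights
v = B @ x
print(kl_divergence(v, B, x))                   # ~0 for the exact weights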
Research output: Contribution to journal › Article › Scientific › peer-review
Systems that record students' programming process have become increasingly popular during the last decade. The granularity of the stored data varies across these systems and ranges from storing only the final state, e.g. a solution, to storing fine-grained event streams, e.g. every key-press made while working on a task. Researchers who study such data make assumptions based on the granularity. If no fine-grained data exist, the baseline assumption is that a student proceeds in a linear fashion from one recorded state to the next. In this work, we analyze three different granularities of data: (1) submissions, (2) snapshots (i.e. save, compile, run, and test events), and (3) keystroke events. Our study provides insight into the quantity of data lost when storing data at a specific granularity and shows how the lost data vary depending on previous programming experience and the type of programming assignment.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Dynamic simulation of the distance to a physical surface could promote the development of new inexpensive tools for blind and visually impaired users. The StickGrip is a haptic device consisting of a Wacom pen input device augmented with a motorized penholder. The goal of the research presented in this paper was to assess the accuracy and usefulness of the new pen-based interaction technique when the position and displacement of the penholder in relation to the pen tip provided haptic feedback to the user about the distance to the physical or virtual surface of interaction. The aim was to examine how accurately people are able (1) to align randomly deformed virtual surfaces to a flat surface and (2) to adjust a number of surface samples having a randomly assigned curvature to a template having a given, fixed curvature. These questions were approached by measuring both the values of the adjusted parameters and parameters of human performance, such as the ratio between inspection time and control time spent by the participants to complete the matching task with the StickGrip device. The test of the pen-based interaction technique was conducted in the absence of visual feedback, so that the subject could rely only on proprioception and the kinesthetic sense. The results are expected to be useful for alternative visualization of, and interaction with, complex topographic and mathematical surfaces, artwork, and modeling.
Research output: Contribution to journal › Article › Scientific › peer-review
Several major advances in Cell and Molecular Biology have been made possible by recent advances in live-cell microscopy imaging. To support these efforts, automated image analysis methods such as cell segmentation and tracking during a time-series analysis are needed. To this aim, one important step is the validation of such image processing methods. Ideally, the "ground truth" should be known, which is possible only by manually labelling images or by using artificially produced images. To simulate artificial images, we have developed a platform for simulating biologically inspired objects, which generates bodies with various morphologies and kinetics that can aggregate to form clusters. Using this platform, we tested and compared four tracking algorithms: Simple Nearest-Neighbour (NN), NN with Morphology, and two DBSCAN-based methods. We show that Simple NN works well for small object velocities, while the others perform better at higher velocities and when clustering occurs. Our platform for generating benchmark images to test image analysis algorithms is openly available at http://griduni.uninova.pt/Clustergen/ClusterGen-v1.0.zip.
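For reference, a minimal sketch of the Simple Nearest-Neighbour linking step is given below: each object centroid in the current frame is greedily associated with the closest unused centroid of the previous frame. The distance threshold, data layout, and greedy strategy are assumptions for illustration, not the exact implementation compared in the paper.

import numpy as np

def nn_link(prev_centroids, curr_centroids, max_dist=10.0):
    # Greedy nearest-neighbour association between consecutive frames.
    # Returns a list of (prev_index or None, curr_index) pairs.
    links, used = [], set()
    for j, c in enumerate(curr_centroids):
        d = np.linalg.norm(prev_centroids - c, axis=1)
        i = int(np.argmin(d))
        if d[i] <= max_dist and i not in used:
            links.append((i, j)); used.add(i)
        else:
            links.append((None, j))             # new or unmatched object
    return links

prev = np.array([[0.0, 0.0], [5.0, 5.0]])
curr = np.array([[0.5, 0.2], [5.2, 4.9], [20.0, 20.0]])
print(nn_link(prev, curr))                      # [(0, 0), (1, 1), (None, 2)]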
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This short empirical paper investigates a snapshot of about two million files from a continuously updated big data collection maintained by F-Secure for security intelligence purposes. By further augmenting the snapshot with open data covering about half a million files, the paper examines two questions: (a) what is the shape of the probability distribution characterizing the relative share of malware files among all files distributed from web-facing Internet domains, and (b) what is the distribution shaping the popularity of malware files? A bimodal distribution is proposed as an answer to the former question, while a graph-theoretical definition of the popularity concept indicates a long-tailed, extreme value distribution. With these two questions and the answers thereto, the paper contributes to attempts to understand the large-scale characteristics of malware at the grand population level, that is, at the level of the whole Internet.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Research output: Contribution to journal › Editorial › Scientific
Open Source Software development often resembles Agile models. In this paper, we report on our experience in using SCRUM for the development of an Open Source Software Java tool. With this work, we aim at answering the following research questions: 1) is it possible to switch successfully to the SCRUM methodology in an ongoing Open Source Software development process? 2) is it possible to apply SCRUM when the developers are geographically distributed? 3) does SCRUM help improve the quality of the product and the productivity of the process? We answer these questions by identifying a set of measures and by comparing the data we collected before and after the introduction of SCRUM. The results seem to show that SCRUM can be introduced and used in an ongoing, geographically distributed Open Source Software process and that it helps control the development process better.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
A diversity of wireless technologies will collaborate to support fifth-generation (5G) communication networks with their demanding applications and services. Despite decisive progress in many enabling solutions, next-generation cellular deployments may still suffer from a glaring lack of bandwidth due to inefficient utilization of radio spectrum, which calls for immediate action. To this end, several capable frameworks have recently emerged to help mobile network operators (MNOs) leverage the abundant frequency bands that are utilized lightly by other incumbents. Along these lines, the recent Licensed Shared Access (LSA) regulatory framework allows for controlled sharing of spectrum between an incumbent and a licensee, such as an MNO, which coexist geographically. This powerful concept has been the subject of several early technology demonstrations that confirm its implementation feasibility. However, the full potential of LSA-based spectrum management can only become available if it is empowered to operate dynamically and at high space-time-frequency granularity. Complementing the prior efforts, in this work we outline the functionality that is required by the LSA system to achieve the much-needed flexible operation, and report on the results of our respective live trial that employs a full-fledged commercial-grade cellular network deployment. Our practical results are instrumental in facilitating more dynamic bandwidth sharing and thus promise to improve the degree of spectrum utilization in future 5G systems without compromising the service quality of their users.
INT=elt, "Ponomarenko-Timofeev, Aleksey"
Research output: Contribution to journal › Article › Scientific › peer-review
The Liquid Software metaphor refers to software that can operate seamlessly across multiple devices owned by one or multiple users. Liquid Software applications can take advantage of the computing, storage and communication resources available on all the devices owned by the user. Liquid Software applications can also dynamically migrate from one device to another, following the user’s attention and usage context. The key design goal in Liquid Software development is to minimize the additional efforts arising from multiple device ownership (e.g., installation, synchronization and general maintenance of personal computers, smartphones, tablets, home and car displays, and wearable devices), while keeping the users in full control of their devices, applications and data. In this paper we present the design space for Liquid Software, categorizing and discussing the most important architectural dimensions and technical choices. We also provide an introduction and comparison of two frameworks implementing Liquid Software capabilities in the context of the World Wide Web.
EXT="Mikkonen, Tommi"
EXT="Taivalsaari, Antero"
Research output: Contribution to journal › Article › Scientific › peer-review
Background. Architectural smells and code smells are symptoms of bad code or design that can cause different quality problems, such as faults, technical debt, or difficulties with maintenance and evolution. Some studies show that code smells and architectural smells often appear together in the same file. The correlation between code smells and architectural smells, however, is not yet clear; some studies on a limited set of projects have claimed that architectural smells can be derived from code smells, while other studies claim the opposite. Objective. The goal of this work is to understand whether architectural smells are independent of code smells or can be derived from a code smell or from one category of them. Method. We conducted a case study analyzing the correlations among 19 code smells, six categories of code smells, and four architectural smells. Results. The results show that architectural smells are correlated with code smells only in a very low number of occurrences and therefore cannot be derived from code smells. Conclusion. Architectural smells are independent of code smells and therefore deserve special attention from researchers, who should investigate their actual harmfulness, and from practitioners, who should consider whether and when to remove them.
Research output: Contribution to journal › Article › Scientific › peer-review
Large-scale perturbation databases, such as Connectivity Map (CMap) or the Library of Integrated Network-based Cellular Signatures (LINCS), provide enormous opportunities for computational pharmacogenomics and drug design. One reason for this is that, in contrast to classical pharmacology focusing on one target at a time, the transcriptomics profiles provided by CMap and LINCS open the door for systems biology approaches at the pathway and network level. In this article, we provide a review of recent developments in computational pharmacogenomics with respect to CMap and LINCS and related applications.
Research output: Contribution to journal › Article › Scientific › peer-review
Artificial Intelligence (AI) is one of the current emerging technologies. In the history of computing, AI has been in a similar role before, almost every decade since the 1950s, when the programming language Lisp was invented and used to implement self-modifying applications. The second time AI was described as one of the frontier technologies was in the 1970s, when Expert Systems (ES) were developed. A decade later, AI was again at the forefront when the Japanese government initiated its research and development effort to develop an AI-based computer architecture called the Fifth Generation Computer System (FGCS). Currently, in the 2010s, AI is once again at the frontier in the form of (self-)learning systems manifesting in robot applications, smart hubs, intelligent data analytics, etc. What is the reason for the cyclic reincarnation of AI? This paper gives a brief description of the history of AI and answers this question. The current AI “cycle” has the capability to change the world in many ways. In the context of the CE conference, it is important to understand the changes it will cause in education, in the skills expected in different professions, and in society at large.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this paper, a contiguous carrier aggregation scheme for downlink transmissions in an inband full-duplex cellular network is analyzed. In particular, we consider a scenario where the base station transmits over a wider bandwidth than the mobiles, while both parties still use the same center frequency. As a result, the mobiles must cancel their own self-interference (SI) over a wider bandwidth than in a situation where the uplink and downlink frequency bands are symmetric. Furthermore, due to the inherent RF impairments in the mobile devices, nonlinear modeling of the self-interference is required in the digital domain to fully cancel it over the whole reception bandwidth. The feasibility of the proposed scheme is demonstrated with real-life RF measurements using two different bandwidths. In both cases, it is shown that the SI can be attenuated below the receiver noise floor over the whole reception bandwidth.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The difficulty of learning tasks is a major factor in learning, as is the feedback given to students. Even automatic feedback should ideally be influenced by student-dependent factors such as task difficulty. We report on a preliminary exploration of such indicators of programming assignment difficulty that can be automatically detected for each student from source code snapshots of the student's evolving code. Using a combination of different metrics emerged as a promising approach. In the future, our results may help provide students with personalized automatic feedback.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
RVC-CAL is an actor-based dataflow language that enables concurrent, modular, and portable description of signal processing algorithms. RVC-CAL programs can be compiled to implementation languages such as C/C++ and VHDL to produce software or hardware implementations. This paper presents a methodology for the automatic discovery of piecewise-deterministic (quasi-static) execution schedules for software implementations of RVC-CAL programs. Quasi-static scheduling moves computational burden from the run-time system to design-time compilation and thus makes signal processing systems more efficient. The presented methodology divides the RVC-CAL program into segments and hierarchically detects quasi-static behavior in each segment: first at the level of individual actors and then at the level of the whole segment. Finally, a code generator creates a quasi-statically scheduled version of the program. The impact of segment-based quasi-static scheduling is demonstrated by applying the methodology to several RVC-CAL programs, which execute up to 58% faster after applying the presented methodology.
Research output: Contribution to journal › Article › Scientific › peer-review
We analyze barriers to task-based information access in molecular medicine, focusing on research tasks, which provide task performance sessions of varying complexity. Molecular medicine is a relevant domain because it offers thousands of digital resources as the information environment. Data were collected through shadowing of real work tasks. Thirty work task sessions were analyzed and the barriers in them identified. The barriers were classified by their character (conceptual, syntactic, and technological) and by their context of appearance (work task, system integration, or system). The work task sessions were also grouped into three complexity classes, and the frequency of barriers of varying types across task complexity levels was analyzed. Our findings indicate that although most of the barriers occur at the system level, a notable share of barriers arise in the integration and work task contexts. These barriers might be overcome through attention to the integrated use of multiple systems, at least for the most frequent uses. This can be done by means of standardization and harmonization of the data and by taking the requirements of the work tasks into account in system design and development, because information access is seldom an end in itself but rather serves to reach the goals of work tasks.
Research output: Contribution to journal › Article › Scientific › peer-review
This paper provides an analytic performance evaluation of the bit error rate (BER) of underlay decode-and-forward cognitive networks with best relay selection over Rayleigh multipath fading channels. A generalized BER expression valid for arbitrary operational parameters is first presented in the form of a single integral, which is then employed to determine the diversity order and coding gain for different best relay selection scenarios. Furthermore, a novel and highly accurate closed-form approximate BER expression is derived for the specific case where the relays are located relatively close to each other. The presented results are convenient to handle both analytically and numerically, and they are shown to be in good agreement with results from respective computer simulations. In addition, it is shown that, as in the case of conventional relaying networks, the behaviour of underlay relaying cognitive networks with best relay selection depends significantly on the number of involved relays.
Research output: Contribution to journal › Article › Scientific › peer-review
Blockchain technology is currently penetrating different areas of the modern ICT community. Most of the devices involved in blockchain-related processes are specially designed and target only the mining aspect. At the same time, wearable and mobile devices may also become part of blockchain operation, especially while charging. This paper considers the possibility of using a large number of constrained devices to support the operation of the blockchain. The utilization of such devices is expected to improve the efficiency of the system and to attract a more substantial number of users. The authors propose a novel consensus algorithm based on a combination of Proof-of-Work (PoW), Proof-of-Activity (PoA), and Proof-of-Stake (PoS). The paper first reviews the existing strategies and then describes the cryptographic primitives developed to build a blockchain involving mobile devices. A brief numerical evaluation of the designed system is also provided.
EXT="Zhidanov, Konstantin"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The paper reports a test exploring how retrieved documents are browsed. The access point to the documents was varied - starting either from the beginning of the document or from the point where relevant information is located - to find out how much browsing and context the users need to judge relevance. Test results reveal different within-document browsing patterns.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The software industry is in the middle of a major change in how the services provided by all kinds of information systems are offered to users. This change has been initiated by customers who no longer want to carry the responsibilities and risks they previously did as system owners. Consequently, software vendors need to find a way to change their mind-set from software developer to service provider, so that they can constantly satisfy the changing and new needs of their customers. The transformation from license-based software development to a SaaS offering poses challenges related not only to technical issues but, to a great extent, also to organisational and even mental issues. We reflect on the experiences of this transformation gathered from two software companies and, based on these, present prerequisites and guidelines for the transformation to succeed. In conclusion, with the SaaS model many of the principles manifested by the agile movement can and should be followed closely, and the advantages gained with the SaaS model are very close to the objectives set by the agile manifesto.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper considers a scenario in which we have multiple pre-trained detectors for detecting an event and a small dataset for training a combined detection system. We build the combined detector as a Boolean function of thresholded detector scores and implement it as a binary classification cascade. The cascade structure is computationally efficient because it provides the possibility of early termination. With the proposed Boolean combination function, the computational load of classification is reduced whenever the function becomes determinate before all the component detectors have been evaluated. We also propose an algorithm that selects all the thresholds needed for the component detectors within the proposed Boolean combination. We present results on two audio-visual datasets, which demonstrate the efficiency of the proposed combination framework. We achieve state-of-the-art accuracy with substantially reduced computation time in a laughter detection task, and our algorithm finds better thresholds for the component detectors within the Boolean combination than the other algorithms found in the literature.
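As an illustration of the early-termination idea, the sketch below lazily evaluates a purely conjunctive combination of thresholded detector scores and stops as soon as the Boolean function is determinate; the AND combination, thresholds, and detector stubs are placeholders rather than the combination function or thresholds learned by the proposed algorithm.

def cascade_and(detectors, thresholds, sample):
    # Conjunction of thresholded detector scores with early termination:
    # stop as soon as one detector falls below its threshold.
    for detect, thr in zip(detectors, thresholds):
        if detect(sample) < thr:                # outcome is now determinate
            return False
    return True

# Toy component detectors returning scores in [0, 1].
audio_detector = lambda x: x["audio_score"]
visual_detector = lambda x: x["visual_score"]
sample = {"audio_score": 0.9, "visual_score": 0.4}
print(cascade_and([audio_detector, visual_detector], [0.5, 0.5], sample))  # False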
Research output: Contribution to journal › Article › Scientific › peer-review
Context: Global software development (GSD), although now a norm in the software industry, carries with it enormous challenges, mostly regarding communication and coordination. These challenges are amplified when there is a need to transfer knowledge between sites, particularly when software artifacts assigned to different sites depend on each other. The design of the software architecture and the associated task dependencies play a major role in reducing some of these challenges. Objective: The current literature does not provide a cohesive picture of how the distributed nature of software development is taken into account during the design phase: what to avoid, and what works in practice. The objective of this paper is to gain an understanding of software architecting in the context of GSD, in order to develop a framework of challenges and solutions that can be applied in both research and practice. Method: We conducted a systematic literature review (SLR) that synthesises (i) the challenges which GSD imposes on software architecture design, and (ii) recommended practices to alleviate these challenges. Results: We produced a comprehensive set of guidelines for performing software architecture design in GSD based on 55 selected studies. Our framework comprises nine key challenges with 28 related concerns, and nine recommended practices with 22 related concerns, for software architecture design in GSD. These challenges and practices were mapped to a thematic conceptual model with the following concepts: Organization (Structure and Resources), Ways of Working (Architecture Knowledge Management, Change Management and Quality Management), Design Practices, Modularity, and Task Allocation. Conclusion: The synthesis of findings resulted in a thematic conceptual model of the problem area, a mapping of the key challenges to practices, and a concern framework providing concrete questions to aid the design process in a distributed setting. This is a first step towards creating more concrete architecture design practices and guidelines.
Research output: Contribution to journal › Article › Scientific › peer-review
The unprecedented proliferation of smart devices together with novel communication, computing, and control technologies have paved the way for A-IoT. This development involves new categories of capable devices, such as high-end wearables, smart vehicles, and consumer drones aiming to enable efficient and collaborative utilization within the smart city paradigm. While massive deployments of these objects may enrich people's lives, unauthorized access to said equipment is potentially dangerous. Hence, highly secure human authentication mechanisms have to be designed. At the same time, human beings desire comfortable interaction with the devices they own on a daily basis, thus demanding authentication procedures to be seamless and user-friendly, mindful of contemporary urban dynamics. In response to these unique challenges, this work advocates for the adoption of multi-factor authentication for A-IoT, such that multiple heterogeneous methods - both well established and emerging - are combined intelligently to grant or deny access reliably. We thus discuss the pros and cons of various solutions as well as introduce tools to combine the authentication factors, with an emphasis on challenging smart city environments. We finally outline the open questions to shape future research efforts in this emerging field.
Research output: Contribution to journal › Article › Scientific › peer-review
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Professional
Digital Rights Management (DRM) is an important business enabler for the digital content industry. Rights exporting is one of the crucial tasks in providing DRM interoperability. Trustworthy rights exporting is required by both end users and DRM systems. We propose a set of principles for trustworthy rights exporting by analysing the characteristics of rights exporting. Based on these principles, we provide suggestions on how trustworthy rights exporting should be performed.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this preliminary research, we examine the suitability of hierarchical strategies of multi-class support vector machines for the classification of induced pluripotent stem cell (iPSC) colony images. iPSC technology offers great possibilities for safe, patient-specific drug therapy without ethical problems. However, growing iPSCs is a sensitive process and abnormalities may occur during growth. These abnormalities need to be recognized, and the problem reduces to image classification. We have a collection of 80 iPSC colony images, each pre-labeled by an expert as bad, good, or semigood. We use intensity histograms as features for classification and evaluate histograms computed from the whole image and from the colony area only, yielding two datasets. We perform two feature reduction procedures on both datasets. In classification, we examine how different hierarchical constructions affect the classification. We perform a thorough evaluation; the best accuracy, around 54%, was obtained with the linear kernel function. Between different hierarchical structures there are, in many cases, no significant differences in the results. As a result, intensity histograms are a good baseline for the classification of iPSC colony images, but more sophisticated feature extraction and reduction methods, together with other classification methods, need to be researched in the future.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The programming capabilities of the Web can be viewed as an afterthought, designed originally by non-programmers for relatively simple scripting tasks. This has resulted in a cornucopia of partially overlapping options for building applications. Depending on one’s viewpoint, a generic standards-compatible web browser supports three, four, or five built-in application rendering and programming models. In this paper, we give an overview and comparison of these built-in client-side web application architectures in light of established software engineering principles. We also reflect on our earlier work in this area and provide an expanded discussion of the current situation. In conclusion, while the dominance of the base HTML/CSS/JS technologies cannot be ignored, we expect Web Components and WebGL to gain more popularity as the world moves towards increasingly complex web applications, including systems supporting virtual and augmented reality.
EXT="Taivalsaari, Antero"
EXT="Mikkonen, Tommi"
jufoid=71106
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
A divergence-based similarity between two color images is presented, based on the Jensen-Shannon divergence, to measure the similarity of their color distributions. Subjective assessment experiments were conducted to obtain mean opinion scores (MOS) for the test images. It was found that the divergence similarity and the MOS values showed statistically significant correlations.
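A minimal sketch of a Jensen-Shannon-based similarity between two normalized color histograms is shown below; the histogram binning and the conversion from divergence to a similarity score are assumptions for illustration and may differ from the authors' exact definition.

import numpy as np

def jensen_shannon(p, q, eps=1e-12):
    # Jensen-Shannon divergence between two discrete distributions
    # (log base 2, so the value lies in [0, 1]).
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Flattened color histograms (e.g. 8 bins per RGB channel) of two images.
rng = np.random.default_rng(1)
h1 = rng.random(512)
h2 = h1 + 0.1 * rng.random(512)
print(1.0 - jensen_shannon(h1, h2))             # similarity close to 1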
JUFOID=72850
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Background: Molecular descriptors have been extensively used in the field of structure-oriented drug design and structural chemistry. They have been applied in QSPR and QSAR models to predict ADME-Tox properties, which specify essential features of drugs. Molecular descriptors capture chemical and structural information, but investigating their interpretation and meaning remains very challenging. Results: This paper introduces a large-scale database of molecular descriptors called COMMODE, containing more than 25 million compounds originating from PubChem. About 2500 DRAGON descriptors have been calculated for all compounds and integrated into this database, which is accessible through a web interface at http://commode.i-med.ac.at.
Research output: Contribution to journal › Article › Scientific › peer-review
Context: Eliciting requirements from customers is a complex task. In Agile processes, the customer talks directly with the development team and often reports requirements in an unstructured way. The requirements elicitation process is up to the developers, who split the requirements into user stories by means of different techniques. Objective: We aim to compare the requirements decomposition process of an unstructured process and three Agile processes, namely XP, Scrum, and Scrum with Kanban. Method: We conducted a multiple case study with a replication design, based on the project idea of an entrepreneur, a designer with no experience in software development. Four teams developed the project independently, using four different development processes. The requirements were elicited by the teams from the entrepreneur, who acted as product owner and was available to talk with the four groups during the project. Results: The teams decomposed the requirements using different techniques, based on the selected development process. Conclusion: Scrum with Kanban and XP resulted in the most effective processes from different points of view. Unexpectedly, decomposition techniques commonly adopted in traditional processes are still used in Agile processes, which may reduce project agility and performance. Therefore, we believe that decomposition techniques need to be addressed to a greater extent, from both the practitioners’ and the researchers’ points of view.
jufoid=71106
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
We present a comparative split-half resampling analysis of various data driven feature selection and classification methods for the whole brain voxel-based classification analysis of anatomical magnetic resonance images. We compared support vector machines (SVMs), with or without filter based feature selection, several embedded feature selection methods and stability selection. While comparisons of the accuracy of various classification methods have been reported previously, the variability of the out-of-training sample classification accuracy and the set of selected features due to independent training and test sets have not been previously addressed in a brain imaging context. We studied two classification problems: 1) Alzheimer’s disease (AD) vs. normal control (NC) and 2) mild cognitive impairment (MCI) vs. NC classification. In AD vs. NC classification, the variability in the test accuracy due to the subject sample did not vary between different methods and exceeded the variability due to different classifiers. In MCI vs. NC classification, particularly with a large training set, embedded feature selection methods outperformed SVM-based ones with the difference in the test accuracy exceeding the test accuracy variability due to the subject sample. The filter and embedded methods produced divergent feature patterns for MCI vs. NC classification that suggests the utility of the embedded feature selection for this problem when linked with the good generalization performance. The stability of the feature sets was strongly correlated with the number of features selected, weakly correlated with the stability of classification accuracy, and uncorrelated with the average classification accuracy.
EXT="Tohka, Jussi"
Research output: Contribution to journal › Article › Scientific › peer-review
In this paper we present a design structure matrix (DSM) for a complex elevator system. The DSM was created with system experts to enable the solving of complex system development problems via a product DSM. The data are intended to be used as a case study in a DSM design sprint and were created to show the diversity of findings that can be ascertained from a single DSM. In the spirit of open science, we present both the DSM and the design sprint to enable other researchers to replicate, reproduce, or otherwise build on the same source of data.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
A method to adjust the mean-squared-error (MSE) value for coded video quality assessment is investigated in this work by incorporating subjective human visual experience. First, we propose a linear model between the mean opinion score (MOS) and a logarithmic function of the MSE value of coded video under a range of coding rates. This model is validated by experimental data. With further simplification, the model contains only one parameter, to be determined by video characteristics. Next, we adopt a machine learning method to learn this parameter. Specifically, we select features to classify video content into groups, where videos in each group are more homogeneous in their characteristics. Then, a proper model parameter can be trained and predicted within each video group. Experimental results on a coded video database are given to demonstrate the effectiveness of the proposed algorithm.
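A hedged sketch of fitting the linear MOS-versus-log(MSE) model by ordinary least squares is given below; the sample values and the two-parameter form MOS ≈ a*log(MSE) + b are placeholders for illustration and do not represent the paper's data or its one-parameter simplification.

import numpy as np

# Hypothetical (MSE, MOS) pairs for one video coded at several rates.
mse = np.array([12.0, 25.0, 60.0, 140.0, 300.0])
mos = np.array([4.6, 4.1, 3.3, 2.5, 1.8])

# Fit MOS ~ a * log(MSE) + b by least squares.
A = np.column_stack((np.log(mse), np.ones_like(mse)))
(a, b), *_ = np.linalg.lstsq(A, mos, rcond=None)
print(f"slope a = {a:.2f}, intercept b = {b:.2f}")
predicted_mos = a * np.log(mse) + b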
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The paper considers the possible use of computer vision systems for INS aiding. Two methods of obtaining navigation data from an image sequence are analyzed. The first method uses the features of architectural elements in indoor and urban conditions to generate object attitude parameters. The second method is based on the extraction of general features in the image and is more widely applied. Besides the orientation parameters, the second method estimates the object displacement and thus can be used as a visual odometry technique. The described algorithms can be used to develop small-sized MEMS navigation systems that operate efficiently in urban conditions.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
At present, cellular coverage in many rural areas remains intermittent. Mobile operators may not be willing to deploy expensive network infrastructure to support low-demand regions. For that reason, solutions for the rapid deployment of base stations in areas with insufficient or damaged operator infrastructure are emerging. Utilization of unmanned aerial vehicles (UAVs) or drones serving as data relays holds significant promise for delivering on-demand connectivity as well as providing public safety services or aiding in recovery after communication infrastructure failures caused by natural disasters. The use of UAVs in provisioning high-rate radio connectivity and bringing it to remote locations is also envisioned as a potential application for fifth-generation (5G) communication systems. In this study, we introduce a prototype solution for an aerial base station, where connectivity between a drone and a base station is provided via a directional microwave link. Our prototype is equipped with a steering mechanism driven by a dedicated algorithm to support such connectivity. Our experimental results demonstrate early-stage connectivity and signal strength measurements that were gathered with our prototype. Our results are also compared against the free-space model. These findings support the emerging vision of aerial base stations as part of the 5G ecosystem and beyond.
EXT="Pyattaev, Alexander"
Research output: Contribution to journal › Article › Scientific › peer-review
Passenger transport is becoming more and more connected and multimodal. Instead of just taking a series of vehicles to complete a journey, the passenger is actually interacting with a connected cyber-physical social (CPS) transport system. In this study, we present a case study where big data from various sources are combined and analyzed to support and enhance the transport system in the Tampere region. Different types of static and real-time data sources and transportation-related APIs are investigated. The goal is to find ways in which big data and collaborative networks can be used to improve the CPS transport system itself and the passenger satisfaction related to it. The study shows that even though the exploitation of big data does not directly improve the state of the physical transport infrastructure, it helps in utilizing more of its capacity. Second, the use of big data makes the transport system more attractive to passengers.
jufoid=84293
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This publication addresses two bottlenecks in the construction of minimal coverability sets of Petri nets: the detection of situations where the marking of a place can be converted to ω, and the manipulation of the set A of maximal ω-markings that have been found so far. For the former, a technique is presented that consumes very little time in addition to what maintaining A consumes. It is based on Tarjan's algorithm for detecting maximal strongly connected components of a directed graph. For the latter, a data structure is introduced that resembles BDDs and Covering Sharing Trees, but has additional heuristics designed for the present use. Results from a few experiments are shown. They demonstrate significant savings in running time and varying savings in memory consumption compared to an earlier state-of-the-art technique.
Research output: Contribution to journal › Article › Scientific › peer-review
Purpose: The current study aims to investigate whether different measures related to online psychosocial well-being and online behavior correlate with social media fatigue.
Design/methodology/approach: To understand the antecedents and consequences of social media fatigue, the stressor-strain-outcome (SSO) framework is applied. The study consists of two cross-sectional surveys that were organized with young-adult students. Study A was conducted with 1,398 WhatsApp users (aged 19 to 27 years), while Study B was organized with 472 WhatsApp users (aged 18 to 23 years).
Findings: Intensity of social media use was the strongest predictor of social media fatigue. Online social comparison and self-disclosure were also significant predictors of social media fatigue. The findings also suggest that social media fatigue further contributes to a decrease in academic performance.
Originality/value: This study builds upon the limited yet growing body of literature on a theme highly relevant for scholars, practitioners as well as social media users. The current study focuses on examining different causes of social media fatigue induced through the use of a highly popular mobile instant messaging app, WhatsApp. The SSO framework is applied to explore and establish empirical links between stressors and social media fatigue.
Research output: Contribution to journal › Article › Scientific › peer-review
Dictionary learning is usually approached by looking at the support of the sparse representations. Recent years have shown results in dictionary improvement by investigating the cosupport via the analysis-based cosparse model. In this paper we present a new cosparse learning algorithm for orthogonal dictionary blocks that provides significant dictionary recovery improvements and representation error shrinkage. Furthermore, we show the beneficial effects of using this algorithm inside existing methods based on building the dictionary as a structured union of orthonormal bases.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This article examines some of the ethical issues that engineers face in developing bio-protection systems for smart buildings. An innovative approach based on four different containment strategies is used to identify these issues. Subsequent analysis shows that, whilst smart buildings have the potential to prioritize the safety of the group over that of individuals, the practical and ethical implementation of such containment strategies would require systems to account for the uncertainty over the clinical state of each individual occupant.
Research output: Other conference contribution › Paper, poster or abstract › Scientific
This paper introduces the Resource Interface ontology intended to formally capture hardware interface information of production resources. It also proposes an interface matchmaking method, which uses this information to judge if two resources can be physically connected with each other. The matchmaking method works on two levels of detail, coarse and fine. The proposed Resource Interface ontology and matchmaking method can be utilised during production system design or reconfiguration by system integrators or end users. They will benefit from fast and automatic resource searches over large resource catalogues. At the end of the paper, a validation of the method is provided with a test ontology.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Full use of the parallel computation capabilities of present and expected CPUs and GPUs requires use of vector extensions. Yet many actors in data flow systems for digital signal processing have internal state (or, equivalently, an edge that loops from the actor back to itself) that impose serial dependencies between actor invocations that make vectorizing across actor invocations impossible. Ideally, issues of inter-thread coordination required by serial data dependencies should be handled by code written by parallel programming experts that is separate from code specifying signal processing operations. The purpose of this paper is to present one approach for so doing in the case of actors that maintain state. We propose a methodology for using the parallel scan (also known as prefix sum) pattern to create algorithms for multiple simultaneous invocations of such an actor that results in vectorizable code. Two examples of applying this methodology are given: (1) infinite impulse response filters and (2) finite state machines. The correctness and performance of the resulting IIR filters are studied.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Full use of the parallel computation capabilities of present and expected CPUs and GPUs requires use of vector extensions. Yet many actors in data flow systems for digital signal processing have internal state (or, equivalently, an edge that loops from the actor back to itself) that impose serial dependencies between actor invocations that make vectorizing across actor invocations impossible. Ideally, issues of inter-thread coordination required by serial data dependencies should be handled by code written by parallel programming experts that is separate from code specifying signal processing operations. The purpose of this paper is to present one approach for so doing in the case of actors that maintain state. We propose a methodology for using the parallel scan (also known as prefix sum) pattern to create algorithms for multiple simultaneous invocations of such an actor that results in vectorizable code. Two examples of applying this methodology are given: (1) infinite impulse response filters and (2) finite state machines. The correctness and performance of the resulting IIR filters and one class of FSMs are studied.
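To make the scan idea concrete, the sketch below (our own illustration, not the authors' code) recasts a first-order IIR recurrence y[n] = a*y[n-1] + x[n] as a scan over affine maps: because composition of such maps is associative, the prefix combination can be evaluated by any serial or parallel (vectorized) scan.

```python
# Illustrative sketch: a first-order IIR filter y[n] = a*y[n-1] + x[n]
# recast as a scan over affine maps (m, c), where applying (m, c) to a
# state y yields m*y + c. Composition of such maps is associative, so
# the prefix combination below could equally be done by a parallel scan.
def combine(first, second):
    m1, c1 = first
    m2, c2 = second
    # apply `first`, then `second`: m2*(m1*y + c1) + c2
    return (m2 * m1, m2 * c1 + c2)

def iir_first_order_scan(a, x, y0=0.0):
    outputs = []
    prefix = (1.0, 0.0)                 # identity map
    for xn in x:
        prefix = combine(prefix, (a, xn))
        m, c = prefix
        outputs.append(m * y0 + c)      # y[n] = M_n*y0 + C_n
    return outputs

def iir_first_order_direct(a, x, y0=0.0):
    y, outputs = y0, []
    for xn in x:
        y = a * y + xn
        outputs.append(y)
    return outputs

x = [1.0, 2.0, 0.5, -1.0]
assert iir_first_order_scan(0.9, x) == iir_first_order_direct(0.9, x)
```

Replacing the serial loop in iir_first_order_scan with a parallel scan over the (m, c) pairs is what removes the serial dependency between invocations; the direct form is shown only to check the result.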
Research output: Contribution to journal › Article › Scientific › peer-review
In data warehousing, business-driven development defines data requirements to fulfill reporting needs. A data warehouse stores current and historical data in one single place. Data warehouse architecture consists of several layers, each with its own purpose. A staging layer is a data storage area that assists data loading, a data vault modelled layer is the persistent storage that integrates data and stores the history, whereas the publish layer presents data using a vocabulary that is familiar to the information users. Following a process that is driven by business requirements and starts from the publish layer structure creates a situation where the manual work requires a specialist who knows the data vault model. Our goal is to reduce the number of entities that can be selected in a transformation so that the individual developer does not need to know the whole solution, but can focus on a subset of entities (partial schema). In this paper, we present two different schema matchers, one based on attribute names, and another based on data flow mapping information. Schema matching based on data flow mappings is a novel addition to the current schema matching literature. Through the example of Northwind, we show how these two different matchers affect the formation of a partial schema for transformation source entities. Based on our experiment with Northwind, we conclude that combining schema matching algorithms produces correct entities in the partial schema.
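As a rough illustration of the first matcher type, the sketch below ranks candidate source entities by the similarity of their attribute names to those of a target entity; the similarity measure, scoring and the Northwind-style names are our own simplifications, not the matchers implemented in the paper.

```python
# Illustrative attribute-name matcher (a simplification, not the paper's
# implementation): rank candidate source entities for a target entity by
# the overlap of their attribute names.
from difflib import SequenceMatcher

def name_similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def entity_score(target_attrs, candidate_attrs):
    # For every target attribute take its best-matching candidate attribute,
    # then average the scores.
    if not target_attrs:
        return 0.0
    best = [max(name_similarity(t, c) for c in candidate_attrs)
            for t in target_attrs]
    return sum(best) / len(best)

def rank_candidates(target_attrs, candidates):
    """candidates: dict entity_name -> list of attribute names."""
    scores = {name: entity_score(target_attrs, attrs)
              for name, attrs in candidates.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical Northwind-style example
target = ["CustomerID", "CompanyName", "City"]
candidates = {
    "Hub_Customer": ["CustomerID", "LoadDate", "RecordSource"],
    "Sat_Customer": ["CustomerID", "CompanyName", "City", "Country"],
    "Hub_Order": ["OrderID", "LoadDate", "RecordSource"],
}
print(rank_candidates(target, candidates))
```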
jufoid=71106
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The transport sector is constantly growing, and so are its complexity and energy consumption. One way for city authorities to reduce the effort and the volume of data needed to evaluate and monitor the energy efficiency of the sector is to use Key Performance Indicators (KPIs). This paper describes a set of KPIs to measure and track energy efficiency in the transport sector. The KPIs summarized in this paper were identified based on a literature review of mobility projects/strategies/policies that had been implemented in cities around the world. Future applications, which are presented at the end of this article, will give a better understanding of the system and its components.
AUX=ase,"Mantilla, R. M Fernanda"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Dataflow programming has received increasing attention in the age of multicore and heterogeneous computing. Modular and concurrent dataflow program descriptions enable highly automated approaches for design space exploration, optimization and deployment of applications. A great advance in dataflow programming has been the recent introduction of the RVC-CAL language. Having been standardized by the ISO, the RVC-CAL dataflow language provides a solid basis for the development of tools, design methodologies and design flows. This paper proposes a novel design flow for mapping RVC-CAL dataflow programs to parallel and heterogeneous execution platforms. Through the proposed design flow the programmer can describe an application in the RVC-CAL language and map it to multi- and many-core platforms, as well as GPUs, for efficient execution. The functionality and efficiency of the proposed approach is demonstrated by a parallel implementation of a video processing application and a run-time reconfigurable filter for telecommunications. Experiments are performed on GPU and multicore platforms with up to 16 cores, and the results show that for high-performance applications the proposed design flow provides up to 4 × higher throughput than the state-of-the-art approach in multicore execution of RVC-CAL programs.
Research output: Contribution to journal › Article › Scientific › peer-review
Browsers have become the most common communication channel. We spend hours using them to get news and communicate with friends, far more time than we spend communicating face-to-face. WWW-based communication and content creation for the web will be among the most common jobs in future work life for students specializing in software engineering. We expect our screens to be colorful and animated, so students should understand the technologies that are used, for example, for painting a jumping Mario on the screen. But the massive flow of new software engineering ideas, technologies and frameworks, which appear at an ever-increasing tempo, tends to make students passive receivers of descriptions of new menus and commands without giving them any possibility to investigate and understand what is behind these menus and commands, killing their natural curiosity. There should be time to experiment, compare formats and technologies, and investigate their relations. The presentation describes experiments used to investigate how different formats for describing animation in an HTML5 document influence animation rendering speed.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Context: DevOps is considered important in the ability to frequently and reliably update a system in operational state. DevOps presumes cross-functional collaboration and automation between software development and operations. DevOps adoption and implementation in companies is non-trivial due to required changes in technical, organisational and cultural aspects. Objectives: This exploratory study presents detailed descriptions of how DevOps is implemented in practice. The context of our empirical investigation is web application and service development in small and medium sized companies. Method: A multiple-case study was conducted in five different development contexts with successful DevOps implementations, since its benefits, such as quick releases and minimum deployment errors, were achieved. Data was mainly collected through interviews with 26 practitioners and observations made at the companies. Data was analysed by first coding each case individually using a set of predefined themes and thereafter performing a cross-case synthesis. Results: Our analysis yielded the following results: (i) the software development team attaining ownership and responsibility to deploy software changes in production is crucial in DevOps; (ii) toolchain usage and support in deployment pipeline activities accelerates the delivery of software changes, bug fixes and handling of production incidents; (iii) the delivery speed to production is affected by context factors, such as manual approvals by the product owner; (iv) a steep learning curve for new skills is experienced by both software developers and operations staff, who also have to cope with working under pressure. Conclusion: Our findings contribute to the overall understanding of the DevOps concept, practices and its perceived impacts, particularly in small and medium sized companies. We discuss two practical implications of the results.
EXT="Mikkonen, Tommi"
Research output: Contribution to journal › Article › Scientific › peer-review
In this paper, we present a high data rate implementation of a digital predistortion (DPD) algorithm on a modern mobile multicore CPU containing an on-chip GPU. The proposed implementation is capable of running in real-time, thanks to the execution of the predistortion stage inside the GPU, and the execution of the learning stage on a separate CPU core. This configuration, combined with the low complexity DPD design, allows for more than 400 Msamples/s sample rates. This is sufficient for satisfying 5G new radio (NR) base station radio transmission specifications in the sub-6 GHz bands, where signal bandwidths up to 100 MHz are specified. The linearization performance is validated with RF measurements on two base station power amplifiers at 3.7 GHz, showing that the 5G NR downlink emission requirements are satisfied.
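For readers unfamiliar with the general structure of such systems, the following is a hedged, minimal sketch of a memoryless polynomial predistorter trained with an indirect-learning least-squares fit. It only illustrates the split between a learning stage and a predistortion stage; the low-complexity DPD design, the GPU/CPU partitioning and the real-time aspects of the paper are not reproduced, and the PA model and parameters below are made up.

```python
# Minimal illustrative sketch of a memoryless polynomial predistorter with
# indirect learning (least squares). Not the paper's algorithm, parameters
# or GPU implementation; the PA model below is made up.
import numpy as np

def basis(x, order=7):
    # Odd-order memoryless polynomial basis: x, |x|^2*x, |x|^4*x, ...
    return np.column_stack([x * np.abs(x) ** (2 * k)
                            for k in range((order + 1) // 2)])

def estimate_postinverse(pa_in, pa_out, gain, order=7):
    # Learning stage: fit a post-inverse mapping the gain-normalized PA
    # output back to the PA input; its coefficients are copied to the
    # predistorter (indirect learning architecture).
    y = pa_out / gain
    coeffs, *_ = np.linalg.lstsq(basis(y, order), pa_in, rcond=None)
    return coeffs

def predistort(x, coeffs, order=7):
    # Predistortion stage: apply the learned polynomial to new input samples.
    return basis(x, order) @ coeffs

# Toy example with a synthetic, mildly nonlinear "PA".
rng = np.random.default_rng(0)
x = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) * 0.2
pa = lambda s: s - 0.05 * np.abs(s) ** 2 * s          # hypothetical PA model
coeffs = estimate_postinverse(x, pa(x), gain=1.0)
linearized = pa(predistort(x, coeffs))
# Distortion power without and with predistortion (the latter should shrink).
print(np.mean(np.abs(pa(x) - x) ** 2), np.mean(np.abs(linearized - x) ** 2))
```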
INT=comp,"Meirhaeghe, Alexandre"
Research output: Contribution to journal › Article › Scientific › peer-review
This article presents results on how students became engaged and motivated when using digital storytelling in knowledge creation in Finland, Greece and California. The theoretical framework is based on sociocultural theories. Learning is seen as a result of dialogical interactions between people, substances and artefacts. This approach has been used in the creation of the Global Sharing Pedagogy (GSP) model for the empirical study of student levels of engagement in learning twenty-first century skills. This model presents a set of conceptual mediators for student-driven knowledge creation, collaboration, networking and digital literacy. Data from 319 students were collected using follow-up questionnaires after the digital storytelling project. Descriptive statistical methods, correlations, analysis of variance and regression analysis were used. The mediators of the GSP model strongly predicted student motivation and enthusiasm as well as their learning outcomes. The digital storytelling project, using the technological platform Mobile Video Experience (MoViE), was very successful in teaching twenty-first century skills.
Research output: Contribution to journal › Article › Scientific › peer-review
Increasingly, researchers have come to acknowledge that consumption activities entail both utilitarian and hedonic components. Whereas utilitarian consumption accentuates the achievement of predetermined outcomes typical of cognitive consumer behavior, its hedonic counterpart relates to affective consumer behavior in dealing with the emotive and multisensory aspects of the shopping experience. Consequently, while utilitarian consumption activities appeal to the rationality of customers in inducing their intellectual buy-in of the shopping experience, customers’ corresponding emotional buy-in can only be attained through the presence of hedonic consumption activities. The same can be said for online shopping. Because the online shopping environment is characterized by the existence of an IT-enabled web interface that acts as the focal point of contact between customers and vendors, its design should embed utilitarian and hedonic elements to create a holistic shopping experience. Building on Expectation Disconfirmation Theory (EDT), this study advances a research model that not only delineates between customers’ utilitarian and hedonic expectations for online shopping but also highlights how these expectations can be best served through functional and esthetic performance, respectively. Furthermore, we introduce online shopping experience (i.e., transactional frequency) as a moderator affecting not only how customers form utilitarian and hedonic expectations but also how they evaluate the functional and esthetic performances of e-commerce sites. The model is then empirically validated via an online survey questionnaire administered on a sample of 303 respondents. Theoretical contributions and pragmatic implications to be gleaned from our research model and its subsequent empirical validation are discussed.
Research output: Contribution to journal › Article › Scientific › peer-review
Since the birth of computers and networks, fuelled by pervasive computing, the Internet of Things and ubiquitous connectivity, the amount of data stored and transmitted has grown exponentially through the years. Due to this demand, new storage solutions are needed. One promising medium is DNA, as it provides numerous advantages, including the ability to store dense information while achieving long-term reliability. However, the question as to how the data can be retrieved from a DNA-based archive still remains. In this paper, we aim to address this question by proposing a new storage solution that relies on bacterial nanonetwork properties. Our solution allows digitally encoded DNA to be stored into motility-restricted bacteria, which compose an archival architecture of clusters, and to be later retrieved by engineered motile bacteria, whenever reading operations are needed. We conducted extensive simulations in order to determine the reliability of data retrieval from motility-restricted storage clusters placed spatially at different locations. Aiming to assess the feasibility of our solution, we have also conducted wet lab experiments that show how bacterial nanonetworks can effectively retrieve a simple message, such as "Hello World", by conjugation with motility-restricted bacteria, and finally mobilize towards a target point for delivery.
Research output: Contribution to journal › Article › Scientific › peer-review
Background: Pull requests are a common practice for making contributions and reviewing them in both open-source and industrial contexts.
Objective: Our goal is to understand whether quality flaws such as code smells, anti-patterns, security vulnerabilities, and coding style violations in a pull request's code affect the chance of its acceptance when reviewed by a maintainer of the project.
Method: We conducted a case study among 28 Java open-source projects, analyzing the presence of 4.7 M code quality flaws in 36 K pull requests. We analyzed further correlations by applying logistic regression and six machine learning techniques. Moreover, we manually validated 10% of the pull requests to get further qualitative insights on the importance of quality issues in cases of acceptance and rejection.
Results: Unexpectedly, quality flaws measured by PMD turned out not to affect the acceptance of a pull request at all. As suggested by other works, other factors, such as the reputation of the maintainer and the importance of the delivered feature, might be more important than code quality in terms of pull request acceptance.
Conclusions: Researchers have already investigated the influence of the developers' reputation on pull request acceptance. This is the first work investigating code style violations and specifically PMD rules. We recommend that researchers further investigate this topic to understand whether different measures or different tools could provide more useful results.
EXT="Lenarduzzi, Valentina"
INT=comp,"Nikkola, Vili"
Research output: Contribution to journal › Article › Scientific › peer-review
Background: The migration from a monolithic system to microservices requires a deep refactoring of the system. Therefore, such a migration usually has a big economic impact and companies tend to postpone several activities during this process, mainly to speed up the migration itself, but also because of the demand for releasing new features.
Objective: We monitored the technical debt of an SME while it migrated from a legacy monolithic system to an ecosystem of microservices. Our goal was to analyze changes in the code technical debt before and after the migration to microservices.
Method: We conducted a case study analyzing more than four years of the history of a twelve-year-old project (280K Lines of Code) where two teams extracted five business processes from the monolithic system as microservices. For the study, we first analyzed the technical debt with SonarQube and then performed a qualitative study with company members to understand the perceived quality of the system and the motivation for possibly postponed activities.
Results: The migration to microservices helped to reduce the technical debt in the long run. Despite an initial spike in the technical debt due to the development of the new microservice, after a relatively short period of time the technical debt tended to grow slower than in the monolithic system.
EXT="Lenarduzzi, Valentina"
Research output: Contribution to journal › Review Article › Scientific › peer-review
More and more information systems (IS) are designed to address a blend of hedonic and utilitarian purposes, and hence become what information system scholars call today “dual systems.” The aim of this research is chiefly to provide a holistic perspective for research done regarding dual IS (i.e., what factors affect users’ adoption and post-adoption of these systems) in order to assess the state of knowledge in this area and to provide a reference point for system designers. To achieve this goal, we started out with a systematic literature review (35 articles), and analyzed the articles in terms of their theoretical background, constructs and findings. The results suggest that there is an increasing number of systems that are regarded as dual (e.g., gamified services, virtual worlds) and that the influential factors can be grouped according to the three dimensions of IS artefacts: information artefact, information technology artefact and social artefact.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The interpretive grounded theory (GT) study analyses information system (IS) enabled organizational change in two private sector organizations. These two organizations, which are long-term partners, were developing a new IS product for divergent markets. The data was gathered through 15 interviews, conducted at the phase of initial rollouts. The findings focus on the results of the theoretical coding phase, in which selective codes, referred to as change management activities, are related to each other. As a theoretical contribution, the dynamic structure presents how the change management activities appear differently, depending on a set of choices. Several paradoxical situations stemmed from inconsistencies and/or tensions, because the choices did not support the targeted change management activities. The study thus proposes that there is an increasing demand to analyze the sources of paradoxical situations. Paradoxical situations were identified in five opposing forces: long term vs. short term, macro vs. micro, past vs. future, centralized vs. distributed, and control vs. trust/self-organization. Some paradoxical situations arose because of the nature of the trust-based IS partnership, while others were socially constructed as a result of unintended consequences of actions in relation to the strategic goals. Managerial efforts are increasingly required for identifying paradoxical situations at an early stage and for considering the right balance for the opposing forces in the dynamic IS change process.
Research output: Contribution to journal › Article › Scientific › peer-review
Owing to a steadily increasing demand for efficient spectrum utilization as part of the fifth-generation (5G) cellular concept, it becomes crucial to revise the existing radio spectrum management techniques and provide more flexible solutions for the corresponding challenges. A new wave of spectrum policy reforms can thus be envisaged by producing a paradigm shift from static to dynamic orchestration of shared resources. The emerging Licensed Shared Access (LSA) regulatory framework enables flexible spectrum sharing between a limited number of users that access the same frequency bands, while guaranteeing better interference mitigation. In this work, an advanced user satisfaction-aware spectrum management strategy for dynamic LSA management in 5G networks is proposed to balance both the connected user satisfaction and the Mobile Network Operator (MNO) resource utilization. The approach is based on the MNO decision policy that combines both pricing and rejection rules in the implemented processes. Our study offers a classification built over several types of users, different corresponding attributes, and a number of MNO decision scenarios. Our investigations are built on the Criteria-Based Resource Management (CBRM) framework, which has been specifically designed to facilitate dynamic LSA management in 5G mobile networks. To verify the proposed model, the results (spectrum utilization, estimated Secondary User price for the future connection, and the user selection methodology in the case of the user rejection process) are validated numerically, and we draw important conclusions on the applicability of our approach, which may offer valuable guidelines for efficient radio spectrum management in highly dynamic and heterogeneous 5G environments.
Research output: Contribution to journal › Article › Scientific › peer-review
This study structures the ecosystem literature by using a bibliometric approach to analyse the theoretical roots of ecosystem studies. Several disciplines, such as innovation, management and software studies, have established their own streams in the ecosystem research. This paper reports the results of analysing 601 articles from the Thomson Reuters Web of Science database, and identifies ten separate research communities which have established their own thematic ecosystem disciplines. We show that five sub-communities have emerged inside the field of software ecosystems. The software ecosystem literature draws its theoretical background from (1) technical, (2) research methodology, (3) business, (4) management, and (5) strategy oriented disciplines. The results pave the way for future research by illustrating the existing and missing links and directions in the field of software ecosystems.
JUFOID=71106
EXT="Hyrynsalmi, Sami"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Technology orientation and coding are gaining momentum in Finnish curriculum planning for primary and secondary school. However, according to the existing plans, the scope of ICT teaching is limited to practical topics, e.g., how to drill basic control structures (if-then-else, for, while), without focusing on the high-level epistemological view of ICT. This paper proposes some key extensions to such plans, targeted at highlighting the epistemological factors of teaching rather than concrete means of strengthening the craftsmanship of coding. The proposed approach stems from qualitative data collected by interviewing ICT professionals (N=7, 4 males, 3 females) who have gained experience of industry needs while working as ICT professionals (avg=11.3 y, s=3.9 y). This work illustrates a holistic model of ICT teaching as well as suggests a set of new methods and tools.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Today, software teams can deploy new software versions to users at an increasing speed – even continuously. Although this has enabled faster responding to changing customer needs than ever before, the speed of automated customer feedback gathering has not yet blossomed out at the same level. For these purposes, the automated collecting of quantitative data about how users interact with systems can provide software teams with an interesting alternative. When starting such a process, however, teams are faced immediately with difficult decision making: What kind of technique should be used for collecting user-interaction data? In this paper, we describe the reasons for choosing specific collecting techniques in three cases and refine a previously designed selection framework based on their data. The study is a part of on-going design science research and was conducted using case study methods. A few distinct criteria which practitioners valued the most arose from the results.
JUFOID=71106
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
All-encompassing digitalization and the digital skills gap pressure the current school system to change. Accordingly, to 'digi-jump', the Finnish National Curriculum 2014 (FNC-2014) adds programming to K-12 math. However, we claim that the anticipated addition remains too vague and subtle. Instead, we should take into account education recommendations set by computer science organizations, such as ACM, and define clear learning targets for programming. Correspondingly, the whole math syllabus should be critically viewed in the light of these changes and the feedback collected from SW professionals and educators. These findings reveal an imbalance between supply and demand, i.e., what is over-taught versus under-taught, from the point of view of professional requirements. Critics claim an unnecessary surplus of calculus and differential equations, i.e., continuous mathematics. In contrast, the emphasis should shift more towards algorithms and data structures, flexibility in handling multiple data representations, logic; in summary - discrete mathematics.
EXT="Valmari, Antti"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
We investigate the decidability of the emptiness problem for three classes of distributed automata. These devices operate on finite directed graphs, acting as networks of identical finite-state machines that communicate in an infinite sequence of synchronous rounds. The problem is shown to be decidable in LOGSPACE for a class of forgetful automata, where the nodes see the messages received from their neighbors but cannot remember their own state. When restricted to the appropriate families of graphs, these forgetful automata are equivalent to classical finite word automata, but strictly more expressive than finite tree automata. On the other hand, we also show that the emptiness problem is undecidable in general. This already holds for two heavily restricted classes of distributed automata: those that reject immediately if they receive more than one message per round, and those whose state diagram must be acyclic except for self-loops. Additionally, to demonstrate the flexibility of distributed automata in simulating different models of computation, we provide a characterization of constraint satisfaction problems by identifying a class of automata with exactly the same computational power.
Research output: Contribution to journal › Article › Scientific › peer-review
The purpose of this research is to examine why organizations with similar objectives and environments at the outset obtain different outcomes when implementing enterprise architecture (EA) projects, and how the EA institutionalization process occurs. We conduct a qualitative multiple-case study using the lens of institutional theory through the analysis of intra-organization relations. The results show that the institutional logic of stakeholders can drive EA projects in different directions during the process of EA institutionalization, and thus organizations ultimately obtain different project outcomes. We contribute by extending the knowledge on EA institutionalization from a micro-level perspective, understanding and explaining how the organizational structure was shaped and influenced by stakeholders' relations, as well as providing insight into stakeholders' behaviors and activities during the process of EA institutionalization so that practitioners may improve the success rate of EA projects, particularly in the public sector.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper presents an experimental study aimed to investigate the impact of haptic feedback when trying to evaluate quantitatively the topographic heights depicted by height tints. In particular, the accuracy of detecting the heights has been evaluated visually and instrumentally by using the new StickGrip haptic device. The participants were able to discriminate the required heights specified in the scale bar palette and to detect these values within an assigned map region. It was demonstrated that the complementary haptic feedback increased the accuracy of visual estimation of the topographic heights by about 32%.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Local binary pattern (LBP) is a texture operator that is used in several different computer vision applications requiring, in many cases, real-time operation on multiple computing platforms. The introduction of new video standards has increased the typical resolutions and frame rates, which demand considerable computational performance. Since LBP is essentially a pixel operator that scales with image size, typical straightforward implementations are usually insufficient to meet these requirements. To identify the solutions that maximize the performance of real-time LBP extraction, we compare a series of different implementations in terms of computational performance and energy efficiency, while analyzing the different optimizations that can be made to reach real-time performance on multiple platforms and their different available computing resources. Our contribution includes an extensive survey of the LBP implementations in different platforms that can be found in the literature. To provide a more complete evaluation, we have implemented the LBP algorithms on several platforms, such as graphics processing units, mobile processors and a hybrid programming model image coprocessor. We have extended the evaluation of some of the solutions that can be found in previous work. In addition, we publish the source code of our implementations.
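As a point of reference for what the operator computes, below is a minimal NumPy baseline of the basic 3×3 (8-neighbour) LBP code per pixel; it is a straightforward reference implementation, not one of the optimized implementations compared in the paper.

```python
# Straightforward baseline of the basic 8-neighbour LBP operator
# (not one of the optimized implementations evaluated in the paper).
import numpy as np

def lbp_basic(image):
    """image: 2-D uint8/float array; returns LBP codes for interior pixels."""
    img = image.astype(np.int32)
    center = img[1:-1, 1:-1]
    # Neighbour offsets in a fixed clockwise order, each with its bit weight.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy: img.shape[0] - 1 + dy,
                        1 + dx: img.shape[1] - 1 + dx]
        codes |= ((neighbour >= center).astype(np.int32) << bit)
    return codes

img = np.array([[10, 20, 30],
                [40, 50, 60],
                [70, 80, 90]], dtype=np.uint8)
print(lbp_basic(img))   # single interior pixel, centre value 50
```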
Research output: Contribution to journal › Article › Scientific › peer-review
This paper highlights the performance of single path multiple access (SPMA) and discusses the performance comparison between higher order sectorization and SPMA in a macrocellular environment. The target of this paper is to emphasize the gains and significance of the novel concept of SPMA in achieving better and homogeneous SIR and enhanced system capacity in a macrocellular environment. This paper also explains the algorithm of SIR computation in SPMA. The results presented in this paper are based on sophisticated 3D ray tracing simulations performed with real-world 3D building data and site locations from Seoul, South Korea. A macrocellular environment dominated by indoor users was considered for the research purposes of this paper. It is found that by increasing the order of sectorization, SIR along with spectral efficiency degrades due to the increase in inter-cell interference. However, as a result of better area spectral efficiency due to the increased number of sectors (cells), higher order sectorization offers more system capacity compared to the traditional 3-sector site. Furthermore, SPMA shows an outstanding performance and significantly improves the SIR for the individual user over the whole coverage area, and also remarkably increases the system capacity. In the environment under consideration, the simulation results reveal that SPMA can offer approximately 424 times more system capacity compared to the reference case of a 3-sector site.
Research output: Contribution to journal › Article › Scientific › peer-review
Cyber-attacks have grown in importance to become a matter of national security. A growing number of states and organisations around the world have been developing defensive and offensive capabilities for cyber warfare. Security criteria are important tools for defensive capabilities of critical communications and information systems (CIS). Various criteria have been developed for designing, implementing and auditing CIS. The paper is based on work done from 2008 to 2016 at FICORA, the Finnish Communications Regulatory Authority. FICORA has actively participated in development and usage of three versions of Katakri, the Finnish national security audit criteria. Katakri is a tool for assessing the capability of an organisation to safeguard classified information. While built for governmental security authorities, usefulness for the private sector has been a central design goal of the criteria throughout its development. Experiences were gathered from hundreds of CIS security audits conducted against all versions of Katakri. Feedback has been gathered also from CIS audit target organisations including governmental authorities and the private sector, from other Finnish security authorities, from FICORA's accredited third party Information Security Inspection Bodies, and from public sources. This paper presents key lessons learnt and discusses recommendations for the design and implementation of security criteria. Security criteria have significant direct impacts on CIS design and implementation. Criteria design is always a trade-off between the varying goals of the target users. Katakri has tried to strike a balance between the different needs for security criteria. The paper recommends that criteria design should stem from a small set of strictly defined use cases. Trying to cover the needs of a wide variety of different use cases quickly renders the criteria useless as an assessment tool. In order to provide sufficient information assurance, security criteria should describe requirements on a reasonably concrete level, but also provide support for the security and risk management processes of the target users.
JUFOID=71915
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Reliability is a very important non-functional aspect of software systems and artefacts. In the literature, several definitions of software reliability exist, and several methods and approaches exist to measure the reliability of a software project. However, no works in the literature focus on the applicability of these methods in all the development phases of real software projects. In this paper, we describe the methodology we adopted during the S-CASE FP7 European Project to predict reliability both for the S-CASE platform and for the software artefacts automatically generated by using the S-CASE platform. Two approaches have been adopted to compute reliability: the first one is the Rome Lab Model, a well-adopted traditional approach in industry; the second one is an empirical approach defined by the authors in a previous work. An extensive dataset of results has been collected during all the phases of the project. The two approaches can complement each other to support the prediction of reliability during all the development phases of a software system, in order to facilitate project management from a non-functional point of view.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Dataflow descriptions have been used in a wide range of Digital Signal Processing (DSP) applications, such as multi-media processing and wireless communications. Among various forms of dataflow modeling, Synchronous Dataflow (SDF) is geared towards static scheduling of computational modules, which improves system performance and predictability. However, many DSP applications do not fully conform to the restrictions of SDF modeling. More general dataflow models, such as CAL (Eker and Janneck 2003), have been developed to describe dynamically-structured DSP applications. Such generalized models can express dynamically changing functionality, but lose the powerful static scheduling capabilities provided by SDF. This paper focuses on the detection of SDF-like regions in dynamic dataflow descriptions, in particular in the generalized specification framework of CAL. This is an important step for applying static scheduling techniques within a dynamic dataflow framework. Our techniques combine the advantages of different dataflow languages and tools, including CAL (Eker and Janneck 2003), DIF (Hsu et al. 2005) and CAL2C (Roquier et al. 2008). In addition to detecting SDF-like regions, we apply existing SDF scheduling techniques to exploit the static properties of these regions within enclosing dynamic dataflow models. Furthermore, we propose an optimized approach for mapping SDF-like regions onto parallel processing platforms such as multi-core processors.
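To illustrate the kind of static analysis that becomes possible once an SDF-like region has been identified, the sketch below computes the repetitions vector of a small SDF graph from its balance equations; this is textbook SDF machinery under the assumption of a connected graph, not the detection algorithm of the paper.

```python
# Textbook SDF machinery (not the paper's detection algorithm): solve the
# balance equations r[src]*prod == r[dst]*cons for the repetitions vector.
# Assumes a connected, consistent SDF graph.
from fractions import Fraction
from math import lcm

def repetitions_vector(actors, edges):
    """edges: list of (src, dst, produced, consumed) tuples."""
    rate = {a: None for a in actors}
    rate[actors[0]] = Fraction(1)
    changed = True
    while changed:                       # propagate rates along edges
        changed = False
        for src, dst, prod, cons in edges:
            if rate[src] is not None and rate[dst] is None:
                rate[dst] = rate[src] * prod / cons
                changed = True
            elif rate[dst] is not None and rate[src] is None:
                rate[src] = rate[dst] * cons / prod
                changed = True
            elif rate[src] is not None and rate[dst] is not None:
                if rate[src] * prod != rate[dst] * cons:
                    raise ValueError("inconsistent SDF graph")
    # Scale to the smallest positive integer vector.
    scale = lcm(*(r.denominator for r in rate.values()))
    return {a: int(r * scale) for a, r in rate.items()}

# Example: A --(2,3)--> B --(1,2)--> C  gives r = {A: 3, B: 2, C: 1}
print(repetitions_vector(["A", "B", "C"],
                         [("A", "B", 2, 3), ("B", "C", 1, 2)]))
```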
Research output: Contribution to journal › Article › Scientific › peer-review
Multiple radar sensors can be used in collaboration to detect targets in an area of surveillance. In this paper, we consider a case, in which a target is detected by a network of radars producing multiple observations of the radar signature of the target during a short time window. Given that this time window is sufficiently narrow, the observations have a dependence between them momentarily related to the change in the orientation of the target. We propose the fusion of these interdependent observations to aid target identification by forming a joint multi-dimensional histogram of the radar cross section (RCS). In addition, we investigate the criteria for windowing the observations to ensure adequate interdependence. We present a case study to demonstrate the ability of the proposed approach to distinguish between different targets using the measured RCS collected by a multi-radar surveillance system. Based on the experiment, we analyze the criteria for the dynamic windowing and discuss the computational requirements of the proposed concept.
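For concreteness, the following is a small sketch with synthetic numbers of forming a joint two-dimensional RCS histogram from the time-windowed observations of two radars; the windowing criteria, dimensionality and data of the paper are not reproduced here.

```python
# Illustrative sketch with synthetic numbers: a joint 2-D histogram of RCS
# observations from two radars within a short time window. The windowing
# criteria and histogram dimensionality of the paper are not reproduced.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic RCS observations (dBsm) from two radars over the same window,
# correlated through the (unknown) target orientation.
orientation = rng.uniform(0, np.pi, size=500)
rcs_radar_1 = 5 + 3 * np.cos(2 * orientation) + rng.normal(0, 0.5, 500)
rcs_radar_2 = 4 + 3 * np.cos(2 * orientation + 0.3) + rng.normal(0, 0.5, 500)

# Joint multi-dimensional histogram used as a target signature.
hist, edges_1, edges_2 = np.histogram2d(rcs_radar_1, rcs_radar_2,
                                        bins=16, density=True)
print(hist.shape)           # (16, 16) joint RCS signature
```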
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Recent advances in the Terrestrial Laser Scanner (TLS), in terms of cost and flexibility, have consolidated this technology as an essential tool for the documentation and digitalization of Cultural Heritage. However, once the TLS data has been used, it typically remains stored and goes to waste. How can highly accurate and dense point clouds of the built heritage be processed for reuse, especially to engage a broader audience? This paper aims to answer this question through a channel that minimizes the need for expert knowledge while enhancing interactivity with the as-built digital data: Virtual Heritage Dissemination through the production of VR content. Driven by the ProDigiOUs project's guidelines on data dissemination (EU funded), this paper advances a production path to transform the point cloud into virtual stereoscopic spherical images, taking into account the different visual features that produce depth perception, and especially those prompting visual fatigue while experiencing the VR content. Finally, we present the results of the Hiedanranta scans transformed into stereoscopic spherical animations.
jufoid=83846
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper presents a study design intended to investigate the privacy concerns and benefits related to the adoption of facial payment technology from a privacy calculus perspective. In the proposed research model, relative advantages, including convenience, availability, and security, are considered as perceived benefits in facial payment adoption and assumed to exert a positive influence on the adoption of facial payment. Privacy concerns, involving threat appraisals (perceived severity and vulnerability) and coping appraisals (response efficacy and self-efficacy), are articulated as perceived risks. Threat appraisals negatively affect people's intention to use facial payment technology, whereas coping appraisals positively influence their usage. Based on the privacy calculus framework, the benefit-risk analysis shapes people's adoption behavior of facial payment technology. In addition, personal innovativeness is set as a moderator in the proposed model. This research might contribute to the literature on privacy concerns and facial payment technology use, and offer practical implications for facial payment providers.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Professional
Partial order methods alleviate state explosion by considering only a subset of actions in each constructed state. The choice of the subset depends on the properties that the method promises to preserve. Many methods have been developed ranging from deadlock-preserving to CTL*-preserving and divergence-sensitive branching bisimilarity preserving. The less the method preserves, the smaller state spaces it constructs. Fair testing equivalence unifies deadlocks with livelocks that cannot be exited and ignores the other livelocks. It is the weakest congruence that preserves whether or not the system may enter a livelock that it cannot leave. We prove that a method that was designed for trace equivalence also preserves fair testing equivalence. We demonstrate its effectiveness on a protocol with a connection and data transfer phase. This is the first practical partial order method that deals with a practical fairness assumption.
Research output: Contribution to journal › Article › Scientific › peer-review
Farm detection using low-resolution satellite images is an important topic in digital agriculture. However, it has not received enough attention compared to the use of high-resolution images. Although high-resolution images are more efficient for the detection of land cover components, the analysis of low-resolution images is still important due to the low-resolution repositories of past satellite images used for time-series analysis, their free availability, and economic concerns. The current paper addresses the problem of farm detection using low-resolution satellite images. In digital agriculture, farm detection has a significant role for key applications such as crop yield monitoring. Two main categories of object detection strategies are studied and compared in this paper. First, a two-step semi-supervised methodology is developed using traditional manual feature extraction and modelling techniques; the developed methodology uses the Normalized Difference Moisture Index (NDMI), Grey Level Co-occurrence Matrix (GLCM), 2-D Discrete Cosine Transform (DCT) and morphological features, with a Support Vector Machine (SVM) for classifier modelling. In the second strategy, high-level features learnt from the massive filter banks of deep Convolutional Neural Networks (CNNs) are utilised. Transfer learning strategies are employed for pretrained Visual Geometry Group (VGG-16) networks. Results show the superiority of the high-level features for the classification of farm regions.
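As an example of one handcrafted feature from the first strategy, the sketch below computes the Normalized Difference Moisture Index (NDMI) from NIR and SWIR reflectance bands; the band data and threshold are illustrative, and the remaining features (GLCM, DCT, morphology) and the SVM/CNN models are not shown.

```python
# Illustrative computation of the Normalized Difference Moisture Index
# NDMI = (NIR - SWIR) / (NIR + SWIR); the band arrays, threshold and the
# remaining features/classifiers of the paper are not reproduced here.
import numpy as np

def ndmi(nir, swir, eps=1e-9):
    nir = nir.astype(np.float64)
    swir = swir.astype(np.float64)
    return (nir - swir) / (nir + swir + eps)

# Hypothetical reflectance tiles (values in [0, 1]).
rng = np.random.default_rng(0)
nir = rng.uniform(0.2, 0.6, size=(64, 64))
swir = rng.uniform(0.1, 0.5, size=(64, 64))

index = ndmi(nir, swir)
moist_mask = index > 0.1      # illustrative threshold for moist vegetation
print(index.mean(), moist_mask.mean())
```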
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
As it has evolved, the Internet has had to support a broadening range of networking technologies, business models and user interaction modes. Researchers and industry practitioners have realised that this trend necessitates a fundamental rethinking of approaches to network and service management. This has spurred significant research efforts towards developing autonomic network management solutions incorporating distributed self-management processes inspired by biological systems. Whilst significant advances have been made, most solutions focus on management of single network domains and the optimisation of specific management or control processes therein. In this paper we argue that a networking infrastructure providing a myriad of loosely coupled services must inherently support federation of network domains and facilitate coordination of the operation of various management processes for mutual benefit. To this end, we outline a framework for federated management that facilitates the coordination of the behaviour of bio-inspired management processes. Using a case study relating to distribution of IPTV content, we describe how Federal Relationship Managers realising our layered model of management federations can communicate to manage service provision across multiple application/storage/network providers. We outline an illustrative example in which storage providers are dynamically added to a federation to accommodate demand spikes, with appropriate content being migrated to those providers' servers under control of a bio-inspired replication process.
Research output: Contribution to journal › Article › Scientific › peer-review
We present a structural data set of the 20 proteinogenic amino acids and their amino-methylated and acetylated (capped) dipeptides. Different protonation states of the backbone (uncharged and zwitterionic) were considered for the amino acids as well as varied side chain protonation states. Furthermore, we studied amino acids and dipeptides in complex with divalent cations (Ca2+, Ba2+, Sr2+, Cd2+, Pb2+, and Hg2+). The database covers the conformational hierarchies of 280 systems in a wide relative energy range of up to 4 eV (390 kJ/mol), summing up to a total of 45,892 stationary points on the respective potential-energy surfaces. All systems were calculated on equal first-principles footing, applying density-functional theory in the generalized gradient approximation corrected for long-range van der Waals interactions. We show good agreement to available experimental data for gas-phase ion affinities. Our curated data can be utilized, for example, for a wide comparison across chemical space of the building blocks of life, for the parametrization of protein force fields, and for the calculation of reference spectra for biophysical applications.
Research output: Contribution to journal › Article › Scientific › peer-review
This paper summarizes the results of the NATO STO IST Panel's Exploratory Team IST-ET-101. The team studied the full-duplex radio technology as an innovative solution to deal with the scarce and congested electromagnetic frequency spectrum, especially in the VHF and UHF bands. This scarcity is in strong contrast to the growing bandwidth requirements generally and particularly in the military domain. The success of future NATO operations relies more than ever on new real-time services going hand in hand with increased data throughputs as well as with robustness against and compatibility with electronic warfare. Therefore, future tactical communication and electronic warfare technologies must aim at exploiting the spectral resources to the maximum while at the same time providing NATO with an advantage in the tactical environment.
jufoid=73201
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
To help developers during Scrum planning poker, in our previous work we ran a case study on a Moonlight Scrum process to understand whether it is possible to introduce functional size metrics to improve estimation accuracy and to measure the accuracy of expert-based estimation. The results of this original study showed that expert-based estimations are more accurate than those obtained by means of models calculated with functional size measures. To validate the results and to extend them to plain Scrum processes, we replicated the original study twice, applying an exact replication to two plain Scrum development processes. The results of this replicated study show that the effort estimated by the developers is very accurate and more accurate than that obtained through functional size measures. In particular, SiFP and IFPUG Function Points have low predictive power and thus do not help to improve the estimation accuracy in Scrum.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
We develop a game-theoretic semantics (GTS) for the fragment ATL+ of the alternating-time temporal logic ATL⁎, thereby extending the recently introduced GTS for ATL. We show that the game-theoretic semantics is equivalent to the standard compositional semantics of ATL+ with perfect-recall strategies. Based on the new semantics, we provide an analysis of the memory and time resources needed for model checking ATL+ and show that strategies of the verifier that use only a very limited amount of memory suffice. Furthermore, using the GTS, we provide a new algorithm for model checking ATL+ and identify a natural hierarchy of tractable fragments of ATL+ that substantially extend ATL.
Research output: Contribution to journal › Article › Scientific › peer-review
In this paper we study the fault tolerance of gene networks. We assume single gene knockouts and investigate the effect this kind of perturbation has on the communication between genes globally. For our study we use directed scale-free networks resembling gene networks, e.g., signaling or protein-protein interaction networks, and define a Markov process based on the network topology to model communication. This allows us to evaluate the spread of information in the network and, hence, detect differences due to single gene knockouts in the gene-gene communication asymptotically.
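A minimal sketch of this kind of model, under our own simplifying assumptions: the directed network is turned into a row-stochastic transition matrix (with a small uniform restart so the chain is well behaved), a knockout removes a gene's incoming and outgoing edges, and the asymptotic distribution of the walk is compared before and after.

```python
# Minimal sketch under simplifying assumptions (not the paper's exact model):
# a Markov chain on the directed gene network, compared before and after a
# single-gene knockout via its asymptotic (stationary) distribution.
import numpy as np

def transition_matrix(adj, restart=0.05):
    # Row-normalize; dangling nodes and a small uniform restart keep the
    # chain irreducible so a stationary distribution exists.
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    walk = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)
    return (1 - restart) * walk + restart / n

def stationary(P, iters=10_000):
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi

def knockout(adj, gene):
    adj = adj.copy()
    adj[gene, :] = 0.0
    adj[:, gene] = 0.0
    return adj

# Hypothetical 5-gene directed network (adjacency matrix).
A = np.array([[0, 1, 1, 0, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 0, 1, 1],
              [1, 0, 0, 0, 0],
              [0, 0, 0, 0, 0]], dtype=float)

pi_full = stationary(transition_matrix(A))
pi_ko = stationary(transition_matrix(knockout(A, gene=2)))
print(np.abs(pi_full - pi_ko).sum())   # change in asymptotic communication
```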
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this paper, we propose a novel extension of the extreme learning machine (ELM) algorithm for single-hidden layer feedforward neural network training that is able to incorporate subspace learning (SL) criteria into the optimization process followed for the calculation of the network's output weights. The proposed graph embedded ELM (GEELM) algorithm is able to naturally exploit both intrinsic and penalty SL criteria that have been (or will be) designed under the graph embedding framework. In addition, we extend the proposed GEELM algorithm in order to be able to exploit SL criteria in arbitrary (even infinite) dimensional ELM spaces. We evaluate the proposed approach on eight standard classification problems and nine publicly available datasets designed for three problems related to human behavior analysis, i.e., the recognition of human face, facial expression, and activity. Experimental results denote the effectiveness of the proposed approach, since it outperforms other ELM-based classification schemes in all the cases.
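For context, the sketch below shows the plain ELM training step (random hidden layer plus a ridge-regularized least-squares solve for the output weights) that GEELM extends; the graph-embedding (subspace learning) regularizer itself is not reproduced here.

```python
# Plain ELM training (random hidden layer + ridge-regularized output weights),
# i.e. the baseline that GEELM extends; the graph-embedding regularizer of the
# paper is not reproduced here.
import numpy as np

class BasicELM:
    def __init__(self, n_hidden=200, reg=1e-2, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)     # random feature map

    def fit(self, X, y):
        n_features = X.shape[1]
        n_classes = int(y.max()) + 1
        self.W = self.rng.standard_normal((n_features, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        T = np.eye(n_classes)[y]                # one-hot targets
        # Output weights: (H^T H + reg*I)^-1 H^T T
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ T)
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# Toy usage on two Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print((BasicELM().fit(X, y).predict(X) == y).mean())
```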
Research output: Contribution to journal › Article › Scientific › peer-review
Research output: Contribution to journal › Editorial › Scientific
Research output: Contribution to journal › Editorial › Scientific
Research output: Contribution to journal › Article › Scientific
Robots are widely used in industrial manufacturing processes and play an important role in the enhancement of industrial organizations' productivity. One of the major issues that engineers are facing is that current programming methods are too time-consuming and lack intuitiveness for human users. However, the latest advances in the field of sensors let manufacturers develop and produce devices that allow humans to interact with machines in a more intuitive way, reducing the need for additional complex software components and, hence, the time required to establish the aforementioned human-machine interactions. This research work presents an approach for gesture-based on-line programming of industrial robot manipulators. This is achieved by utilizing a combination of devices with a set of integrated, cost-effective visual and bending sensors, in order to precisely track the user's hand position and gestures at system run-time. This continuous tracking allows the robot manipulator to mimic the operator's hand motion. In addition, desired paths performed by a human with expertise in task execution are translated into robot targets, composing a new robot path, and are stored for later use. Such a path can be modified to fit different robot manufacturers' programming languages. Further steps of the presented approach will include the possibility of path optimization by the industrial manipulator itself.
jufoid=72024
INT=atme,"Sylari, Antonios"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Context. In recent years, smells, also referred to as bad smells, have gained popularity among developers. However, it is still not clear how harmful they are perceived to be from the developers' point of view. Many developers talk about them, but only few know what they really are, and even fewer really take care of them in their source code. Objective. The goal of this work is to understand the perceived criticality of code smells both in theory, when reading their description, and in practice. Method. We executed an empirical study as a differentiated external replication of two previous studies. The studies were conducted as surveys involving only highly experienced developers (63 in the first study and 41 in the second one). First, the perceived criticality was analyzed by proposing the description of the smells, then different pieces of code infected by the smells were proposed, and finally their ability to identify the smells in the analyzed code was tested. Results. According to our knowledge, this is the largest study so far investigating the perception of code smells with professional software developers. The results show that developers are very concerned about code smells in theory, nearly always considering them as harmful or very harmful (17 out of 23 smells). However, when they were asked to analyze an infected piece of code, only few infected classes were considered harmful and even fewer were considered harmful because of the smell. Conclusions. The results confirm our initial hypotheses that code smells are perceived as more critical in theory but not as critical in practice.
Research output: Contribution to journal › Article › Scientific › peer-review
Over the past 20 years, open source has become a widely adopted approach to developing software. Open code repositories provide software that powers cars, phones, and other products commonly considered proprietary. In parallel, proprietary development has evolved from rigid, centralized waterfall approaches to agile, iterative development. In this paper, we share our experiences regarding this co-evolution of open and closed source from the viewpoints of tools, practices, and the organization of development work, concluding that today's bazaars and cathedrals have far more characteristics in common than ones that separate them.
EXT="Ahoniemi, Tuukka"
EXT="Lenarduzzi, Valentina"
EXT="Mikkonen, Tommi"
INT=comp,"Jaaksi, Ari"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Context: Since its inception around 2010, gamification has become one of the top technology and software trends. However, gamification has also been regarded as one of the most challenging areas of software engineering. Beyond traditional software design requirements, designing gamification requires command of disciplines such as (motivational/behavioral) psychology, game design, and narratology, making the development of gamified software a challenge for traditional software developers. Gamified software inhabits a finely tuned niche of software engineering that seeks both high functionality and engagement; beyond technical flawlessness, gamification has to motivate and affect users. Consequently, it has also been projected that most gamified software is doomed to fail. Objective: This paper seeks to advance the understanding of gamification design and to provide a comprehensive method for developing gamified software. Method: We approach the research problem via a design science research approach: first, by synthesizing the current body of literature on gamification design methods and by interviewing 25 gamification experts, producing a comprehensive list of design principles for developing gamified software; second, and more importantly, by developing a detailed method for engineering gamified software based on the gathered knowledge and design principles; and finally, by evaluating the artifacts via interviews with ten gamification experts and the implementation of the engineering method in a gamification project. Results: As the results of the study, we present the method and key design principles for engineering gamified software. Based on the empirical and expert evaluation, the developed method was deemed comprehensive, implementable, complete, and useful. We deliver a comprehensive overview of gamification guidelines and shed novel insights into the nature of gamification development and design discourse. Conclusion: This paper takes the first steps toward a comprehensive method for gamified software engineering.
Research output: Contribution to journal › Article › Scientific › peer-review
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Wireless standards are evolving rapidly due to the exponential growth in the number of portable devices along with applications with high data rate requirements. Adaptable software-based signal processing implementations for these devices can make the deployment of constantly evolving standards faster and less expensive. The flagship technology of the IEEE WLAN family, IEEE 802.11ac, aims at achieving very high throughput in local area connectivity scenarios. This article presents a software-based implementation of the Multiple-Input Multiple-Output (MIMO) transmitter and receiver baseband processing conforming to the IEEE 802.11ac standard, which can achieve transmission bit rates beyond 1 Gbps. This work focuses on the physical-layer frequency-domain processing. Various configurations, including 2×2 and 4×4 MIMO, are considered for the implementation. To utilize the available data- and instruction-level parallelism, a DSP core with vector extensions is selected as the implementation platform. The feasibility of the presented software-based solution is then assessed by studying the number of clock cycles and the power consumption of the different scenarios implemented on this core. Such Software Defined Radio based approaches can potentially offer more flexibility, high energy efficiency, reduced design effort and thus shorter time-to-market cycles in comparison with conventional fixed-function hardware methods.
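As a rough illustration of the data-level parallelism exploited in frequency-domain MIMO baseband processing, the sketch below performs batched per-subcarrier zero-forcing detection with NumPy standing in for the DSP core's vector extensions; the dimensions, the noiseless 4×4 toy setup, and the detection scheme are assumptions, not the implementation described above.

```python
import numpy as np

# Illustrative per-subcarrier zero-forcing detection for a 4x4 MIMO OFDM system.
# A vector DSP (or SIMD unit) would process many subcarriers per instruction.
n_sc, n_rx, n_tx = 242, 4, 4                     # e.g. the used tones of an 80 MHz channel
rng = np.random.default_rng(1)

H = (rng.standard_normal((n_sc, n_rx, n_tx)) +   # frequency-domain channel estimates
     1j * rng.standard_normal((n_sc, n_rx, n_tx)))
x = np.sign(rng.standard_normal((n_sc, n_tx, 1)))  # transmitted BPSK symbols (toy, noiseless)
y = H @ x                                          # received symbols, one matmul per subcarrier

H_pinv = np.linalg.pinv(H)                         # batched pseudo-inverse over all subcarriers
x_hat = H_pinv @ y                                 # zero-forcing estimates
print(np.allclose(np.sign(x_hat.real), x))         # True in this noiseless toy example
```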
ORG=elt,0.5
ORG=tie,0.5
Research output: Contribution to journal › Article › Scientific › peer-review
The target of this paper is to analyze the impact of variations in the antenna radiation pattern on the performance of Single Path Multiple Access (SPMA) in an urban/dense-urban environment. For this study, an extended 3GPP antenna model and 3D building data from an urban area of the city of Helsinki are used. The simulations are performed at 28 GHz using “sAGA”, a MATLAB-based 3D ray tracing tool. The variables considered for the series of simulations are the Front to Back Ratio (FBR), Side Lobe Level (SLL), and Half Power Beamwidth (HPBW) of the antenna in the horizontal and vertical planes. Network performance is compared in terms of metrics such as signal strength, SINR, and capacity. This paper also presents a spectral efficiency and power efficiency analysis. The performance of SPMA was found to be sensitive to changes in the antenna radiation pattern, and the simulation results show a significant impact of the radiation pattern on the capacity gain offered by SPMA. Interestingly, SPMA was found to be a fairly power-efficient solution with respect to the traditional macro cellular network approach. However, the level of power efficiency heavily depends on the antenna beamwidth and other beam parameters.
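The following sketch shows the kind of parameterized pattern such a study varies: a simplified 3GPP-style antenna model whose horizontal and vertical cuts are shaped by the HPBW and floored by FBR- and SLL-like limits. The exact extended model, parameter names and default values here are illustrative assumptions, not those used in the paper.

```python
import numpy as np

def antenna_gain_db(az_deg, el_deg, hpbw_h=65.0, hpbw_v=6.0,
                    fbr_db=30.0, sll_db=20.0, gain_max_db=18.0):
    """Simplified 3GPP-style pattern; parameter names and values are illustrative only.

    Horizontal attenuation is floored by a front-to-back-ratio (FBR) limit and vertical
    attenuation by a side-lobe-level (SLL) limit; the cuts are then combined and capped.
    """
    a_h = -np.minimum(12.0 * (az_deg / hpbw_h) ** 2, fbr_db)   # horizontal cut
    a_v = -np.minimum(12.0 * (el_deg / hpbw_v) ** 2, sll_db)   # vertical cut
    a_total = -np.minimum(-(a_h + a_v), fbr_db)                # combined pattern
    return gain_max_db + a_total

# Example: gain 30 degrees off boresight in azimuth, 3 degrees in elevation (~12.4 dBi here).
print(antenna_gain_db(30.0, 3.0))
```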
Research output: Contribution to journal › Article › Scientific › peer-review
Efficient sample rate conversion is of widespread importance in modern communication and signal processing systems. Although many efficient kinds of polyphase filterbank structures exist for this purpose, they are mainly geared toward serial, custom, dedicated hardware implementation for a single task. There is, therefore, a need for more flexible sample rate conversion systems that are resource-efficient, and provide high performance. To address these challenges, we present in this paper an all-software-based, fully parallel, multirate resampling method based on graphics processing units (GPUs). The proposed approach is well-suited for wireless communication systems that have simultaneous requirements on high throughput and low latency. Utilizing the multidimensional architecture of GPUs, our design allows efficient parallel processing across multiple channels and frequency bands at baseband. The resulting architecture provides flexible sample rate conversion that is designed to address modern communication requirements, including real-time processing of multiple carriers simultaneously.
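As a CPU reference for the operation being parallelized, the sketch below performs polyphase rational resampling of several independent channels; in a GPU realization such as the one described above, the channels and polyphase branches map naturally onto parallel threads. The rates, channel counts and the use of SciPy are illustrative assumptions rather than the paper's design.

```python
import numpy as np
from scipy.signal import resample_poly

# Reference polyphase rational resampling (CPU); a GPU design would assign the
# independent channels and polyphase branches to parallel threads instead.
up, down = 2, 3                              # hypothetical conversion by 2/3 (e.g. 30.72 -> 20.48 Msps)
n_channels, n_samples = 8, 1 << 16

rng = np.random.default_rng(0)
x = rng.standard_normal((n_channels, n_samples))   # multichannel baseband samples (toy, real-valued)

# resample_poly filters each channel with a polyphase FIR; axis=-1 keeps channels
# independent, which is exactly the per-channel parallelism a GPU grid can exploit.
y = resample_poly(x, up, down, axis=-1)
print(y.shape)   # roughly n_samples * up / down samples per channel
```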
Research output: Contribution to journal › Article › Scientific › peer-review
Context: Software companies seek to benefit from agile development approaches in order to meet evolving market needs without losing their innovative edge. Agile practices emphasize frequent releases with the help of an automated toolchain from code to delivery. Objective: We investigate which tools are used in software delivery, the reasons for omitting certain parts of the toolchain, and the implications toolchains have for how rapidly software gets delivered to customers. Method: We present a multiple-case study of the toolchains currently in use in Finnish software-intensive organizations interested in improving their delivery frequency. We conducted qualitative semi-structured interviews in 18 case organizations from various software domains. The interviewees were key representatives of their organizations with respect to delivery activities. Results: Commodity tools, such as version control and continuous integration, were used in almost every organization. Modestly used tools, such as UI testing and performance testing, were more distinctly missing from some organizations. Uncommon tools, such as artifact repositories and acceptance testing, were used only in a minority of the organizations. Tool usage is affected by the state of current workflows, manual work, and the relevance of tools. Organizations whose toolchains were more automated and contained fewer manual steps were able to deploy software more rapidly. Conclusions: The need for tool support varies across development steps, as there are domain-specific differences in the goals of the case organizations. Still, a well-founded toolchain supports the speedy delivery of new software.
Research output: Contribution to journal › Article › Scientific › peer-review
The evolution of modern radar is heading toward a networked, multifunctional, adaptive, and cognitive system. The network of software-controllable fast-adapting radars follows a highly complex control and operation logic. It is not straightforward to assess its instantaneous capability to detect, track, and recognize targets. To be able to predict or optimize the system performance, one has to understand its behavior not only on a general level, but also in various operating conditions and considering the target behavior and properties accurately. In this paper, we propose the fusion of radar and tracker recordings with an extensive database of cooperative aircraft navigation recordings and radar cross section data to assess and learn the performance measures for the air surveillance. The main contribution of this paper is the incorporation of the aircraft kinematics, orientation, and radar cross section into an automated measurement-based analysis. We consider the employment of the measurement-based metrics and machine learning in the performance prediction. Simulations and experiments with real-life data demonstrate the feasibility and potential of the proposed concept.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Due to the networked nature of modern industrial business, repeated information exchange activities are necessary. Unfortunately, information exchange is both laborious and expensive with the current communication media, which causes errors and delays. To increase the efficiency of communication, this study introduces an architecture to exchange information in a digitally processable manner in industrial ecosystems. The architecture builds upon commonly agreed business practices and data formats, and an open consortium and information mediators enable it. Following the architecture, a functional prototype has been implemented for a real industrial scenario. This study has its focus on the technical information of equipment, but the architecture concept can also be applied in financing and logistics. Therefore, the concept has potential to completely reform industrial communication.
Research output: Contribution to journal › Article › Scientific › peer-review
Task-based information access is a significant context for studying information interaction and for developing information retrieval (IR) systems. Molecular medicine (MM) is an information-intensive and rapidly growing task domain, which aims at providing new approaches to the diagnosis, prevention and treatment of various diseases. The development of bioinformatics databases and tools has led to an extremely distributed information environment. There are numerous generic and domain-specific tools and databases available for online information access. This renders MM a fruitful context for research in task-based IR. The present paper examines empirically task-based information access in MM and analyzes task processes as contexts of information access and interaction, the integrated use of resources in information access, and the limitations of (simple server-side) log analysis in understanding information access, retrieval sessions in particular. We shed light on the complexity of between-systems interaction. The findings suggest that systems should not be developed in isolation, as there is considerable interaction between them in real-world use. We also classify system-level strategies of information access integration that can be used to reduce the amount of manual system integration by task performers.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Location-based and geo-context-aware services form a new, fast-growing domain of commercially successful ICT solutions. These services play a key role in IoT scenarios and in the development of smart spaces and proactive solutions. One of the most attractive application areas is e-Tourism. More people can afford travelling, and over the last few decades we have seen continuous growth in tourist activity. At the same time, there has been a huge increase in demand for both the quantity and quality of tourist services. Many experts foresee that this growth can no longer be met by applying traditional approaches. Similarly to the transformation of ticket and hotel booking, it is expected that we will soon witness a major transformation of the whole industry towards an e-Tourism-driven market, where the roles of traditional service providers, e.g., tourist agents and guides, will disappear or change significantly. The Internet of Things (IoT) is an integral part of the Future Internet ecosystem that has a major impact on the development of e-Tourism services. IoT provides an infrastructure to uniquely identify and link physical objects with virtual representations. As a result, any physical object can have a virtual reflection in the service space. This gives an opportunity to replace actions on physical objects with operations on their virtual reflections, which is faster, cheaper and more comfortable for the user. In this paper we summarize our research in the field, share ideas for innovative e-Tourism services and present the Geo2Tag LBS platform, which allows easy and fast development of such services.
EXT="Balandin, Sergey"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Organizations often adopt enterprise architecture (EA) when planning how best to develop their information technology (IT) or businesses, for strategic management, or generally for managing change initiatives. This variety of uses affects many stakeholders within and between organizations. Because stakeholders have dissimilar backgrounds, positions, assumptions, and activities, they respond differently to changes and the potential problems that emerge from those changes. This situation creates contradictions and conflicts between stakeholders that may further influence project activities and ultimately determine how EA is adopted. In this paper, we examine how institutional pressures influence EA adoption. Based on a qualitative study of two cases, we show how regulative, normative, and cognitive pressures influence stakeholders' activities and behaviors during the process of EA adoption. Our contribution thus lies in identifying the roles of institutional pressures in different phases of the EA adoption process and how they change over time. The results provide insights into EA adoption and the process of institutionalization, which help to explain emergent challenges in EA adoption.
EXT="Dang, Duong"
Research output: Contribution to journal › Article › Scientific › peer-review
Dataflow modeling offers a myriad of tools for designing and optimizing signal processing systems. A designer is able to take advantage of dataflow properties to effectively tune the system in connection with functionality and different performance metrics. However, a disparity in the specification of dataflow properties and the final implementation can lead to incorrect behavior that is difficult to detect. This motivates the problem of ensuring consistency between dataflow properties that are declared or otherwise assumed as part of dataflow-based application models, and the dataflow behavior that is exhibited by implementations that are derived from the models. In this paper, we address this problem by introducing a novel dataflow validation framework (DVF) that is able to identify disparities between an application’s formal dataflow representation and its implementation. DVF works by instrumenting the implementation of an application and monitoring the instrumentation data as the application executes. This monitoring process is streamlined so that DVF achieves validation without major overhead. We demonstrate the utility of our DVF through design and implementation case studies involving an automatic speech recognition application, a JPEG encoder, and an acoustic tracking application.
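A toy monitor in the spirit of the validation idea described above might look as follows: actors declare token rates, and lightweight instrumentation checks observed production and consumption against the declaration during execution. The class and method names are hypothetical and do not reflect the actual DVF interface.

```python
# Toy illustration of dataflow-property validation (not the actual DVF interface):
# each edge carries declared token rates, and an instrumented run checks that the
# observed token counts match the declaration after every firing.

class EdgeMonitor:
    def __init__(self, name, produce_rate, consume_rate):
        self.name, self.produce_rate, self.consume_rate = name, produce_rate, consume_rate
        self.tokens = 0   # tokens currently buffered on the edge

    def on_produce(self, n):
        if n != self.produce_rate:
            raise AssertionError(f"{self.name}: produced {n}, declared {self.produce_rate}")
        self.tokens += n

    def on_consume(self, n):
        if n != self.consume_rate or n > self.tokens:
            raise AssertionError(f"{self.name}: consumed {n}, declared {self.consume_rate}, "
                                 f"buffered {self.tokens}")
        self.tokens -= n

# Declared SDF edge: the producer writes 2 tokens per firing, the consumer reads 3 per firing.
edge = EdgeMonitor("src->fft", produce_rate=2, consume_rate=3)
for _ in range(3):
    edge.on_produce(2)       # three producer firings -> 6 tokens buffered
edge.on_consume(3)           # consumer firings consistent with the declared rate
edge.on_consume(3)
```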
Research output: Contribution to journal › Article › Scientific › peer-review
As the variety of off-the-shelf processors expands, traditional implementation methods for digital signal processing and communication systems are no longer adequate to achieve design objectives in a timely manner. Designers need to easily track changes in computing platforms and apply them efficiently while reusing legacy code and optimized libraries that target specialized features in single processing units. In this context, we propose an integration workflow to schedule and implement Software Defined Radio (SDR) protocols that are developed using the GNU Radio environment on heterogeneous multiprocessor platforms. We show how to utilize Single Instruction Multiple Data (SIMD) units provided in Graphics Processing Units (GPUs) along with vector accelerators implemented in General Purpose Processors (GPPs). We augment a popular SDR framework (i.e., GNU Radio) with a library that seamlessly allows offloading of algorithm kernels mapped to the GPU without changing the original protocol description. Experimental results show how our approach can be used to efficiently explore design spaces for SDR system implementation, and examine the overhead of the integrated backend (software component) library.
Research output: Contribution to journal › Article › Scientific › peer-review
The agricultural sector in Finland has been lagging behind in digital development. Development has long been based on increasing production by investing in larger machines. Over the past decade, change has begun to take place in the direction of digitalization. One of the challenges is that different manufacturers are trying to get farmers' data into their own closed cloud services. In the worst case, farmers may lose an overall view of their farms and opportunities for deeper data analysis because their data is located in different services. This paper describes the goals and previously studied challenges of the 'MIKÄ DATA' project. The project will build an intelligent data service for farmers, which is based on the Oskari platform. In the 'Peltodata' service, farmers can see their own field data and many other data sources layer by layer. The project focuses on the study of machine learning techniques to develop harvest yield prediction and to identify correlations between the data sources. The 'Peltodata' service will be ready at the end of 2019.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In the Internet of Things (IoT), machines and devices are equipped with sensors and Internet connections that make it possible to collect data and store it in cloud services. In vocational education and training, the stored data can be used to improve decision-making processes. With the help of this data, a teacher can also get a more accurate picture of the current state of the education environment than before. IoT should be integrated into vocational education and training because it will help to achieve important educational objectives. IoT is able to promote students' preparation for working life, the safety of the education environment, self-directed learning, and effective learning. It can also improve the efficient use of educational resources. In addition, IoT-based solutions should be introduced so that students have a vision of new types of IoT skill requirements before they enter the labour market. In this paper, we present IoT-related aspects that make it possible to meet the above-mentioned educational objectives. By implementing a pilot project, we aim to concretise IoT's possibilities in the education sector.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Research output: Chapter in Book/Report/Conference proceeding › Chapter › Scientific › peer-review
Interpretation of ambiguous images perceived visually and relying on supplementary information coordinated with pictorial cues was selected to evaluate the usefulness of the StickGrip device. The ambiguous visual models were achromatic images composed from only two overlapping ellipses with various brightness gradients and relative position of the components. Inspection of images by the tablet pen enhanced with the pencil-like visual pointer decreased discrepancy between their actual interpretation and expected decision by only about 2.6 for concave and by about 1.3 for convex models. Interpretation of the convex images ambiguous with their inverted concave counterparts inspected by the StickGrip device achieved three times less discrepancy between decisions made and expected. Interpretation of the concave images versus inverted convex counterparts was five times more accurate with the use of the StickGrip device. We conclude that the kinesthetic and proprioceptive cues delivered by the StickGrip device had a positive influence on the decision-making under ambiguous conditions.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The boundary between hedonic and utilitarian information systems has become increasingly blurred during recent years due to the rise of developments such as gamification. Therefore, users may perceive the purpose of the same system differently, ranging from pure utility to pure play. However, in the literature that addresses why people adopt and use information systems, the relationship between users' conception of the purpose of the system and their experience and use of it has not yet been investigated. Therefore, in this study we investigate the interaction effects between users' utility-fun conceptions of the system and the perceived enjoyment and usefulness from its use on their post-adoption intentions (continued use, discontinued use, and contribution). We employ survey data collected among users (N = 562) of a gamified crowdsourcing application that represents a system affording both utility and leisure use potential. The results show that the more fun-oriented users conceive the system to be, the more enjoyment affects continued and discontinued use intentions, and the less ease of use affects the continued use intention. Therefore, users' conceptions of the system prove to be an influential aspect of system use and should particularly be considered when designing modern multi-purposed systems such as gamified information systems.
Research output: Contribution to journal › Article › Scientific › peer-review
We investigate how IT capability leads to more interaction business practices, both through inter-organizational systems (IOS) and social media (SM), and how these further lead to marketing effectiveness and firm success. After analyzing data collected from manufacturers (N=504), we find that (1) IT capability has a significant positive effect on both IOS-enabled and SM-enabled interaction practices; (2) IOS-enabled interaction practice has significant positive effects on both marketing performance and financial performance, while SM-enabled interaction practice only has a significant positive effect on the market performance; (3) both IOS-enabled interaction practice and SM-enabled interaction practice partly mediate the positive influence of IT capability on marketing performance and financial performance; (4) marketing performance partly mediates the positive impact of IOS-enabled interaction practice and fully mediates the positive impact of SM-enabled interaction practice on financial performance.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Information technology (IT) engagement is defined as a need to spend more time using IT. Practice-based examples show that IT engagement can have adverse effects in organizations. Although users can potentially get more work done through IT engagement, observations show that the users might jeopardize their well-being and hamper their work performance. We aimed to investigate this complexity in the research on IT engagement by examining its potential antecedents and outcomes in organizations. Considering the potentially mixed outcomes, we developed a model to examine the effects of IT engagement on personal productivity and strain. We also aimed to explain the antecedents of IT engagement by drawing on the collective expectations for IT use. In particular, we examined the extent to which normative pressure on IT use drives users’ information load and IT engagement. Finally, we sought to understand whether users’ attempts to avert dependency on IT use reduced their IT engagement. Several hypotheses were developed and tested with survey data of 1091 organizational IT users. The findings help explain the role of normative pressure as a key driver of IT engagement and validate the positive and negative outcomes of IT engagement in organizations.
EXT="Makkonen, Markus"
Research output: Contribution to journal › Article › Scientific › peer-review
This paper introduces a novel multicore scheduling method that leverages a parameterized dataflow Model of Computation (MoC). This method, which we have named Just-In-Time Multicore Scheduling (JIT-MS), aims to efficiently schedule Parameterized and Interfaced Synchronous DataFlow (PiSDF) graphs on multicore architectures. The method exploits features of PiSDF to find locally static regions that exhibit predictable communications. This paper uses a multicore signal processing benchmark to demonstrate that the JIT-MS scheduler can exploit more parallelism than a conventional multicore task scheduler based on task creation and dispatch. Experimental results of the JIT-MS on an 8-core Texas Instruments Keystone Digital Signal Processor (DSP) are compared with those obtained from the OpenMP implementation provided by Texas Instruments. Results show latency improvements of up to 26% for multicore signal processing systems.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In order to stay competitive in the global market, industrial manufacturers are implementing various methods to improve their production processes. This requires measuring important metrics and making use of performance measurement systems. Based on the data generated in manufacturing operations, various indicators can be defined and measured. These indicators serve as the basis for decision-making, control and health monitoring of a manufacturing process. This paper presents an approach that makes use of key performance indicators (KPIs). The KPIs used are defined in the standard ISO 22400, 'Automation systems and integration - Key performance indicators (KPIs)', which is usually applied for the management of manufacturing operations. The approach uses the database of a production line to define KPIs and generates a tool for visualizing them. The KPIs are defined using the data model of Key Performance Indicator Markup Language (KPI-ML), which is an XML utilization of the ISO 22400 standard. The recommended approach paves the way for constructing generic KPI-ML visualization tools that serve various industries in assessing their performance with the same tool.
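As a simplified illustration of turning production-line records into a standard KPI and an XML representation, the sketch below computes an availability-style KPI (actual production time over planned busy time) and serializes it. The record fields and element names are illustrative and do not reproduce the normative KPI-ML schema.

```python
import xml.etree.ElementTree as ET

# Simplified example: compute an ISO 22400-style 'availability' KPI (actual production
# time over planned busy time) from production-line records and emit it as plain XML.
# The field and element names below are illustrative, not the normative KPI-ML schema.
records = [
    {"order": "WO-001", "planned_busy_time_h": 8.0, "actual_production_time_h": 6.8},
    {"order": "WO-002", "planned_busy_time_h": 8.0, "actual_production_time_h": 7.4},
]

apt = sum(r["actual_production_time_h"] for r in records)
pbt = sum(r["planned_busy_time_h"] for r in records)
availability = apt / pbt

root = ET.Element("KPIValue", name="Availability", unit="ratio")
ET.SubElement(root, "Value").text = f"{availability:.3f}"
ET.SubElement(root, "Timerange").text = "shift-1"
print(ET.tostring(root, encoding="unicode"))   # prints the KPI wrapped in a small XML element
```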
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The Advisory Board for Junior Scientific Staff (WiN) of the German Informatics Society (GI) calls for and recommends measures to improve the situation of doctoral candidates and post-doctoral researchers in computer science and other technical sciences. Doctoral and postdoctoral scientists in the field of computer science are increasingly affected by the complex structural and financial problems of academia. The bottleneck on the way to a professorship leads to precarious employment conditions in academic careers. The difficulty in combining family and academic careers creates an additional disadvantage, especially for female scientists. A lack of quality assurance and reliable and transparent decision-making processes make it difficult to identify and deal with conflicts during the doctoral and postdoctoral period. Misguided incentives in the academic system impair the direct, intensive and regular supervision of early career researchers. Timely coping with the challenges outlined in this paper is of central importance for the future survival of university research institutions and for a successful continuation of the principle of best selection. In addition to measures already planned and implemented to empower early career researchers, this paper outlines concrete measures to improve supervision during the doctoral phase, and to structure and create further career paths in academia.
Research output: Contribution to journal › Article › Scientific › peer-review
This paper offers blueprints for and reports upon three years' experience from teaching the university course “Lean Software Startup” for information technology and economics students. The course aims to give a learning experience on ideation/innovation and subsequent product and business development using the lean startup method. The course educates the students in software business, entrepreneurship, teamwork and the lean startup method. The paper describes the pedagogical design and practical implementation of the course in sufficient detail to serve as an example of how entrepreneurship and business issues can be integrated into a software engineering curriculum. The course is evaluated through learning diaries and a questionnaire, as well as the primary teacher's learnings across the three course instances. We also examine the course in the context of CDIO and show its connection points to this broader engineering education framework. Finally, we discuss the challenges and opportunities of engaging students with different backgrounds in a hands-on entrepreneurial software business course.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper presents a lossless compression method that performs the compression of the vessels and of the remaining eye fundus in retinal images separately. Retinal images contain valuable information for several distinct medical diagnosis tasks, where the features of interest can be, e.g., the cotton wool spots in the eye fundus or the volume of the vessels over concentric circular regions. It is assumed that one of the existing segmentation methods has provided the segmentation of the vessels. The proposed compression method losslessly transmits the segmentation image and then transmits the eye fundus part, or the vessels image, or both, conditional on the vessel segmentation. The independent compression of the two color image segments is performed using a sparse predictive method. Experiments are provided over a database of retinal images containing manual and estimated segmentations. The codelength of encoding the overall image, including the segmentation and the image segments, proves to be smaller than the codelength for the entire image obtained by JPEG2000 and other publicly available compressors.
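The segmentation-conditioned coding idea can be illustrated with a toy split of an image into vessel and fundus streams; here a generic lossless coder (zlib) stands in for the paper's sparse predictive method, and the image and mask are synthetic.

```python
import numpy as np
import zlib

# Toy illustration of segmentation-conditioned lossless coding: the vessel mask is coded
# first, then the vessel pixels and the remaining fundus pixels are coded as separate
# streams. zlib is only a stand-in for the paper's sparse predictive coder.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)      # grayscale stand-in image
mask = rng.random((64, 64)) < 0.1                                # synthetic "vessel" segmentation

mask_code = zlib.compress(np.packbits(mask).tobytes())           # lossless mask stream
vessel_code = zlib.compress(image[mask].tobytes())               # vessel pixels only
fundus_code = zlib.compress(image[~mask].tobytes())              # remaining eye fundus pixels

total_bits = 8 * (len(mask_code) + len(vessel_code) + len(fundus_code))
print(total_bits)

# Decoding reverses the process: the mask tells the decoder where each decoded
# pixel stream belongs, so the original image is reconstructed exactly.
```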
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Wireless sensor networks (WSNs) are being deployed at an escalating rate for various application fields. The ever growing number of application areas requires a diverse set of algorithms with disparate processing needs. WSNs also need to adapt to prevailing energy conditions and processing requirements. The preceding reasons rule out the use of a single fixed design. Instead, a general purpose design that can rapidly be adapted to different conditions and requirements is desired. In lieu of the traditional inflexible wireless sensor node consisting of a separate micro-controller, radio transceiver, sensor array and energy storage, we propose a unified rapidly reconfigurable miniature sensor node, implemented with a transport triggered architecture processor on a low-power Flash FPGA. To our knowledge, this is the first study of its kind. The proposed approach does not solely concentrate on energy efficiency but a high emphasis is also put on the ease of development perspective. Power consumption and silicon area usage comparison based on solutions implemented using our novel rapid design approach for wireless sensor nodes are performed. The comparison is performed between 16-bit fixed point, 16-bit floating point and 32-bit floating point implementations. The implemented processors and algorithms are intended for rolling bearing condition monitoring, but can be fully extended for other applications as well.
Research output: Contribution to journal › Article › Scientific › peer-review
The target of this article is to analyze the impact of a transition from a cellular frequency band, i.e., 2.1 GHz, to a millimeter-wave (mmWave) frequency band, i.e., 28 GHz. A three-dimensional ray tracing tool, “sAGA”, was used to evaluate the performance of the macro cellular network in an urban/dense-urban area of the city of Helsinki. A detailed analysis of the user experience in terms of signal strength and signal quality for outdoor and indoor users is presented. Indoor users at different floors are studied separately in this paper. It is found that, in spite of assuming a high system gain at 28 GHz, the mean received signal power is reduced by almost 16.5 dB compared with transmission at 2.1 GHz. However, the SINR changes only marginally at the higher frequency. Even with 200 MHz system bandwidth at 28 GHz, no substantial change is witnessed in the signal quality for outdoor and upper-floor indoor users. However, users at lower floors show some signs of degradation in received signal quality with 200 MHz bandwidth. Moreover, it is also emphasized that mobile operators should take advantage of the unutilized spectrum in the mmWave bands. In short, this paper highlights the potential and the gains of mmWave communications.
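A back-of-the-envelope free-space calculation helps put the reported 16.5 dB reduction in context: moving from 2.1 GHz to 28 GHz adds 20*log10(28/2.1), roughly 22.5 dB, of free-space path loss, part of which the assumed higher system gain at 28 GHz compensates. The link distance below is an arbitrary illustrative value.

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

d = 200.0                                      # hypothetical link distance in metres
loss_2p1 = fspl_db(d, 2.1e9)
loss_28 = fspl_db(d, 28e9)
print(round(loss_2p1, 1), round(loss_28, 1))   # ~84.9 dB vs ~107.4 dB
print(round(loss_28 - loss_2p1, 1))            # ~22.5 dB, i.e. 20*log10(28/2.1)
```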
Research output: Contribution to journal › Article › Scientific › peer-review
In recent years, parameterized dataflow has evolved as a useful framework for modeling synchronous and cyclo-static graphs in which arbitrary parameters can be changed dynamically. Parameterized dataflow has proven to have significant expressive power for managing dynamics of DSP applications in important ways. However, efficient hardware synthesis techniques for parameterized dataflow representations are lacking. This paper addresses this void; specifically, the paper investigates efficient field programmable gate array (FPGA)-based implementation of parameterized cyclo-static dataflow (PCSDF) graphs. We develop a scheduling technique for throughput-constrained minimization of dataflow buffering requirements when mapping PCSDF representations of DSP applications onto FPGAs. The proposed scheduling technique is integrated with an existing formal schedule model, called the generalized schedule tree, to reduce schedule cost. To demonstrate our new, hardware-oriented PCSDF scheduling technique, we have designed a real-time base station emulator prototype based on a subset of long-term evolution (LTE), which is a key cellular standard.
Research output: Contribution to journal › Article › Scientific › peer-review
This is a data descriptor paper for a set of battery output measurements collected during discharge with the display turned on, caused by the execution of modern mobile blockchain projects on Android devices. The measurements were executed for the Proof-of-Work (PoW) and Proof-of-Activity (PoA) consensus algorithms. In this descriptor, we give examples of Samsung Galaxy S9 operation, while a broader range of measurements is available in the dataset. The examples provide data about battery output current, output voltage, temperature, and status. We also show measurements obtained utilizing short-range (IEEE 802.11n) and cellular (LTE) networks. This paper describes the proposed dataset and the method employed to gather the data. To provide a further understanding of the dataset's nature, an analysis of the collected data is also briefly presented. This dataset may be of interest to both researchers from the information security and human–computer interaction fields and industrial distributed ledger/blockchain developers.
INT=elen,"Bardinova, Yulia"
EXT="Zhidanov, Konstantin"
EXT="Komarov, Mikhail"
Research output: Contribution to journal › Article › Scientific › peer-review
In the field of cryptography engineering, implementation-based attacks are a major concern due to their proven feasibility. Fault injection is one attack vector, nowadays a major research line. In this paper, we present how a memory tampering-based fault attack can be used to severely limit the output space of binary GCD based modular inversion algorithm implementations. We frame the proposed attack in the context of ECDSA showing how this approach allows recovering the private key from only one signature, independent of the key size. We analyze two memory tampering proposals, illustrating how this technique can be adapted to different implementations. Besides its application to ECDSA, it can be extended to other cryptographic schemes and countermeasures where binary GCD based modular inversion algorithms are employed. In addition, we describe how memory tampering-based fault attacks can be used to mount a previously proposed fault attack on scenarios that were initially discarded, showing the importance of including memory tampering attacks in the frameworks for analyzing fault attacks and their countermeasures.
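For reference, the sketch below shows a textbook binary extended-GCD modular inversion of the kind such attacks target. It is a generic illustration of the algorithm class, not the attacked implementation; the secret-dependent branches and intermediate variables are exactly the state a memory-tampering fault could constrain.

```python
def binary_mod_inverse(a, p):
    """Textbook binary extended-GCD inversion of a modulo an odd prime p.

    Shown only to illustrate the algorithm class discussed above: the data-dependent
    branches and intermediates (u, v, x1, x2) are what fault injection can constrain.
    """
    u, v = a % p, p
    x1, x2 = 1, 0
    while u != 1 and v != 1:
        while u % 2 == 0:
            u //= 2
            x1 = x1 // 2 if x1 % 2 == 0 else (x1 + p) // 2
        while v % 2 == 0:
            v //= 2
            x2 = x2 // 2 if x2 % 2 == 0 else (x2 + p) // 2
        if u >= v:
            u, x1 = u - v, x1 - x2
        else:
            v, x2 = v - u, x2 - x1
    inv = x1 if u == 1 else x2
    return inv % p

assert (3 * binary_mod_inverse(3, 7)) % 7 == 1   # 3 * 5 = 15 = 1 (mod 7)
```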
Research output: Contribution to journal › Article › Scientific › peer-review
Measuring the impact of transport projects in smart cities can be expensive and time-consuming. One challenge in measuring the effect of these projects is that impacts are poorly quantified or are not always immediately tangible. Due to the nature of transport projects, it is often difficult to show results in the short term because much of the effort is invested in changing attitudes and behaviour regarding the mobility choices of city inhabitants. This paper presents a methodology that was developed to evaluate and define city transport projects for increasing energy efficiency. The main objective of this methodology is to help city authorities improve the energy efficiency of the city by defining strategies and taking actions in the transportation domain. To define it, a review of current methodologies for measuring the impact of energy efficiency projects was performed. The defined energy efficiency methodology provides a standard structure for the evaluation process, making sure that each project is evaluated against its own goals and in as much detail as the level of investment requires. An implementation of the first step of this methodology in a smart city is included in order to evaluate the implementation phase of the defined process.
AUX=ase,"Mantilla R., M. Fernanda"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Startups operate with limited resources under time pressure. Thus, building minimal product versions to test and validate ideas has emerged as a way to avoid the wasteful creation of complicated products that may prove unsuccessful in the market. Often, the design of these early product versions needs to be done fast and with little advance information from end-users. In this paper we introduce the Minimum Viable User eXperience (MVUX), which aims at providing users a good enough user experience already in the early, minimal versions of the product. MVUX enables communication of the envisioned product value and the gathering of meaningful feedback, and it can promote positive word of mouth. To understand what MVUX consists of, we conducted an interview study with 17 entrepreneurs from 12 small startups. The main elements of MVUX recognized are Attractiveness, Approachability, Professionalism, and Selling the Idea. We present the structured framework and the qualities contributing to each element.
jufoid=71106
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Today, the number of interconnected Internet of Things (IoT) devices is growing tremendously followed by an increase in the density of cellular base stations. This trend has an adverse effect on the power efficiency of communication, since each new infrastructure node requires a significant amount of energy. Numerous enablers are already in place to offload the scarce cellular spectrum, thus allowing utilization of more energy-efficient short-range radio technologies for user content dissemination, such as moving relay stations and network-assisted direct connectivity. In this work, we contribute a new mathematical framework aimed at analyzing the impact of network offloading on the probabilistic characteristics related to the quality of service and thus helping relieve the energy burden on infrastructure network deployments.
Research output: Contribution to journal › Article › Scientific › peer-review
There are two simultaneous transformative changes occurring in education: the use of mobile and tablet devices for accessing educational content, and the rise of MOOCs. Happening independently and in parallel are significant advances in interaction technologies through smartphones and tablets, and the rise in the use of social-media and social-network analytics in several domains. Given the extent of personal context that is available on the mobile device, how can the education experience be personalised, made social, and tailored to the cultural context of the learner? The goal of this proposal is twofold: (a) to understand the usage and student behaviour in this new environment (MOOCs and mobile devices), and (b) to design experiments and implement them to make these new tools more effective by tailoring them to the individual student's personal, social and cultural settings and preferences.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper presents a model-based design method and a corresponding new software tool, the HTGS Model-Based Engine (HMBE), for designing and implementing dataflow-based signal processing applications on multi-core architectures. HMBE provides complementary capabilities to HTGS (Hybrid Task Graph Scheduler), a recently-introduced software tool for implementing scalable workflows for high performance computing applications on compute nodes with high core counts and multiple GPUs. HMBE integrates model-based design approaches, founded on dataflow principles, with advanced design optimization techniques provided in HTGS. This integration contributes to (a) making the application of HTGS more systematic and less time consuming, (b) incorporating additional dataflow-based optimization capabilities with HTGS optimizations, and (c) automating significant parts of the HTGS-based design process using a principled approach. In this paper, we present HMBE with an emphasis on the model-based design approaches and the novel dynamic scheduling techniques that are developed as part of the tool. We demonstrate the utility of HMBE via two case studies: an image stitching application for large microscopy images and a background subtraction application for multispectral video streams.
Research output: Contribution to journal › Article › Scientific › peer-review
Cyber-attacks have grown in importance to become a matter of national security. A growing number of states and organisations around the world have been developing defensive and offensive capabilities for cyber warfare. Security criteria are important tools for defensive capabilities of critical communications and information systems (CIS). Various criteria have been developed for designing, implementing and auditing CIS. However, the development of criteria is inadequately supported by currently available guidance. The relevant guidance is mostly related to criteria selection. The abstraction level of the guidance is high. This may lead to inefficient criteria development work. In addition, the resulting criteria may not fully meet their goals. To ensure efficient criteria development, the guidance should be supported with concrete level implementation guidelines. This paper proposes a model for efficient development of security audit criteria. The model consists of criteria design goals and concrete implementation guidelines to achieve these goals. The model is based on the guidance given by ISACA and on the criteria development work by FICORA, the Finnish Communications Regulatory Authority. During the years 2008-2017, FICORA has actively participated in development and usage of three versions of Katakri, the Finnish national security audit criteria. The paper includes a case study that applies the model to existing security criteria. The case study covers a review of the criteria composed of the Finnish VAHTI-instructions. During the review, all supported design goals and implementation guidelines of the model were scrutinised. The results of the case study indicate that the model is useful for reviewing existing criteria. The rationale is twofold. First, several remarkable shortcomings were identified. Second, the identification process was time-efficient. The results also suggest that the model would be useful for criteria under development. Addressing the identified shortcomings during the development phase would have made the criteria more efficient, usable and understandable.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
As the capabilities of Unmanned Aerial Systems (UASs) evolve, their novel and demanding applications emerge, which require improved capacity and reduced latency. Millimeter-wave (mmWave) connections are particularly attractive for UASs due to their predominantly line-of-sight regime and better signal locality. In this context, understanding the interactions between the environment, the flight dynamics, and the beam tracking capabilities is a challenge that has not been resolved by today's simulation environments. In this work, we develop the means to model these crucial considerations as well as provide the initial insights into the performance of mmWave-based UAS communications made available with the use of our proposed platform.
jufoid=57486
INT=elen,"Godbole, Tanmay Ram"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The decreasing prices of monitoring equipment have vastly increased the opportunities to utilize local data and data processing for wider, global, web-based monitoring purposes. The amount of data flowing through different levels can be huge. The question is how to handle this opportunity in both a dynamic and a secure way. The paper presents a new concept for managing monitoring data through the Internet. The concept is based on the use of the Arrowhead Framework (AF) and the MIMOSA data model, together with selected edge and gateway devices and cloud computing opportunities. The concept enables the flexible and secure orchestration of run-time data sources and the utilization of computational services for various process and condition monitoring needs.
EXT="Barna, Laurentiu"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The advent of academic social networking sites (ASNS) has offered an unprecedented opportunity for scholars to obtain peer support online. However, little is known about the characteristics that make questions and answers popular among scholars on ASNS. Focusing on the statements embedded in questions and answers, this study strives to explore the precursors that motivate scholars to respond, such as reading, following, or recommending a question or an answer. We collected empirical data from ResearchGate and coded the data via the act4teams coding scheme. Our analysis revealed a threshold effect: when the length of a question description exceeds circa 150 words, scholars quickly lose interest and do not read the description. In addition, we found that questions that include positive action-oriented statements are more likely to entice subsequent reads from other scholars. Furthermore, scholars prefer to recommend answers with positive procedural statements or negative action-oriented statements.
Research output: Contribution to journal › Article › Scientific › peer-review
In this paper, we present a novel method aimed at multidimensional sequence classification. We propose a novel sequence representation based on its fuzzy distances from optimal representative signal instances, called statemes. We also propose a novel modified clustering discriminant analysis algorithm that minimizes the adopted criterion with respect to both the data projection matrix and the class representation, leading to the optimal discriminant sequence class representation in a low-dimensional space. Based on this representation, simple classification algorithms, such as the nearest subclass centroid, provide high classification accuracy. A three-step iterative optimization procedure for choosing the statemes, the optimal discriminant subspace and the optimal sequence class representation in the final decision space is proposed. The classification procedure is fast and accurate. The proposed method has been tested on a wide variety of multidimensional sequence classification problems, including handwritten character recognition, time series classification and human activity recognition, providing very satisfactory classification results.
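A toy version of a stateme-based representation could look as follows, using a fuzzy-c-means-style membership as the fuzzy distance. The paper's exact definition and the subsequent discriminant analysis step are not reproduced here; all parameters and shapes are illustrative.

```python
import numpy as np

def fuzzy_representation(sequence, statemes, m=2.0, eps=1e-12):
    """Toy stateme-based representation: each sequence sample gets FCM-style fuzzy
    memberships to the statemes, and the sequence is summarized by their mean.
    The paper's exact fuzzy-distance definition may differ; this is only a sketch.

    sequence: (T, d) samples, statemes: (K, d) representative instances -> (K,) vector.
    """
    d2 = ((sequence[:, None, :] - statemes[None, :, :]) ** 2).sum(-1) + eps  # (T, K) squared distances
    w = d2 ** (-1.0 / (m - 1.0))
    memberships = w / w.sum(axis=1, keepdims=True)                           # each row sums to 1
    return memberships.mean(axis=0)                                          # fixed-length sequence descriptor

rng = np.random.default_rng(0)
seq = rng.standard_normal((40, 3))        # a 3-D sequence of 40 samples
statemes = rng.standard_normal((5, 3))    # 5 hypothetical statemes
print(fuzzy_representation(seq, statemes).shape)   # (5,)
```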
Research output: Contribution to journal › Article › Scientific › peer-review
We construct multidimensional interpolating tensor product multiresolution analyses (MRAs) of the function spaces C_0(R^n, K), K = R or K = C, consisting of real or complex valued functions on R^n vanishing at infinity, and of the function spaces C_u(R^n, K) consisting of bounded and uniformly continuous functions on R^n. We also construct an interpolating dual MRA for both of these spaces. The theory of tensor products of Banach spaces is used. We generalize the Besov space norm equivalence from the one-dimensional case to our n-dimensional construction.
Research output: Contribution to journal › Article › Scientific › peer-review
Visible light communication (VLC) is a recently proposed paradigm of optical wireless communication in which visible electromagnetic radiation is used for data transmission. The visible part of the spectrum occupies the frequency range from 400 THz to 800 THz, which is 10,000 times wider than the radio frequency (RF) band. Its exceptional characteristics therefore render it a promising solution to support and complement traditional RF communication systems and to overcome the currently witnessed scarcity of radio spectrum resources. To this end, in the last few years there has been rapidly growing interest in multi-user processing techniques in VLC. Motivated by this, in this paper we present a comprehensive and up-to-date survey on the integration of multiple-input multiple-output systems, multi-carrier modulations and multiple access techniques in the context of VLC.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
While single-view human action recognition has attracted considerable research attention over the last three decades, multi-view action recognition is still a less explored field. This paper provides a comprehensive survey of multi-view human action recognition approaches. The approaches are reviewed following an application-based categorization: methods are categorized based on their ability to operate using a fixed or an arbitrary number of cameras. Finally, benchmark databases frequently used for the evaluation of multi-view approaches are briefly described.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Web crawlers are essential to many Web applications, such as Web search engines, Web archives, and Web directories, which maintain Web pages in their local repositories. In this paper, we study the problem of crawl scheduling that biases crawl ordering toward important pages. We propose a set of crawling algorithms for effective and efficient crawl ordering by prioritizing important pages with the well-known PageRank as the importance metric. In order to score URLs, the proposed algorithms utilize various features, including partial link structure, inter-host links, page titles, and topic relevance. We conduct a large-scale experiment using publicly available data sets to examine the effect of each feature on crawl ordering and evaluate the performance of many algorithms. The experimental results verify the efficacy of our schemes. In particular, compared with the representative RankMass crawler, the FPR-title-host algorithm reduces computational overhead by a factor as great as three in running time while improving effectiveness by 5% in cumulative PageRank.
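The crawl-ordering idea can be sketched as a best-first frontier driven by a pluggable importance score; the score function and link fetcher below are placeholders for the actual estimators, which combine partial link structure, inter-host links, page titles and topic relevance.

```python
import heapq

def crawl(seed_urls, score_url, fetch_links, budget=100):
    """Minimal best-first crawl ordering: URLs are popped by descending score.

    score_url and fetch_links are placeholders for an importance estimator
    (e.g. PageRank-style features) and the page fetcher/parser, respectively.
    """
    frontier = [(-score_url(u), u) for u in seed_urls]
    heapq.heapify(frontier)
    seen, order = set(seed_urls), []

    while frontier and len(order) < budget:
        _, url = heapq.heappop(frontier)
        order.append(url)                          # "download" in priority order
        for link in fetch_links(url):
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-score_url(link), link))
    return order

# Toy usage with a hypothetical link graph and a score that favours shorter URLs.
graph = {"a": ["a/1", "b"], "b": ["b/1", "b/2"], "a/1": [], "b/1": [], "b/2": []}
print(crawl(["a"], score_url=lambda u: 1.0 / len(u), fetch_links=lambda u: graph.get(u, [])))
```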
Research output: Contribution to journal › Article › Scientific › peer-review
An LTS operator can be constructed from a set of LTS operators up to an equivalence if and only if there is an LTS expression that only contains operators from the set and whose result is equivalent to the result of the operator. In this publication this idea is made precise in the context where each LTS has an alphabet of its own and the operators may depend on the alphabets. Then the extent to which LTS operators are constructible is studied. Most, if not all, established LTS operators have the property that each trace of the result arises from the execution of no more than one trace of each of its argument LTSs, and similarly for infinite traces. All LTS operators that have this property and satisfy some other rather weak regularity properties can be constructed from parallel composition and hiding up to the equivalence that compares the alphabets, traces, and infinite traces of the LTSs. Furthermore, a collection of other miscellaneous constructibility and unconstructibility results is presented.
Research output: Contribution to journal › Article › Scientific › peer-review
The physical location of data in cloud storage is an increasingly urgent problem. In a short time, it has evolved from the concern of a few regulated businesses to an important consideration for many cloud storage users. One of the characteristics of cloud storage is the fluid transfer of data both within and among the data centres of a cloud provider. However, this has weakened the guarantees with respect to control over data replicas, protection of data in transit and the physical location of data. This paper addresses the lack of reliable solutions for data placement control in cloud storage systems. We analyse the currently available solutions and identify their shortcomings. Furthermore, we describe a high-level architecture for a trusted, geolocation-based mechanism for data placement control in distributed cloud storage systems, which is the basis of ongoing work to define the detailed protocol and a prototype of such a solution. This mechanism aims to provide granular control over the capabilities of tenants to access data placed on geographically dispersed storage units comprising the cloud storage.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Context. Among the static analysis tools available, SonarQube is one of the most widely used. SonarQube detects Technical Debt (TD) items (i.e., violations of coding rules) and then estimates TD as the time needed to remedy the TD items. However, practitioners are still skeptical about the accuracy of the remediation time estimated by the tool. Objective. In this paper, we analyze both the diffuseness of TD items and the accuracy of the remediation time that SonarQube estimates for fixing them, on a set of 21 open-source Java projects. Method. We designed and conducted a case study in which we asked 81 junior developers to fix TD items and reduce the TD of the 21 projects. Results. We observed that TD items are diffused in the analyzed projects and that most items are code smells. Moreover, the results point out that the remediation time estimated by SonarQube is inaccurate and, compared with the actual time spent fixing TD items, is in most cases overestimated. Conclusions. The results of our study are promising for practitioners and researchers: the former can make more informed decisions during project execution and resource management, and the latter can use this study as a starting point for improving TD estimation models.
EXT="Lenarduzzi, Valentina"
Research output: Contribution to journal › Article › Scientific › peer-review
In-band full-duplex (FD) operation can be regarded as one of the greatest discoveries in civilian/commercial wireless communications so far in this century. The concept is significant because it can as much as double the spectral efficiency of wireless data transmission by exploiting the new-found capability for simultaneous transmission and reception (STAR) that is facilitated by advanced self-interference cancellation (SIC) techniques. As the first of its kind, this paper surveys the prospects of exploiting the emerging FD radio technology in military communication applications as well. In addition to spectrally efficient two-way data transmission, the STAR capability could give a major technical advantage for armed forces by allowing their radio transceivers to conduct electronic warfare at the same time when they are also receiving or transmitting information signals at the same frequency band. After providing a detailed introduction to FD transceiver architectures and SIC requirements in military communications, this paper outlines and analyzes some potential defensive and offensive applications of the STAR capability.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Open Source Software (OSS) communities do not often invest in marketing strategies to promote their products in a competitive way. Even the home pages of the web portals of well-known OSS products show technicalities and details that are not relevant for a fast and effective evaluation of the product's qualities. So, final users and even developers, who are interested in evaluating and potentially adopting an OSS product, are often negatively impressed by the quality perception they have from the web portal of the product and turn to proprietary software solutions or fail to adopt OSS that may be useful in their activities. In this paper, we define an evaluation model and we derive a checklist that OSS developers and web masters can use to design their web portals with all the contents that are expected to be of interest for OSS final users. We exemplify the use of the model by applying it to the Apache Tomcat web portal and we apply the model to 22 well-known OSS portals.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Open Source Software (OSS) communities do not often invest in marketing strategies to promote their products in a competitive way. Even the home pages of the web portals of well-known OSS products show technicalities and details that are not relevant for a fast and effective evaluation of the product's qualities. So, final users and even developers who are interested in evaluating and potentially adopting an OSS product are often negatively impressed by the quality perception they have from the web portal of the product and turn to proprietary software solutions or fail to adopt OSS that may be useful in their activities. In this paper, we define OP2A, an evaluation model, and we derive a checklist that OSS developers and web masters can use to design (or improve) their web portals with all the contents that are expected to be of interest for OSS final users. We exemplify the use of the model by applying it to the Apache Tomcat web portal and we apply the model to 47 web sites of well-known OSS products to highlight the current deficiencies that characterize these web portals.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In recent years, several countries have placed strong emphasis on openness, especially open data, which can be shared and further processed into various applications. Based on studies, the majority of open data providers are government organizations. This study presents two cases in which the data providers are companies. The cases are analyzed using a framework for open data based business models derived from the literature and several case studies. The analysis focuses on the beginning of the data value chain. As a result, the study highlights the role of data producers in the ecosystem, which has not been the focus in current frameworks.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The social impact of games on players and developers, software quality, and game labour are cornerstones of a software game production model. Openness is, naturally, a significant factor in game evolution, overall acceptance, and success. The authors focus on exploring these issues within the proprietary (closed) and non-proprietary (free/open) source types of software development. They identify developmental strengths and weaknesses for (i) game evolution, (ii) game developers, and (iii) game players. The main focus of the paper is on development that takes place after the first release of a game with the help of add-ons. In conclusion, suggestions are made for a more open and collaborative process model of game evolution that could benefit both types of development and all stakeholders involved. This process can integrate quality features from open and traditional development suitable for game construction.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Industrial information systems record and store data about the status and use of the complex underlying production systems and processes. These data can be analyzed to improve existing products, processes, and services, and to innovate new ones. This work focuses on a relatively unexplored area of industrial data analytics: understanding end-user behaviors and their implications for the design, implementation, training, and servicing of industrial systems. We report the initial findings from a requirements gathering workshop conducted with industry participants to identify the expected opportunities and goals with logged usage data and the related needs to support these aims. Our key contributions include a characterization of the types of data that need to be collected and visualized, how these data can be used to understand product usage, a description of the business purposes the information can be used for, and experience goals to guide the development of a novel usage data analytics tool. An interesting future research direction could include the privacy issues related to using logged usage data when only a limited number of users are logged.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Professional
Multirate filter banks can be implemented efficiently using fast-convolution (FC) processing. The main advantage of the FC filter banks (FC-FB) compared with the conventional polyphase implementations is their increased flexibility, that is, the number of channels, their bandwidths, and the center frequencies can be independently selected. In this paper, an approach to optimize the FC-FBs is proposed. First, a subband representation of the FC-FB is derived. Then, the optimization problems are formulated with the aid of the subband model. Finally, these problems are conveniently solved with the aid of a general nonlinear optimization algorithm. Several examples are included to demonstrate the proposed overall design scheme as well as to illustrate the efficiency and the flexibility of the resulting FC-FB.
Research output: Contribution to journal › Article › Scientific › peer-review
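The following minimal numpy sketch shows the basic fast-convolution channel-extraction step for a single subband and a single block, only to make the FC-FB principle above concrete; overlap processing and the prototype-filter optimization that the paper addresses are omitted, and the bin indexing and weight ordering are assumptions made for illustration.

    import numpy as np

    def fc_extract_channel(x_block, center_bin, n_sub, weights):
        # x_block: length-N time-domain block; weights: length-n_sub frequency-domain
        # window (prototype filter response), ordered with the channel center in the middle
        N = len(x_block)
        X = np.fft.fft(x_block)
        idx = (center_bin + np.arange(-(n_sub // 2), n_sub - n_sub // 2)) % N
        sub_centered = weights * X[idx]          # weighted bins around the channel
        sub = np.fft.ifftshift(sub_centered)     # move the channel center to bin 0
        return np.fft.ifft(sub) * (n_sub / N)    # short IFFT gives the decimated subband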
The limitations of state-of-the-art cellular modems prevent achieving low-power and low-latency Machine Type Communications (MTC) based on current power saving mechanisms alone. Recently, the concept of a wake-up scheme has been proposed to enhance the battery lifetime of 5G devices while reducing the buffering delay. Existing wake-up algorithms use static operational parameters that are determined by the radio access network at the start of the user's session. In this paper, the average power consumption of a wake-up-enabled MTC UE is modeled using a semi-Markov process and then optimized through a delay-constrained optimization problem, by which the optimal wake-up cycle is obtained in closed form. Numerical results show that the proposed solution reduces the power consumption of an optimized Discontinuous Reception (DRX) scheme by up to 40% for a given delay requirement.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
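A toy, heavily simplified view of the delay-constrained trade-off described above (not the paper's semi-Markov model or its closed-form result): assuming a fixed wake-up energy amortized over the cycle and an average buffering delay of roughly half the cycle length, the optimizer simply picks the longest cycle that still meets the delay budget.

    def best_wakeup_cycle(p_sleep, e_wakeup, d_max, candidates):
        # p_sleep: sleep-floor power, e_wakeup: energy per wake-up, d_max: delay budget
        feasible = [t for t in candidates if t / 2.0 <= d_max]   # toy delay model: ~T/2
        if not feasible:
            raise ValueError("no cycle length satisfies the delay budget")
        # toy power model: sleep floor plus wake-up energy amortized over the cycle
        return min(feasible, key=lambda t: p_sleep + e_wakeup / t)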
Outliers are samples that are generated by different mechanisms from other, normal data samples. Graphs, in particular social network graphs, may contain nodes and edges created by scammers, malicious programs, or mistakenly by normal users. Detecting outlier nodes and edges is important for data mining and graph analytics. However, previous research in the field has focused mainly on detecting outlier nodes. In this article, we study the properties of edges and propose effective outlier edge detection algorithms. The proposed algorithms are inspired by the community structures that are very common in social networks. We found that the graph structure around an edge holds critical information for determining the authenticity of the edge. We evaluated the proposed algorithms by injecting outlier edges into real-world graph data. Experimental results show that the proposed algorithms can effectively detect outlier edges. In particular, the algorithm based on the preferential attachment random graph generation model consistently gives good performance regardless of the test graph data. More importantly, by analyzing the authenticity of the edges in a graph, we are able to reveal the underlying structure and properties of the graph. Thus, the proposed algorithms are not limited to the area of outlier edge detection. We demonstrate three different applications that benefit from the proposed algorithms: (1) a preprocessing tool that improves the performance of graph clustering algorithms; (2) an outlier node detection algorithm; and (3) a novel noisy data clustering algorithm. These applications show the great potential of the proposed outlier edge detection techniques. They also highlight the importance of analyzing the edges in graph mining, a topic that has been largely neglected by researchers.
EXT="Kiranyaz, Serkan"
Research output: Contribution to journal › Article › Scientific › peer-review
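As a hedged illustration of the community-structure intuition above (and not the authors' preferential-attachment-based algorithm), the sketch below scores each undirected edge by the neighborhood overlap of its endpoints; a low score marks a candidate outlier edge.

    def edge_outlier_scores(adj):
        # adj: dict mapping node -> set of neighbor nodes (undirected graph)
        scores = {}
        for u, nbrs in adj.items():
            for v in nbrs:
                if u < v:                                  # count each undirected edge once
                    shared = adj[u] & adj[v]
                    union = (adj[u] | adj[v]) - {u, v}
                    scores[(u, v)] = len(shared) / len(union) if union else 0.0
        return scores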
Video coding technology in the last 20 years has evolved, producing a variety of different and complex algorithms and coding standards. So far, the specification of such standards, and of the algorithms that build them, has been done case by case, providing monolithic textual and reference software specifications in different forms and programming languages. However, very little attention has been given to providing a specification formalism that explicitly presents the components common between standards and the incremental modifications of such monolithic standards. The MPEG Reconfigurable Video Coding (RVC) framework is a new ISO standard, currently in its final stage of standardization, aiming at providing video codec specifications at the level of library components instead of monolithic algorithms. The new concept is to be able to specify a decoder of an existing standard, or a completely new configuration that may better satisfy application-specific constraints, by selecting standard components from a library of standard coding algorithms. The possibility of dynamic configuration and reconfiguration of codecs also requires new methodologies and new tools for describing the new bitstream syntaxes and the parsers of such new codecs. The RVC framework is based on the use of a new actor/dataflow-oriented language called CAL for the specification of the standard library and the instantiation of the RVC decoder model. This language has been specifically designed for modeling complex signal processing systems. CAL dataflow models expose the intrinsic concurrency of the algorithms by employing the notions of actor programming and dataflow. The paper gives an overview of the concepts and technologies building the standard RVC framework and the non-standard tools supporting the RVC model, from the instantiation and simulation of the CAL model to software and/or hardware code synthesis.
Research output: Contribution to journal › Article › Scientific › peer-review
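Purely to give a flavor of the actor/dataflow style described above (CAL is a dedicated language, and this Python sketch is not part of the RVC framework), coding tools can be pictured as small actors that fire whenever tokens are available on their input queue and push results to downstream actors.

    from collections import deque

    class Actor:
        def __init__(self, fn):
            self.fn = fn                 # the actor's firing function: token -> token
            self.inbox = deque()         # input FIFO
            self.outputs = []            # downstream actors

        def fire(self):
            while self.inbox:            # fire as long as input tokens are available
                result = self.fn(self.inbox.popleft())
                for consumer in self.outputs:
                    consumer.inbox.append(result)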
Run-time attacks against programs written in memory-unsafe programming languages (e.g., C and C++) remain a prominent threat against computer systems. The prevalence of techniques like return-oriented programming (ROP) in attacking real-world systems has prompted major processor manufacturers to design hardware-based countermeasures against specific classes of run-time attacks. An example is the recently added support for pointer authentication (PA) in the ARMv8-A processor architecture, commonly used in devices like smartphones. PA is a low-cost technique to authenticate pointers so as to resist memory vulnerabilities. It has been shown to enable practical protection against memory vulnerabilities that corrupt return addresses or function pointers. However, so far, PA has received very little attention as a general purpose protection mechanism to harden software against various classes of memory attacks. In this paper, we use PA to build novel defenses against various classes of run-time attacks, including the first PA-based mechanism for data pointer integrity. We present PARTS, an instrumentation framework that integrates our PA-based defenses into the LLVM compiler and the GNU/Linux operating system and show, via systematic evaluation, that PARTS provides better protection than current solutions at a reasonable performance overhead.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Managing master data as an organization-wide function enforces changes in responsibilities and established ways of working. These changes cause tensions in the organization and can result in conflicts. Understanding these tensions and mechanisms helps the organization manage the change more effectively. The tensions and conflicts are studied through the theory of paradox. The objective of this paper is to identify paradoxes in a Master Data Management (MDM) development process and the factors that contribute to the emergence of these conflicts. Altogether, thirteen MDM-specific paradoxes were identified, and the factors leading to them are presented. The paradoxes were grouped into categories that represent the organization's core activities in order to understand how tensions are embedded within the organization and how they are experienced. Five paradoxes were examined more closely to illustrate the circumstances in which they appear. Working through the tensions also sheds light on how these paradoxes should be managed. This example illustrates how problems emerge as dilemmas and evolve into paradoxes.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Professional
Digital predistortion (DPD) is a widely adopted baseband processing technique in current radio transmitters. While DPD can effectively suppress unwanted spurious spectrum emissions stemming from imperfections of analog RF and baseband electronics, it also introduces extra processing complexity and poses challenges for efficient and flexible implementations, especially for mobile cellular transmitters, considering their limited computing power compared to base stations. In this paper, we present high-data-rate implementations of broadband DPD on modern embedded processors, such as mobile GPUs and multicore CPUs, by taking advantage of emerging parallel computing techniques to exploit their computing resources. We further verify the suppression effect of DPD experimentally on real radio hardware platforms. Performance evaluation results of our DPD design demonstrate the high efficacy of modern general-purpose mobile processors in accelerating DPD processing for a mobile transmitter.
Research output: Contribution to journal › Article › Scientific › peer-review
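The sketch below implements a generic memory-polynomial predistorter, a common DPD structure, only to make the kind of baseband processing discussed above concrete; it is not the paper's parallelized GPU/CPU implementation, and the coefficient layout is an assumption for illustration.

    import numpy as np

    def memory_polynomial(x, coeffs, max_order, memory_depth):
        # x: complex baseband samples; coeffs[k_index, m]: coefficient of the
        # odd-order nonlinearity k = 1, 3, ... at memory tap m
        y = np.zeros_like(x, dtype=complex)
        for k_index, k in enumerate(range(1, max_order + 1, 2)):
            basis = x * np.abs(x) ** (k - 1)
            for m in range(memory_depth + 1):
                # delay the basis branch by m samples (zero-padded at the start)
                delayed = np.concatenate((np.zeros(m, dtype=complex), basis[:len(basis) - m]))
                y += coeffs[k_index, m] * delayed
        return y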
In recent work, a graphical modeling construct called "topological patterns" has been shown to enable concise representation and direct analysis of repetitive dataflow graph sub-structures in the context of design methods and tools for digital signal processing systems (Sane et al. 2010). In this paper, we present a formal design method for specifying topological patterns and deriving parameterized schedules from such patterns based on a novel schedule model called the scalable schedule tree. The approach represents an important class of parameterized schedule structures in a form that is intuitive for representation and efficient for code generation. Through application case studies involving image processing and wireless communications, we demonstrate our methods for topological pattern representation, scalable schedule tree derivation, and associated dataflow graph code generation.
Research output: Contribution to journal › Article › Scientific › peer-review
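As a rough, assumption-laden sketch of a scalable-schedule-tree-like structure (not the authors' formal model or code generator): internal nodes carry a repetition count that may depend on parameters, leaves name actor firings, and code generation is a simple traversal.

    class ScheduleNode:
        def __init__(self, repeat=1, children=None, actor=None):
            self.repeat = repeat            # int, or a callable of the parameter dict
            self.children = children or []  # child schedule nodes
            self.actor = actor              # actor name to fire at a leaf, or None

    def generate(node, emit, params):
        reps = node.repeat(params) if callable(node.repeat) else node.repeat
        for _ in range(reps):
            if node.actor is not None:
                emit(node.actor)            # e.g., append a firing to a code buffer
            for child in node.children:
                generate(child, emit, params)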
The energy requirements of cities' inhabitants have grown during the last decade. Recent studies justify the necessity of reducing energy consumption and emissions in cities. The present paper gives an overview of the factors affecting the energy consumption of citizens, based on studies conducted in cities across the globe. The studies cover the factors that affect citizens' mobility choices, which in turn affect their final energy consumption. The results of the review are used to support authorities in mobility decisions in order to achieve a sustainable transport sector in smart cities.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Condition pre-enforcement is one of the known methods for rights adaptation. In relation to the integration of the rights exporting process, we identify issues introduced by condition pre-enforcement and the potential risks of granting unexpected rights when exporting rights back and forth. We propose a solution to these problems in the form of a new algorithm called Passive Condition Pre-enforcement (PCP) and discuss the impact of PCP on the existing process of rights exporting.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The block-based multi-metric fusion (BMMF) is one of the state-of-the-art perceptual image quality assessment (IQA) schemes. In this scheme, image quality is analyzed in a block-by-block fashion according to the block content type (i.e., smooth, edge, and texture blocks) and the distortion type. Then, a suitable IQA metric is adopted to evaluate the quality of each block. Various fusion strategies for combining the quality scores of all blocks are discussed in this work. Specifically, factors such as the distribution of quality scores and the spatial distribution of the blocks are examined using statistical methods. Finally, we compare the performance of various fusion strategies on the popular TID database.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
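To make the block-type split concrete, here is a toy classifier that labels a block as smooth, edge, or texture from its local gradient activity; the thresholds and the single-statistic rule are illustrative assumptions, not the BMMF scheme's actual classification.

    import numpy as np

    def classify_block(block, t_smooth=5.0, t_edge=20.0):
        # block: 2-D array of pixel intensities; thresholds are arbitrary placeholders
        gy, gx = np.gradient(block.astype(float))
        activity = float(np.mean(np.hypot(gx, gy)))   # average gradient magnitude
        if activity < t_smooth:
            return "smooth"
        return "edge" if activity > t_edge else "texture"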
Introduction: In 3GPP New Radio (NR) systems, frequent radio propagation path blockages can lead to the disconnection of ongoing sessions already accepted into the system, reducing the quality of service in the network. Controlling access to system resources by prioritizing ongoing sessions can increase session continuity. In this paper, we propose resource allocation with a reservation mechanism. Purpose: To develop a mathematical model for analyzing the effect of this mechanism on other system performance indicators, namely the dropping probabilities for new and ongoing sessions and the system utilization. The model takes into account the key features of 3GPP NR technology, including the heights of the interacting objects, the spatial distribution and mobility of the blockers, and the line-of-sight propagation properties between the transceivers for mmWave NR. Results: We analyzed the reservation mechanism with the help of the developed model in the form of a resource queueing system with signals, where the base station bandwidth corresponds to the resource and the signals model changes in the line-of-sight conditions between the receiving and transmitting devices. Creating a priority for ongoing sessions whose service has not yet been completed provides considerable flexibility for balancing session continuity against the dropping of new sessions, with a slight decrease in the efficiency of radio resource utilization. With the developed model, we showed that reserving even a small bandwidth (less than 10% of the total resources) to maintain ongoing sessions has a positive effect on their continuity, as it increases the probability of their successful completion. Practical relevance: The proposed mechanism works more efficiently under overload conditions and with sessions that have high data transfer rate requirements. This increases the demand for the proposed mechanism in 5G NR communication systems.
Research output: Contribution to journal › Article › Scientific › peer-review
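A minimal admission rule capturing the reservation idea above (a deliberate simplification of the resource queueing model with signals): new sessions may only occupy capacity up to a non-reserved share, while ongoing sessions re-requesting resources after a blockage may use the full bandwidth.

    def admit(request_bw, used_bw, total_bw, reserved_frac, is_ongoing):
        # reserved_frac: share of bandwidth kept for ongoing sessions (e.g., 0.1)
        limit = total_bw if is_ongoing else (1.0 - reserved_frac) * total_bw
        return used_bw + request_bw <= limit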
Web performance optimization tries to minimize the time in which web pages are downloaded and displayed in the web browser. It also means that the sizes of website resources are usually minimized. By optimizing their websites, organizations can verify the quality of response times on their websites. This increases visitor loyalty and user satisfaction. A fast website is also important for search engine optimization, and minimized resources cut the energy consumption of the Internet. In spite of the importance of optimization, there has been little research on how much the comprehensive optimization of a website can reduce load times and the sizes of web resources. This study presents the results of an optimization effort in which all the resources of a website were optimized. The results obtained were very significant: the download size of the front page was reduced by about 80 percent and the download time by about 60 percent. The server can now handle more than three times as many concurrent users as before.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
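As a small, hedged illustration of one optimization step implied above, the snippet below reports how much a text resource (HTML, CSS, JavaScript) shrinks under gzip compression; it is not the study's measurement setup.

    import gzip

    def gzip_ratio(path):
        data = open(path, "rb").read()
        compressed = gzip.compress(data, compresslevel=9)
        return len(compressed) / len(data)   # e.g., 0.2 means an 80 percent size reduction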
Applications such as robot control and wireless communication require planning under uncertainty. Partially observable Markov decision processes (POMDPs) plan policies for single agents under uncertainty, and their decentralized versions (DEC-POMDPs) find a policy for multiple agents. The policy in infinite-horizon POMDP and DEC-POMDP problems has been represented as finite state controllers (FSCs). We introduce a novel class of periodic FSCs, composed of layers connected only to the previous and next layers. Our periodic FSC method finds a deterministic finite-horizon policy and converts it into an initial periodic infinite-horizon policy. This policy is optimized by a new infinite-horizon algorithm to yield deterministic periodic policies, and by a new expectation-maximization algorithm to yield stochastic periodic policies. Our method yields better results than earlier planning methods and can compute larger solutions than regular FSCs.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
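The sketch below only illustrates the periodic, layered controller structure introduced above (policy execution, not the optimization algorithms): every transition moves to the next layer, wrapping around after the last one. The environment interface env_step is an assumed placeholder.

    class PeriodicFSC:
        def __init__(self, layers):
            # layers[i][node] = {"action": a, "next": {observation: node_in_next_layer}}
            self.layers = layers

        def run(self, env_step, start_node=0, steps=20):
            layer, node = 0, start_node
            for _ in range(steps):
                spec = self.layers[layer][node]
                observation = env_step(spec["action"])   # act in the environment, observe
                node = spec["next"].get(observation, 0)  # pick a node in the next layer
                layer = (layer + 1) % len(self.layers)   # periodic layer advance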
With regard to sustainable development, there is a growing need to gather various kinds of measurement, space, and consumption information about properties. The necessity of property condition measurement is apparent, and appropriate conditions, such as good indoor air quality and a suitable temperature, have an essential influence on comfort and welfare at work while also being significant in terms of energy efficiency. This paper presents a portable prototype system for property condition measurement. The objective was to create a reliable system that improves the quality and the visual presentation of the collected data. The paper presents the components of the system and the technology used to implement it. The results of piloting in a real-life environment, where particular focus was placed on both controlling energy efficiency and well-being at work, are also presented.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper presents an integrated self-aware computing model mitigating the power dissipation of a heterogeneous reconfigurable multicore architecture by dynamically scaling the operating frequency of each core. The power mitigation is achieved by equalizing the performance of all the cores for an uninterrupted exchange of data. The multicore platform consists of heterogeneous Coarse-Grained Reconfigurable Arrays (CGRAs) of application-specific sizes and a Reduced Instruction-Set Computing (RISC) core. The CGRAs and the RISC core are integrated with each other over a Network-on-Chip (NoC) of six nodes arranged in a topology of two rows and three columns. The RISC core constantly monitors and controls the performance of each CGRA accelerator by adjusting the operating frequencies unless the performance of all the CGRAs is optimally balanced over the platform. The CGRA cores on the platform are processing some of the most computationally-intensive signal processing algorithms while the RISC core establishes packet based synchronization between the cores for computation and communication. All the cores can access each other’s computational and memory resources while processing the kernels simultaneously and independently of each other. Besides general-purpose processing and overall platform supervision, the RISC processor manages performance equalization among all the cores which mitigates the overall dynamic power dissipation by 20.7 % for a proof-of-concept test.
Research output: Contribution to journal › Article › Scientific › peer-review
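A toy control loop conveying the performance-equalization idea above (not the platform's actual supervision logic): each core's frequency is nudged toward the throughput of the slowest core, so that no accelerator races ahead of its consumers and wastes dynamic power.

    def equalize_frequencies(freqs, throughputs, f_min, f_max, gain=0.5):
        # freqs, throughputs: one entry per core; gain controls how aggressively we adjust
        target = min(throughputs)                # no core needs to outpace the slowest one
        new_freqs = []
        for f, t in zip(freqs, throughputs):
            f_new = f * (1.0 + gain * (target - t) / max(t, 1e-9))
            new_freqs.append(min(max(f_new, f_min), f_max))
        return new_freqs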
Context: Unhandled code exceptions are often the cause of a drop in the number of users. In the highly competitive market of Android apps, users commonly stop using applications when they find some problem generated by unhandled exceptions. This is often reflected in a negative comment in the Google Play Store and developers are usually not able to reproduce the issue reported by the end users because of a lack of information. Objective: In this work, we present an industrial case study aimed at prioritizing the removal of bugs related to uncaught exceptions. Therefore, we (1) analyzed crash reports of an Android application developed by a public transportation company, (2) classified uncaught exceptions that caused the crashes; (3) prioritized the exceptions according to their impact on users. Results: The analysis of the exceptions showed that seven exceptions generated 70% of the overall errors and that it was possible to solve more than 50% of the exceptions-related issues by fixing just six Java classes. Moreover, as a side result, we discovered that the exceptions were highly correlated with two code smells, namely “Spaghetti Code” and “Swiss Army Knife”. The results of this study helped the company understand how to better focus their limited maintenance effort. Additionally, the adopted process can be beneficial for any Android developer in understanding how to prioritize the maintenance effort.
EXT="Lenarduzzi, Valentina"
jufoid=71106
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
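The prioritization step described above can be pictured with the sketch below, which ranks exception types by crash count and by the number of distinct affected users; the report field names are assumptions made for illustration, not the company's actual crash-report schema.

    from collections import Counter

    def prioritize_exceptions(crash_reports):
        # crash_reports: iterable of dicts with assumed keys "exception" and "user_id"
        counts = Counter(r["exception"] for r in crash_reports)
        users = {}
        for r in crash_reports:
            users.setdefault(r["exception"], set()).add(r["user_id"])
        # rank by crash volume first, then by how many distinct users are affected
        return sorted(counts, key=lambda e: (counts[e], len(users[e])), reverse=True)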
Providing sufficient mobile coverage during mass public events or critical situations is a highly challenging task for network operators. To fulfill the extreme capacity and coverage demands within a limited area, several augmenting solutions might be used. Among them, novel technologies like a fleet of compact base stations mounted on Unmanned Aerial Vehicles (UAVs) are gaining momentum because of their time- and cost-efficient deployment. Despite the fact that the concept of aerial wireless access networks has been investigated recently in many research studies, there are still numerous practical aspects that require further understanding and extensive evaluation. Taking this as a motivation, in this paper, we develop the concept of continuous wireless coverage provisioning by means of UAVs and assess its usability in mass scenarios with thousands of users. With our system-level simulations as well as a measurement campaign, we take into account a set of important parameters including weather conditions, UAV speed, weight, power consumption, and millimeter-wave (mmWave) antenna configuration. As a result, we provide more realistic data about the performance of the access and backhaul links together with practical lessons learned about the design and real-world applicability of UAV-enabled wireless access networks.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Insofar as our cultural heritage (CH) has become not only an economic resource but also a key element in defining our identity, its accurate and flexible documentation has emerged as an essential task. The generation of 3D information with physical and functional characteristics is now possible through the connection of survey data with Historical Building Information Modeling (HBIM). However, few studies have focused on the semantic enrichment process of models based on point clouds, especially in the field of cultural heritage. These singularities make the conversion of point clouds to 'as-built' HBIM an expensive process from the mathematical and computational viewpoint. At present, there is no software that guarantees automatic and efficient data conversion in architectural or urban contexts. The ongoing research 'Documenting and Visualizing Industrial Heritage', conducted by the School of Architecture, Tampere University of Technology, Finland, is based on an Open Notebook Research model. It focuses on advancing the knowledge of digital operating environments for the representation and management of historical buildings and sites. On the one hand, the research is advancing three-dimensional 'as-built' modeling based on remote sensing data, while on the other hand it aims to incorporate more qualitative information based on concepts of production and management in the lifecycle of the built environment. The purpose of this presentation is to discuss the different approaches to date in the HBIM generation chain: from 3D point cloud data collection to semantically enriched parametric models.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The upcoming Reconfigurable Video Coding (RVC) standard from MPEG (ISO/IEC SC29 WG11) defines a library of coding tools to specify existing or new compressed video formats and decoders. The coding tool library has been written in a dataflow/actor-oriented language named CAL. Each coding tool (actor) can be represented with an extended finite state machine, and the data communication between the tools is described as dataflow graphs. This paper proposes an approach to model the CAL actor network with Parameterized Synchronous Data Flow and to derive a quasi-static multiprocessor execution schedule for the system. In addition to proposing a scheduling approach for RVC, an extension to the well-known permutation flow shop scheduling problem that enables rapid run-time scheduling of RVC tasks is introduced.
Research output: Contribution to journal › Article › Scientific › peer-review
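For reference, the textbook recurrence behind the permutation flow shop problem mentioned above is easy to state in code: a job's completion time on a machine is its processing time plus the later of the previous job finishing on that machine and the same job finishing on the previous machine. This is the classical baseline, not the paper's run-time scheduling extension.

    def makespan(order, proc):
        # proc[j][m]: processing time of job j on machine m; order: permutation of job indices
        machines = len(proc[0])
        finish = [0.0] * machines            # completion times of the previous job per machine
        for j in order:
            for m in range(machines):
                start = max(finish[m], finish[m - 1] if m > 0 else 0.0)
                finish[m] = start + proc[j][m]
        return finish[-1]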
Motivated by the unprecedented penetration of mobile communications technology, this work carefully brings into perspective the challenges related to heterogeneous communications and offloaded computation operating in cases of fault-tolerant computation, computing, and caching. We specifically focus on the emerging augmented reality applications that require reliable delegation of the computing and caching functionality to proximate resource-rich devices. The corresponding mathematical model proposed in this work becomes of value to assess system-level reliability in cases where one or more nearby collaborating nodes become temporarily unavailable. Our produced analytical and simulation results corroborate the asymptotic insensitivity of the stationary reliability of the system in question (under the "fast" recovery of its elements) to the type of the "repair" time distribution, thus supporting the fault-tolerant system operation.
Research output: Contribution to journal › Article › Scientific › peer-review
Clustering-based Discriminant Analysis (CDA) is a well-known technique for supervised feature extraction and dimensionality reduction. CDA determines an optimal discriminant subspace for linear data projection based on the assumptions of normal subclass distributions and subclass representation by using the mean subclass vector. However, in several cases, there might be other subclass representative vectors that could be more discriminative, compared to the mean subclass vectors. In this paper we propose an optimization scheme aiming at determining the optimal subclass representation for CDA-based data projection. The proposed optimization scheme has been evaluated on standard classification problems, as well as on two publicly available human action recognition databases providing enhanced class discrimination, compared to the standard CDA approach.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
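As a hedged baseline sketch of subclass-mean-based discriminant projection (a simplified variant that uses the total mean in the between-subclass scatter, not the paper's optimized subclass representatives):

    import numpy as np

    def subclass_discriminant_projection(X, subclass_labels, n_dims):
        # X: (n_samples, n_features); subclass_labels: one subclass id per sample
        subclass_labels = np.asarray(subclass_labels)
        mean_total = X.mean(axis=0)
        d = X.shape[1]
        Sw = np.zeros((d, d))                        # within-subclass scatter
        Sb = np.zeros((d, d))                        # (simplified) between-subclass scatter
        for s in np.unique(subclass_labels):
            Xs = X[subclass_labels == s]
            mu = Xs.mean(axis=0)                     # mean subclass representative
            Sw += (Xs - mu).T @ (Xs - mu)
            diff = (mu - mean_total)[:, None]
            Sb += Xs.shape[0] * (diff @ diff.T)
        evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
        idx = np.argsort(-evals.real)[:n_dims]
        return evecs[:, idx].real                    # projection matrix (d x n_dims)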
Mobile application ecosystems have grown rapidly in the past few years. Increasing numbers of startups and established developers alike are offering their products in different marketplaces, such as Android Market and the Apple App Store. In this paper, we study the revenue models used in Android Market. For the analysis, we gathered data on 351,601 applications from their public pages at the marketplace. From these, a random sample of 100 applications was used in a qualitative study of revenue streams. The results indicate that part of the marketplace can be explained with traditional models, but free applications use complex revenue models. Based on the qualitative analysis, we identified four general business strategy categories for further study.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Since September 2015, at least two major crises have emerged involving major industrial companies that produce consumer products. In September 2015, diesel cars manufactured by Volkswagen turned out to be equipped with cheating software that reduced NO2 and other emission values to acceptable levels during testing, concealing the real, unacceptable values in normal use. In August 2016, reports began to appear that the battery of a new smartphone produced by Samsung, the Galaxy Note7, could catch fire, or even explode, while the device was on. In November 2016, 34 washing machine models were also reported to have caused damage due to disintegration. In all cases, the companies have experienced substantial financial losses, their shares have lost value, and their reputation has suffered among consumers and other stakeholders. In this paper, we study the commonalities and differences in the crisis management strategies of the companies, concentrating mostly on the crisis communication aspects. We draw on Situational Crisis Communication Theory (SCCT). The communication behaviour of the companies and various stakeholders during the crises is examined by investigating the official websites of the companies and their communication on their own Twitter and Facebook accounts. We also collected streaming data from Twitter in which Samsung and the troubled smartphone or washing machines were mentioned. For VW, we collected streaming data in which the emission scandal or its ramifications were mentioned and performed several analyses, including sentiment analysis.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
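A toy lexicon-based scorer of the kind that could support the sentiment analysis mentioned above (the word lists are invented for illustration and are not the study's tooling):

    # Hypothetical mini-lexicons; a real study would use a full sentiment lexicon or classifier.
    POSITIVE = {"good", "great", "safe", "trust", "recommend"}
    NEGATIVE = {"bad", "fire", "explode", "cheat", "recall", "scandal"}

    def sentiment_score(text):
        words = text.lower().split()
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)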
Current search engines offer limited assistance for exploration and information discovery in complex search tasks. Instead, users are distracted by the need to focus their cognitive efforts on finding navigation cues, rather than selecting relevant information. Interactive intent modeling enhances the human information exploration capacity through computational modeling, visualized for interaction. Interactive intent modeling has been shown to increase task-level information seeking performance by up to 100%. In this demonstration, we showcase SciNet, a system implementing interactive intent modeling on top of a scientific article database of over 60 million documents.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Purpose – The purpose of this paper is to discuss the ways in which information acts as a commodity in massively multiplayer online role-playing games (MMORPGs) and how players pay for items and services with information practices. Design/methodology/approach – Through a meta-theoretical analysis of the game environment as a set of information systems, one of retrieval and one social, the paper shows how players' information practices influence their access to game content, organizational status and relationship to real-money trade. Findings – By showing how information trading functions in MMORPGs, the paper displays the importance of information access for play, the efficiency of real-money trade and the significance of information-practice-based services as a relatively regular form of payment in virtual worlds. Players are furthermore shown to contribute to the information economy of the game by deciding not to share some information, so as to prevent others from losing game content value due to spoilers. Originality/value – The subject, despite the popularity of online games, has been severely understudied within library and information science. The paper contributes to that line of research by showing how games function as information systems and by explaining how they, as environments and contexts, influence and are influenced by information practices.
Research output: Contribution to journal › Article › Scientific › peer-review
The SiMPE workshop series started in 2006 [2] with the goal of enabling speech processing on mobile and embedded devices to meet the challenges of pervasive environments (such as noise) and to leverage the context they offer (such as location). SiMPE 2010 and 2011 brought together researchers from the speech and HCI communities. Multimodality received more attention in SiMPE 2008 than it had in previous years. In SiMPE 2007, the focus was on developing regions. Speech user interaction in cars was a focus area in 2009. With SiMPE 2012, the 7th in the series, we hope to explore the area of speech along with sound. When using the mobile in an eyes-free manner, it is natural and convenient to hear about notifications and events. The arrival of an SMS has used a very simple sound-based notification for a long time now. The technologies underlying speech processing and sound processing are quite different, and these communities have been working mostly independently of each other. And yet, for multimodal interactions on the mobile, it is perhaps natural to ask whether and how speech and sound can be mixed and used more effectively and naturally.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The prospects of the inband full-duplex (IBFD) technology are praised in non-military communications as it allows each radio to simultaneously transmit and receive (STAR) on the same frequencies enabling, e.g., enhanced spectral efficiency. Likewise, future defense forces may significantly benefit from the concept, because a military full-duplex radio (MFDR) would be capable of simultaneous integrated tactical communication and electronic warfare operations as opposed to the ordinary time- or frequency-division half-duplex radios currently used in all military applications. This study considers one particular application, where the MFDR performs jamming against an opponent's radio control (RC) system while simultaneously monitoring RC transmissions and/or receiving data over the air from an allied communication transmitter. The generic RC system can represent particularly, e.g., one pertaining to multicopter drones or roadside bombs. Specifically, this paper presents outcomes from recent experiments that are carried out outdoors while earlier indoor results are also revisited for reference. In conclusion, the results demonstrate that MFDRs can be viably utilized for RC signal detection purposes despite the residual self-interference due to jamming and imperfect cancellation.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Molecular communication holds the promise to enable communication between nanomachines with a view to increasing their functionalities and opening up new possible applications. Due to some of the biological properties, bacteria have been proposed as a possible information carrier for molecular communication, and the corresponding communication networks are known as bacterial nanonetworks. The biological properties include the ability for bacteria to mobilize between locations and carry the information encoded in deoxyribonucleic acid molecules. However, similar to most organisms, bacteria have complex social properties that govern their colony. These social characteristics enable the bacteria to evolve through various fluctuating environmental conditions by utilizing cooperative and non-cooperative behaviors. This article provides an overview of the different types of cooperative and non-cooperative social behavior of bacteria. The challenges (due to non-cooperation) and the opportunities (due to cooperation) these behaviors can bring to the reliability of communication in bacterial nanonetworks are also discussed. Finally, simulation results on the impact of bacterial cooperative social behavior on the end-to-end reliability of a single-link bacterial nanonetwork are presented. The article concludes by highlighting the potential future research opportunities in this emerging field.
Research output: Contribution to journal › Article › Scientific › peer-review
As the Internet of Vehicles matures and acquires its social flavor, novel wireless connectivity enablers are being demanded for reliable data transfer in high-rate applications. The recently ratified New Radio communications technology operates in millimeter-wave (mmWave) spectrum bands and offers sufficient capacity for bandwidth-hungry services. However, seamless operation over mmWave is difficult to maintain on the move, since such extremely high frequency radio links are susceptible to unexpected blockage by various obstacles, including vehicle bodies. As a result, proactive mode selection, that is, migration from infrastructure- to vehicle-based connections and back, is becoming vital to avoid blockage situations. Fortunately, the very social structure of interactions between the neighboring smart cars and their passengers may be leveraged to improve session continuity by relaying data via proximate vehicles. This paper conceptualizes the socially inspired relaying scenarios, conducts underlying mathematical analysis, continues with a detailed 3-D modeling to facilitate proactive mode selection, and concludes by discussing a practical prototype of a vehicular mmWave platform.
Research output: Contribution to journal › Article › Scientific › peer-review