This article introduces the Zeffiro interface (ZI) version 2.2 for brain imaging. ZI aims to provide a simple, accessible and multimodal open source platform for finite element method (FEM) based and graphics processing unit (GPU) accelerated forward and inverse computations in the Matlab environment. It allows one to (1) generate a multi-compartment head model, (2) evaluate a lead field (LF) matrix and (3) invert and analyze a given set of measurements. GPU acceleration is applied in each of the processing stages (1)–(3). In its current configuration, ZI includes forward solvers for electro-/magnetoencephalography (EEG) and linearized electrical impedance tomography (EIT) as well as a set of inverse solvers based on the hierarchical Bayesian model (HBM). We report the results of EEG and EIT inversion tests performed with real and synthetic data, respectively, and demonstrate numerically how the inversion parameters affect the EEG inversion outcome in HBM. GPU acceleration was found to be essential for achieving a reasonable computing time in the generation of the finite element (FE) mesh and the LF matrix. The code package can be extended in the future based on the directions given in this article.
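To make the HBM inversion stage above more concrete, the following is a minimal Python sketch of a conditionally Gaussian, hierarchical-Bayesian-style iteration for a linear lead field model. The ARD-style variance update, the noise level and all problem dimensions are assumptions made for illustration only; this is not the ZI implementation.

    import numpy as np

    def hbm_inverse(L, y, sigma2=1e-2, theta0=1e-2, n_iter=20):
        """Toy hierarchical-Bayesian inversion for y = L x + noise.

        Alternates a weighted ridge (MAP) solve for the source amplitudes x
        with an ARD-style update of the per-source variances theta.
        """
        n = L.shape[1]
        theta = np.full(n, theta0)
        for _ in range(n_iter):
            A = L.T @ L / sigma2 + np.diag(1.0 / theta)
            x = np.linalg.solve(A, L.T @ y / sigma2)
            theta = x ** 2 + theta0          # illustrative hyperparameter update
        return x

    rng = np.random.default_rng(0)
    L = rng.standard_normal((32, 200))       # 32 sensors, 200 candidate sources
    x_true = np.zeros(200); x_true[[10, 50]] = 1.0
    y = L @ x_true + 0.01 * rng.standard_normal(32)
    x_hat = hbm_inverse(L, y)
    print(np.argsort(np.abs(x_hat))[-2:])    # indices of the strongest reconstructed sources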
Research output: Contribution to journal › Article › Scientific › peer-review
In information system (IS) acquisition, one of the major challenges is to carry out the required changes in the organization. A key problem is the lack of organizational support, user participation and competence. The process of gaining organizational support has been described as the legitimation process. The legitimation process includes the actions taken by a legitimation seeker to gain legitimation from legitimation providers. In IS acquisition, individuals' behavioural patterns can be perceived as representing specific roles. Published studies combining these roles and actors in the legitimation process of IS acquisition are rare. Consequently, we explore the roles in the IS acquisition legitimation process in two cases. As a result, we illustrate how legitimation appears in practice and provide a deeper understanding of how different roles act in legitimating IS acquisitions.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Professional
In recent years, technology has been increasingly harnessed for motivating and supporting people toward various individually and collectively beneficial behaviors. One of the most popular developments in this field has been titled gamification. Gamification refers to technologies that attempt to promote intrinsic motivations toward various activities, commonly by employing design characteristics typical of games. However, a dearth of empirical evidence still exists regarding why people want to use gamification services. Based on survey data gathered from the users of a gamification service, we examine the relationship between utilitarian, hedonic and social motivations and continued use intention as well as attitude toward gamification. The results suggest that the relationship between utilitarian benefits and use is mediated by the attitude toward the use of gamification, while hedonic aspects have a direct positive relationship with use. Social factors are strongly associated with attitude, but show only a weak further association with the intention to continue using a gamification service.
Research output: Contribution to journal › Article › Scientific › peer-review
During the last decade, games have arguably become the largest form of leisure information systems (IS). However, today games are also increasingly being employed for a variety of instrumental purposes. Although games have garnered a substantial amount of research attention during the last decade, the research literature is scattered and there is still no clear and reliable understanding of why games are being used and how they are placed in the established utilitarian-hedonic continuum of information systems. To address this gap, we conducted a meta-analysis of the quantitative body of literature that has examined the reasons for using games (48 studies). Additionally, we compared the findings across games that are intended for either leisure or instrumental use. Even though games are generally regarded as a pinnacle form of hedonically oriented ISs, our results show that enjoyment and usefulness are equally important determinants for using them (though their definitive role varies between game types). Therefore, it can be posited that games are multi-purpose ISs which nevertheless rely on hedonic factors, even in the pursuit of instrumental outcomes. The present study contributes to and advances our theoretical and empirical understanding of multi-purpose ISs and the ways in which they are used.
Research output: Contribution to journal › Review Article › Scientific › peer-review
In this study we investigate purchase behavior for virtual goods in three free-to-play game environments. In modern free-to-play games, publishers sell virtual goods in order to generate revenue. However, game publishers face dire negative attitudes toward this business model, as it can entice publishers to degrade the enjoyment of the game in order to sell more virtual goods that address the artificial gaps in the game. This study focuses on a looming question in the game industry: do people buy virtual goods because they enjoy the game and want to keep on playing it, or rather because their attitudes toward virtual goods are favorable and they believe purchasing is also accepted in their peer group? Player responses (N = 2791) were gathered from three different game types: a social virtual world (Habbo) (n = 2156), first-person shooters (n = 398), and social networking games (Facebook games) (n = 237). The results support both main hypotheses: (1) enjoyment of the game reduces the willingness to buy virtual goods, while at the same time it increases the willingness to play more of the game; continued use, however, does positively predict purchase intentions for virtual goods. (2) Attitude toward virtual goods and beliefs about peers' attitudes strongly increase the willingness to purchase virtual goods. Beyond these results, the paper points to several further lines of inquiry.
Research output: Contribution to journal › Article › Scientific › peer-review
Interactive television provides useful services for older people. These include social networking tools, video on demand, and broadcast TV. Many of the Internet-mediated services require text entry. The usual multi-tap text entry supplied with TV remote controls is not suitable for many older people. In this paper, we evaluate WeSlide, a gestural text entry technique that uses the Wiimote as the input device. We conducted a study to compare WeSlide with the multi-tap technique. WeSlide was faster and less error prone, and users strongly preferred it over multi-tap.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The health of a software ecosystem is argued to be a key indicator of the well-being, longevity and performance of a network of companies. In this paper, we address what the scientific literature actually means by the concept of ‘ecosystem health’ by selecting relevant articles through a systematic literature review. Based on the final set of 38 papers, we found that, despite a common base, the term has been used to depict a wide range of hoped-for characteristics of a software ecosystem. The number of studies addressing the topic is shown to be growing, while empirical studies are still rare. Thus, further studies should aim to standardize the terminology and concepts in order to create a common base for future work. Further work is also needed to develop early indicators that warn and guide companies about problems with their ecosystems.
EXT="Hyrynsalmi, Sami"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
With the recent proliferation of consumer-grade head-mounted VR technologies, retailers as well as related scholarly fields have started to increasingly notice the potential of virtual reality (VR). However, there is no coherent understanding of the state of the art of the literature on VR shopping, of how VR shopping has been investigated, or of what empirically indicated benefits VR has for a variety of marketing outcomes. Therefore, in this paper, we systematically review the published body of literature on VR shopping (N = 40). The current study contributes to the VR shopping and marketing literature by mapping the VR technologies, product types, consumer experiences and research methods in the extant literature. The review shows that the literature on VR shopping is still in its infancy and that there remains ample room for progression in both breadth and depth, in terms of methodological rigor and theoretical prowess.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The visibility probabilities of a number of spatially localised, pose-sensitive semantic vehicle parts are encoded into a mid-level feature vector. Car pose estimation is then formulated as a regression from concatenated low- and mid-level features to continuously changing viewing angles. Each dimension of our visibility-aware part codes separates all the training samples into two groups according to the part's visual existence in images, which provides an additional part-specific range constraint on viewing angles. Moreover, the proposed codes can alleviate the effects of sparse and imbalanced data distributions by modelling latent dependencies across angle targets. Experimental evaluation for car pose estimation on the EPFL Multi-View Car benchmark demonstrates significant improvement of our method over state-of-the-art regression methods, especially when only sparse and imbalanced data is available.
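As a rough illustration of regressing a viewing angle from concatenated low- and mid-level features, the sketch below uses a ridge regressor on synthetic data and encodes the angle by its sine and cosine to respect its circular nature. The feature dimensions, the regressor and the angle encoding are assumptions made for illustration; the paper's actual part-code construction and regression method are not reproduced here.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n, d_low, n_parts = 500, 128, 12

    low_level = rng.standard_normal((n, d_low))        # stand-in appearance descriptors
    part_codes = rng.integers(0, 2, (n, n_parts))      # binary part-visibility codes
    angles = rng.uniform(0, 360, n)                    # viewing angles in degrees

    X = np.hstack([low_level, part_codes])             # concatenated low- and mid-level features
    Y = np.column_stack([np.sin(np.radians(angles)),   # circular target encoded as (sin, cos)
                         np.cos(np.radians(angles))])

    model = Ridge(alpha=1.0).fit(X, Y)
    pred = model.predict(X)
    pred_angles = np.degrees(np.arctan2(pred[:, 0], pred[:, 1])) % 360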
EXT="Chen, Ke"
jufoid=79229
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Many corpus linguists make the tacit assumption that part-of-speech frequencies remain constant during the period of observation. In this article, we will consider two related issues: (1) the reliability of part-of-speech tagging in a diachronic corpus and (2) shifts in tag ratios over time. The purpose is both to serve the users of the corpus by making them aware of potential problems, and to obtain linguistically interesting results. We use noun and pronoun ratios as diagnostics indicative of opposing stylistic tendencies, but we are also interested in testing whether any observed variation in the ratios could be accounted for in sociolinguistic terms. The material for our study is provided by the Parsed Corpus of Early English Correspondence (PCEEC), which consists of 2.2 million running words covering the period 1415-1681. The part-of-speech tagging of the PCEEC has its problems, which we test by reannotating the corpus according to our own principles and comparing the two annotations. While there are quite a few changes, the mean percentage of change is very small for both nouns and pronouns. As for variation over time, the mean frequency of nouns declines somewhat, while the mean frequency of pronouns fluctuates with no clear diachronic trend. However, women consistently use more pronouns than men, while men use more nouns than women. More fine-grained distinctions are needed to uncover further regularities and possible reasons for this variation.
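The noun and pronoun ratios discussed above can be computed in a few lines once the corpus is available as tagged tokens. The sketch below uses hypothetical (year, writer gender, part-of-speech tag) records and simple tag names; the actual PCEEC annotation scheme and period divisions are not reproduced.

    from collections import defaultdict

    # hypothetical tagged tokens: (year, writer_gender, part_of_speech_tag)
    tokens = [
        (1450, "F", "N"), (1450, "F", "PRO"), (1450, "M", "N"),
        (1600, "M", "N"), (1600, "F", "PRO"), (1600, "M", "ADJ"),
    ]

    counts = defaultdict(lambda: {"N": 0, "PRO": 0, "total": 0})
    for year, gender, tag in tokens:
        key = (year // 50 * 50, gender)                # 50-year sub-periods, by gender
        counts[key]["total"] += 1
        if tag in ("N", "PRO"):
            counts[key][tag] += 1

    for key, c in sorted(counts.items()):
        print(key,
              "noun ratio:", round(c["N"] / c["total"], 2),
              "pronoun ratio:", round(c["PRO"] / c["total"], 2))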
Research output: Contribution to journal › Article › Scientific › peer-review
Startups are creating innovative new products and services while seeking fast growth with limited resources. The capability to produce software products with good user experience (UX) can help a startup gain positive attention and revenue. Practices and needs for UX design in startups are not well understood. Research can provide insight into how to design UX with limited resources, as well as into gaps indicating what kinds of better practices should be developed. In this paper we describe the results of an interview study with eight startups operating in Finland. Current UX practices, challenges and needs for the future were investigated. The results show that personal networks have a significant role in helping startups gain professional UX advice as well as user feedback when designing for UX. When scaling up, startups expect usage data and analytics to guide them towards better UX design.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Open education and distance learning are not new pedagogical innovations. However, through the introduction of Massive Open Online Courses (MOOC), they have recently attracted a great deal of attention among educational establishments. MOOCs can be considered a threat to small universities, but, on the other hand, they can also be a means of providing opportunities to develop their core activities. The challenge is how universities will perceive this phenomenon and take advantage of the new chances it brings. This paper examines the utilization of MOOCs from several points of view. The focus is on degree courses and continuing education offered by universities, but in-house personnel training in companies is also discussed. The issue is how to find proper ways to utilize third-party MOOCs in these three domains. Based on our investigations, the paper introduces a preliminary model for exploiting MOOCs in the development of education and training programs.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
When utilising multidimensional OLAP (On-Line Analytic Processing) analysis models in Business Intelligence analysis, it is common that users need to add new, unanticipated dimensions to the OLAP cube. In a conventional implementation, this would imply frequent re-designs of the cube's dimensions. We present an alternative method for the addition of new dimensions. Interestingly, the same design method can also be used to import EAV (Entity-Attribute-Value) tables into a cube. EAV tables have earlier been used to represent extremely sparse data in applications such as biomedical databases. Though space-efficient, the EAV representation can be awkward to query. Our EAV-to-OLAP cube methodology has the advantage of managing many-to-many relationships in a natural manner. Simple theoretical analysis shows that the methodology is efficient in space consumption. We demonstrate the efficiency of our approach in terms of the speed of OLAP cube re-processing when importing EAV-style data, comparing the performance of our cube design method with that of the conventional cube design.
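To illustrate the basic shape of the problem, the toy sketch below pivots EAV-style rows into a wide table whose columns can then act as cube dimensions. It uses pandas and assumes one value per (entity, attribute) pair; it is only a minimal illustration of EAV flattening, not the cube design method proposed in the paper.

    import pandas as pd

    # EAV-style rows: one (entity, attribute, value) triple per row (toy data)
    eav = pd.DataFrame({
        "entity":    ["p1", "p1", "p2", "p2", "p2"],
        "attribute": ["age", "diagnosis", "age", "diagnosis", "smoker"],
        "value":     [34, "asthma", 51, "diabetes", "yes"],
    })

    # Pivot the attributes into columns so they can serve as dimensions;
    # missing entity/attribute combinations simply become NaN (sparse data).
    wide = eav.pivot(index="entity", columns="attribute", values="value")
    print(wide)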
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Elicitation studies allow collecting interaction methods directly from end-users by presenting the users with the end effect of an operation and then asking them to perform the action that caused it. Applying elicitation studies in the domain of collocated interaction might enable designing more intuitive and natural group interaction methods. However, in the past elicitation studies have primarily been conducted with individual users - they have rarely been applied to groups. In this paper, we report our initial experiences in using the elicitation study methodology to generate interaction methods for groups of collocated users with wearable devices.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Purpose – The purpose of this paper is to model trails, that is, search patterns involving several search systems across a heterogeneous information environment. In addition, the author examines what kinds of trails occur in routine, semi-complex and complex tasks, and what barrier types occur during trail-blazing. Design/methodology/approach – The author used a qualitative task-based approach, shadowing six molecular medicine researchers over six months and collecting their web interaction logs. Data triangulation made this kind of detailed search system integration analysis possible. Findings – Five trail patterns emerged: branches, chains, lists, singles and berrypicking trails. Berrypicking was typical of complex work tasks, whereas branches were common in routine work tasks. Singles and lists were typically employed in semi-complex tasks. In all kinds of trails, barriers often occurred during interaction with a single system, but a considerable number of barriers were related to malfunctioning system integration and missing integration features. The findings suggest that trails could be used to reduce the amount of laborious manual system integration, and that the explorative search process in berrypicking trails needs support. Originality/value – Research on information behaviour yielding different types of search patterns with several search systems during real-world work task performance in molecular medicine has not been published previously. The author presents a task-based approach to modelling search behaviour patterns, and discusses the issue of system integration, which is a great challenge in the biomedical domain, from the viewpoints of information studies and search behaviour.
Research output: Contribution to journal › Article › Scientific › peer-review
A software defect that exposes a software system to a cyber security attack is known as a software vulnerability. A software security exploit is an engineered software solution that successfully exploits the vulnerability. Exploits are used to break into computer systems, but exploits are currently also used for security testing, security analytics, intrusion detection, consultation, and other legitimate and legal purposes. A well-established market for software vulnerabilities emerged in the 2000s. The current market segments populated by small and medium-sized companies exhibit signals that may eventually lead to a similar industrialization of software exploits. To these ends and against these industry trends, this paper observes the first online market place for trading exploits between buyers and sellers. The paper adopts three different perspectives to study the case. The paper (a) portrays the studied exploit market place against the historical background of the software security industry. A qualitative assessment is made to (b) evaluate the case against the common characteristics of traditional online market places. The qualitative observations are used in the quantitative part (c) for predicting the price of exploits with partial least squares regression. The results show that (i) the case is unique from a historical perspective, although (ii) the online market place characteristics are familiar. The regression estimates also indicate that (iii) the pricing of exploits is only partially dependent on such factors as the targeted platform, the date of disclosure of the exploited vulnerability, and the quality assurance service provided by the market place provider. The results make it possible to contemplate (iv) practical means for enhancing the market place.
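For readers unfamiliar with partial least squares (PLS) regression, the sketch below fits a PLS model to synthetic exploit-price data. The feature set (platform indicators, days since disclosure, a verification flag) and the price-generating formula are invented for illustration and do not reflect the variables or results of the study.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    n = 200
    platform = rng.integers(0, 3, n)                   # hypothetical targeted platform
    X = np.column_stack([
        platform == 0, platform == 1, platform == 2,   # one-hot platform indicators
        rng.integers(0, 2000, n),                      # days since vulnerability disclosure
        rng.integers(0, 2, n),                         # verified by the market place provider
    ]).astype(float)
    price = 50 + 0.02 * X[:, 3] + 40 * X[:, 4] + rng.normal(0, 10, n)

    pls = PLSRegression(n_components=2).fit(X, price)
    print("R^2 on the toy data:", round(pls.score(X, price), 3))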
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Cloud orchestration frameworks are commonly used to deploy and operate cloud infrastructure. Their role spans both vertically (deployment on infrastructure, platform, application and microservice levels) and horizontally (deployments from many distinct cloud resource providers). However, despite the central role of orchestration, the popular orchestration frameworks lack mechanisms to provide security guarantees for cloud operators. In this work, we analyze the security landscape of cloud orchestration frameworks for multi-cloud infrastructure. We identify a set of attack scenarios, define security enforcement enablers and propose an architecture for a security-enabled cloud orchestration framework for multi-cloud application deployments.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Dataflow languages enable describing signal processing applications in a platform-independent fashion, which makes them attractive in today's multiprocessing era. RVC-CAL is a dynamic dataflow language that enables describing complex data-dependent programs such as video decoders. To date, design automation toolchains for RVC-CAL have enabled creating workstation software, dedicated hardware and embedded application-specific multiprocessor implementations out of RVC-CAL programs. However, no solution has been presented for executing RVC-CAL applications on generic embedded multiprocessing platforms. This paper presents a dataflow-based multiprocessor communication model, an architecture prototype that uses it and an automated toolchain for instantiating such a platform and the software for it. The complexity of the platform increases linearly as the number of processors is increased. The experiments in this paper use several instances of the proposed platform with different numbers of processors. An MPEG-4 video decoder is mapped to the platform and executed on it. Benchmarks are performed on an FPGA board.
Research output: Contribution to journal › Article › Scientific › peer-review
To improve both the quality and the perceived trustworthiness of Open Source Software (OSS) products, we introduce the new idea of certifying the testing process of an OSS system. While the global certification of an OSS product is an emerging research field, the idea of certifying only its testing process has never been studied, in contrast to the case of Closed Source Software (CSS) products. The certification of the testing process has a twofold goal: to simplify the process of testing OSS products by guiding developers in identifying proper testing strategies and the limitations of their existing testing plans, and to simplify the selection between equivalent OSS and CSS products by evaluating the certificates released by the companies. Specifically, in this paper we discuss 1) a set of issues, inherent to OSS, that must be taken into account when testing an OSS product; 2) a preliminary methodology that suggests how to certify the testing process of OSS products; and 3) the BusyBox case study, which shows how our idea can be applied to real-life OSS.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Software Quality Assurance is a complex and time-consuming task. In this study we want to observe how agile developers react to just-in-time metrics about the code smells they introduce, and how the metrics influence the quality of the output.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The increasing number of cores in systems-on-chip (SoC) has introduced challenges in software parallelization. As an answer to this, the dataflow programming model offers a concurrent and reusability-promoting approach for describing applications. In this work, a runtime for executing Dataflow Process Networks (DPN) on multicore platforms is proposed. The main difference between this work and existing methods is that the operating system is allowed to perform central processing unit (CPU) load balancing freely, instead of limiting thread migration between processing cores through CPU affinity. The proposed runtime is benchmarked on desktop and server multicore platforms using five different applications from the video coding and telecommunication domains. The results show that the proposed method offers significant improvements over the state of the art in terms of performance and reliability.
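The sketch below illustrates the core idea in miniature: dataflow actors running as ordinary threads that communicate through FIFO queues, with no CPU affinity set, so the operating system is free to balance the load. It is a toy Python illustration of the execution model, not the proposed runtime.

    import queue
    import threading

    def actor(inp, out, transform):
        """A minimal dataflow actor: consume a token, transform it, forward it."""
        while True:
            token = inp.get()
            if token is None:              # end-of-stream marker
                out.put(None)
                break
            out.put(transform(token))

    fifo_ab, fifo_bc = queue.Queue(), queue.Queue()
    worker = threading.Thread(target=actor, args=(fifo_ab, fifo_bc, lambda x: x * 2))
    worker.start()                         # no affinity is pinned; the OS schedules freely

    for token in [1, 2, 3, None]:
        fifo_ab.put(token)

    results = []
    while (tok := fifo_bc.get()) is not None:
        results.append(tok)
    worker.join()
    print(results)                         # [2, 4, 6]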
Research output: Contribution to journal › Article › Scientific › peer-review
Tools for designing signal processing systems with their semantic foundation in dataflow modeling often use high-level graphical user interfaces (GUIs) or text-based languages that allow specifying applications as directed graphs. Such graphical representations serve as an initial reference point for further analysis and optimizations that lead to platform-specific implementations. For large-scale applications, the underlying graphs often consist of smaller substructures that repeat multiple times. To enable more concise representation and direct analysis of such substructures in the context of high-level DSP specification languages and design tools, we develop the modeling concept of topological patterns and propose ways of supporting this concept in a high-level language. We augment the dataflow interchange format (DIF) language, a language for specifying DSP-oriented dataflow graphs, with constructs for supporting topological patterns, and we show how topological patterns can be effective in various aspects of embedded signal processing design flows using specific application examples.
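The following toy functions convey the flavor of topological patterns: a repeated substructure such as a chain or ring is instantiated from a single parameterized description instead of being listed edge by edge. The function names and the plain edge-list representation are invented for illustration and are not DIF syntax.

    def chain(actors):
        """Expand a 'chain' pattern into explicit dataflow edges."""
        return list(zip(actors, actors[1:]))

    def ring(actors):
        """Expand a 'ring' pattern: a chain whose last actor feeds the first."""
        return chain(actors) + [(actors[-1], actors[0])]

    # a repeated filter substructure instantiated concisely
    stages = [f"fir_{i}" for i in range(4)]
    edges = chain(["source"] + stages + ["sink"])
    print(edges)    # [('source', 'fir_0'), ('fir_0', 'fir_1'), ..., ('fir_3', 'sink')]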
Research output: Contribution to journal › Article › Scientific › peer-review
The purpose of this paper is to study the role of networking in the development and present situation of Finnish software companies. Although the target of interest of this study is Finland, the conclusions can also, to some extent, be applied to other countries with mature software industries. In Finland, uniquely wide longitudinal material on the software business is available: the software industry survey is an annual study targeting the sector that has already been repeated for 18 consecutive years. The study shows that networking has been a key trend in the industry and also a driver for internationalization, but as it has not been identified very well in the networking literature concerning the software industry, there is a clear need for further examination of software industry networks.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In the educational context, understanding the future is important for two reasons. First, we are educating people for future tasks, which require skills that will be useful in the future. Secondly, educators have to be able to select the most promising tools and technologies to apply in their work. The problem is that there is no clear way to weigh the importance of the alternatives - what the real importance of a certain technology will be in the near future and especially in the long term. In our paper, we focus on analyzing selected technologies. Our approach applies the framework developed by the authors. The promising technologies are reviewed through a systematic literature study, focused on and restricted to the information and communication technology (ICT) sector. The findings are classified according to their importance and the time span of their effectiveness. The question we answer is: What should every educator know about changes in technology?
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In business intelligence, reporting is perceived by users as the most important area. Here, we present a case study of data integration for reporting within the World Health Organization (WHO). WHO produces Communicable Disease Epidemiological Profiles for emergency affected countries. Given the nature of emergencies, the production of these reports should be timely. In order to automate the production of the reports, we have introduced a method of integrating data from multiple sources by using the RDF (Resource Description Framework) format. The model of the data is described using an RDF ontology, making validation of the data from multiple sources possible. However, since RDF is highly technical, we have designed a graphical tool for the end user. The tool can be used to configure the data sources of a given report. After this, data for the report is generated from the sources. Finally, templates are used to generate the reports.
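As a small illustration of RDF-based integration, the sketch below merges triples (as if they came from two sources) into one rdflib graph and pulls the combined values with a SPARQL query, much as a report template would. The namespace, property names and figures are hypothetical and not taken from the WHO data.

    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/epi#")      # hypothetical ontology namespace

    g = Graph()
    # triples as if loaded from two different sources
    g.add((EX.countryX, EX.reportedCases, Literal(120)))
    g.add((EX.countryX, EX.population, Literal(5400000)))

    # a SPARQL query gathering the integrated values for a report template
    results = g.query("""
        PREFIX ex: <http://example.org/epi#>
        SELECT ?cases ?pop WHERE {
            ex:countryX ex:reportedCases ?cases ;
                        ex:population ?pop .
        }
    """)
    for cases, pop in results:
        print(f"Cases: {cases}, population: {pop}")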
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In the past ten years, the Web has become a dominant deployment environment for new software systems and applications. In view of its current popularity, it is easy to forget that only 10-15 years ago hardly any developer would write serious software applications for the Web. Today, the use of the web browser as a software platform is commonplace, and JavaScript has become one of the most popular programming languages in the world. In this paper we revisit some predictions that were made over ten years ago when the Lively Kernel project was started back in 2006. Ten years later, most of the elements of the original vision have been fulfilled, although not entirely in the fashion we originally envisioned. We look back at the Lively Kernel vision, reflecting our original goals to the state of the art in web programming today.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Grounded theory method (GTM) has become popular in the information systems (IS) field despite multiple interpretations and disputes about its use and usefulness. This paper analyzes how IS researchers collaborate during the GTM process and how they report on the research process. We analyze a sample of papers from the AIS Senior Scholars’ basket of eight that use GTM as their research method to understand how researchers report collaboration in GTM research. We then draw from the previous literature and our own GTM research experiences to illustrate different alternatives of performing collaboration in GTM tasks and their pros and cons in order to help other GTM researchers. We highlight potential issues that arise from different epistemological and ontological stances and provide guidance and examples of how to avoid these issues and how to document the research process.
Research output: Contribution to journal › Article › Scientific › peer-review
Having a large number of applications in the marketplace is considered a critical success factor for software ecosystems. The number of applications has been claimed to determine which ecosystems hold the greatest competitive advantage and will eventually dominate the market. This paper investigates the influence of developer multi-homing (i.e., participating in more than one ecosystem) in three leading mobile application ecosystems. Our results show that, when regarded as a whole, mobile application ecosystems are single-homing markets. The results further show that 3% of all developers generate more than 80% of installed applications and that multi-homing is common among these developers. Finally, we demonstrate that the most installed content actually comprises only a small number of the potential value propositions. The results thus imply that attracting and maintaining developers of superstar applications is more critical for the survival of a mobile application ecosystem than the overall number of developers and applications. Hence, the mobile ecosystem is unlikely to become a monopoly. Since exclusive contracts between application developers and mobile application ecosystems are rare, multi-homing is a viable component of risk management and a publishing strategy. The study advances the theoretical understanding of the influence of multi-homing on competition in software ecosystems.
Research output: Contribution to journal › Article › Scientific › peer-review
We live in a world of accelerating changes, where technology plays an important role as an enabler. Looking ahead means being prepared for these changes. Preparedness may be reactive - reacting to the situation at the moment something happens; proactive - being prepared in advance for a situation that may happen; or preactive - being able in advance to affect something that may happen in the future and how it happens. Forecasting the future helps us to be prepared for new situations. It is based on making predictions that are derived from understanding past and present data. Known data is organized in the form of trends and further extrapolated to cover the future. From the technical point of view, there is a variety of approaches to forecasting: algorithmic, simulation, statistical analysis, etc. The methods used may be quantitative (future data is seen as a function of past data) or qualitative (subjective, based on the opinion or judgment of the target group used in the analysis). Technology is an essential part of education, both in supporting effective learning and as a content of teaching itself. As a result, every educator needs skills to analyze the future of relevant technologies. In this paper, we introduce a framework that can be used to analyze the importance of technological changes in education and as a part of curricula. The approach is based on trend analysis and classification of the relevant technologies, taking into account the time span of their effects in society. The question we answer in this paper is: How can an educator analyze the consequences of technological changes in their work?
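As a minimal example of the quantitative style of forecasting mentioned above (future data as a function of past data), the sketch below fits a linear trend to a short, invented time series and extrapolates it a few years ahead. The figures and the choice of a linear model are assumptions for illustration only.

    import numpy as np

    # hypothetical yearly observations, e.g. adoption figures for a technology
    years = np.array([2015, 2016, 2017, 2018, 2019, 2020])
    values = np.array([5.0, 8.0, 13.0, 20.0, 31.0, 45.0])

    # fit a linear trend to the known data and extrapolate it into the future
    trend = np.poly1d(np.polyfit(years, values, deg=1))
    for future_year in (2021, 2022, 2023):
        print(future_year, round(float(trend(future_year)), 1))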
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The global rise in interest towards robotics and artificial intelligence is increasing technology acceptance among companies. This further encourages manufacturing companies to invest more in robotics on their factory floors. A robot manipulator can be sufficiently mobile and dexterous to operate alongside a human as would any other colleague. However, a human-centric viewpoint is needed in the design of the work cell to provide optimal working conditions for humans and thereby enhance employee performance. We identified a set of factors required for human comfort during cooperation with robots. These factors were divided into two main groups: mental and physical. Both the mental and the physical factors were based on reviews of scientific work, robotics standards, and human factors recognized in a case study. Together these factors form the basis for a comfort zone concept in human-robot collaboration. This concept forms design principles for developing the physical work environment of the future.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Numerous users of social networking sites and services (SNS) suffer from technostress and its various strains that hinder well-being. Despite a growing research interest in technostress, the extant studies have not explained what kinds of strains SNS use can create and how these strains can be traced back to different stressors. To address this gap in research, we employed a qualitative approach based on narrative interviews. As a contribution, our findings introduce four SNS strains (concentration problems, sleep problems, identity problems, and social relation problems) and explain how they link with different underlying SNS stressors. As practical implications, the findings of this study can help technostressed users to identify their SNS strains, understand how they are created, and increase their chances of avoiding the strains in the future.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Universities are still mainly preparing students for a world in which 'do something useful', i.e. 'do something with your hands', was the main principle and work was done during strictly regulated hours. But the world has changed, and the traditional areas of human activity (which are also the main target of university courses) are rapidly diminishing. Virtual products have become more important: computer programs, mobile apps, social networks, new types of digital currencies, the Internet of Things (a voice in your bathroom suggesting you buy the next model of Alexa), video games, interactive TV, virtual reality, etc. Most of these new areas are not present in current curricula, and there are problems with incorporating them: (working) students often know (some aspects of) these areas better than many university teachers, since the corresponding knowledge is not yet present in textbooks; it is available only on the Internet. The Internet strongly influences both what we teach and how we teach.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In task-based information searching, the task at hand is a central factor affecting information search. Task complexity, in particular, has been found to affect searching. In the present study, we shadowed the tasks of seven people working in city administration. The data consist of shadowing field notes, voice recordings, photographs and forms. We study how task complexity affects information searching and information resource use. Task complexity was defined through the task performer's own experience (perceived task complexity) and her estimates of her a priori knowledge concerning the task. We analyzed the data both qualitatively and quantitatively, focusing on the links between task complexity and the use of information resources, information searching and problems encountered. We found that task complexity has a central but ambiguous relationship to task performance. The clearest differences were found between simple and complex tasks. In addition, perceived task complexity seems to affect the ways of performing the task more than a priori knowledge does. The more complex a task is perceived to be, the more searches are performed and the more they concentrate on networked resources instead of the information systems provided by the organization (SPOs). The use of resources on the task performer's PC and of the SPOs decreases when complexity increases. In turn, the use of networked resources and communication resources increases. The total number of information resources used is somewhat greater in complex and semi-complex tasks than in simple tasks, and each resource is used for a longer time on average. Our study shows that the task context, and especially task complexity, seems to affect information searching and the selection of sources.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Evaluation is central in research and development of information retrieval (IR). In addition to designing and implementing new retrieval mechanisms, one must also show through rigorous evaluation that they are effective. A major focus in IR is IR mechanisms' capability of ranking relevant documents optimally for the users, given a query. Searching for information in practice involves searchers, however, and is highly interactive. When human searchers have been incorporated in evaluation studies, the results have often suggested that better ranking does not necessarily lead to better search task, or work task, performance. Therefore, it is not clear which system or interface features should be developed to improve the effectiveness of human task performance. In the present article, we focus on the evaluation of task-based information interaction (TBII). We give special emphasis to learning tasks to discuss TBII in more concrete terms. Information interaction is here understood as behavioral and cognitive activities related to task planning, searching information items, selecting between them, working with them, and synthesizing and reporting. These five generic activities contribute to task performance and outcome and can be supported by information systems. In an attempt toward task-based evaluation, we introduce program theory as the evaluation framework. Such evaluation can investigate whether a program consisting of TBII activities and tools works and how it works and, further, provides a causal description of program (in)effectiveness. Our goal in the present article is to structure TBII on the basis of the five generic activities and consider the evaluation of each activity using the program theory framework. Finally, we combine these activity-based program theories in an overall evaluation framework for TBII. Such an evaluation is complex due to the large number of factors affecting information interaction. Instead of presenting tested program theories, we illustrate how the evaluation of TBII should be accomplished using the program theory framework in the evaluation of systems and behaviors, and their interactions, comprehensively in context.
Research output: Contribution to journal › Article › Scientific › peer-review
The military applications of interference mitigation are numerous, with the most obvious application being suppressing the effects of adversarial jamming. However, jamming mitigation also becomes essential in scenarios where the host force's jammer and signal intelligence receiver are in close proximity to each other and the jammer inadvertently introduces interference in the receiver. In this paper, through experiments carried out in a laboratory environment, we demonstrate the feasibility of digitally mitigating non-stationary narrowband interference caused by a sweep jammer, while simultaneously retaining the ability to receive and detect signals from unmanned aerial vehicle (UAV) remote control systems that use frequency hopping in the jammed frequency band.
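One common family of digital narrowband-interference mitigation techniques is frequency-domain excision: spectral bins whose power stands far above the background level are nulled before the signal is transformed back. The sketch below demonstrates that idea on a synthetic sweeping tone added to a wideband stand-in signal; the threshold factor and signal models are assumptions, and this is not necessarily the method used in the reported experiments.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4096
    t = np.arange(n)

    desired = rng.standard_normal(n)                         # stand-in for a wideband hopping signal
    jammer = 20 * np.cos(2 * np.pi * (0.05 + 1e-5 * t) * t)  # slowly sweeping narrowband tone
    received = desired + jammer

    # frequency-domain excision: null bins far above the median spectral level
    spectrum = np.fft.rfft(received)
    mask = np.abs(spectrum) < 5 * np.median(np.abs(spectrum))   # threshold factor is an assumption
    cleaned = np.fft.irfft(spectrum * mask, n=n)

    print("residual interference power before/after excision:",
          round(np.var(received - desired), 1), round(np.var(cleaned - desired), 1))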
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The problem of how to automatically provide a desired (required) visual quality in the lossy compression of still images and video frames is considered in this paper. The quality can be measured with different conventional and visual quality metrics. In this paper, we mainly employ the human visual system (HVS) based metrics PSNR-HVS-M and MSSIM, since both of them take into account several important peculiarities of the HVS. To provide a desired visual quality with high accuracy, iterative image compression procedures are proposed and analyzed. An experimental study is performed for a large number of grayscale test images. We demonstrate that there exist several coders for which the number of iterations can be substantially decreased by a reasonable selection of the starting value and the variation interval of the parameter controlling compression (PCC). The PCC value attained at the end of the iterative procedure may depend heavily upon the coder used and the complexity of the image. Similarly, the compression ratio also depends considerably on the above factors. We show that for some modern coders that take the HVS into consideration it is possible to give practical recommendations on setting a fixed PCC to provide a desired visual quality in a non-iterative manner. The case where original images are corrupted by visible noise is also briefly studied.
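The sketch below shows the general shape of such an iterative procedure: a bisection search over a parameter controlling compression (here the JPEG quality factor) until the decoded image reaches a target metric value. Plain PSNR stands in for PSNR-HVS-M/MSSIM only because it needs no extra dependencies, and the test image, target value and search bounds are assumptions for illustration.

    import io
    import numpy as np
    from PIL import Image

    def psnr(a, b):
        mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
        return 10 * np.log10(255.0 ** 2 / mse)

    def compress_to_quality(img, target_psnr, lo=5, hi=95):
        """Bisection over the JPEG quality factor (the PCC of this sketch)
        until the decoded image meets the desired metric value."""
        arr = np.asarray(img)
        best = hi
        while lo <= hi:
            q = (lo + hi) // 2
            buf = io.BytesIO()
            img.save(buf, format="JPEG", quality=q)
            buf.seek(0)
            decoded = np.asarray(Image.open(buf))
            if psnr(arr, decoded) >= target_psnr:
                best, hi = q, q - 1      # quality sufficient: try compressing harder
            else:
                lo = q + 1               # too much distortion: raise the quality factor
        return best

    ramp = np.tile(np.arange(256, dtype=np.uint8), (256, 1))   # smooth grayscale test image
    print("chosen JPEG quality:", compress_to_quality(Image.fromarray(ramp), target_psnr=40.0))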
Research output: Contribution to journal › Article › Scientific › peer-review
Count data arises for example in bioinformatics or analysis of text documents represented as word count vectors. With several data sets available from related sources, exploiting their similarities by transfer learning can improve models compared to modeling sources independently. We introduce a Bayesian generative transfer learning model which represents similarity across document collections by sparse sharing of latent topics controlled by an Indian Buffet Process. Unlike Hierarchical Dirichlet Process based multi-task learning, our model decouples topic sharing probability from topic strength, making sharing of low-strength topics easier, and outperforms the HDP approach in experiments.
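To sketch what decoupling sharing from strength can look like, the schematic generative description below uses an IBP-distributed binary matrix Z to decide which collections use which topics, and separate gamma-distributed strengths s to weight the shared topics; documents then draw their topic proportions only over the active topics. The notation and priors here are illustrative assumptions and may differ from the exact model in the paper.

    Z \sim \mathrm{IBP}(\alpha), \qquad z_{ck} \in \{0, 1\}
        \quad \text{(does collection } c \text{ use topic } k\text{?)}

    s_{ck} \sim \mathrm{Gamma}(a, b)
        \quad \text{(strength of topic } k \text{ in collection } c\text{)}

    \theta_d \sim \mathrm{Dirichlet}\!\left(z_{c(d)1} s_{c(d)1}, \ldots, z_{c(d)K} s_{c(d)K}\right),
    \qquad
    t_{dn} \sim \mathrm{Categorical}(\theta_d), \quad
    w_{dn} \sim \mathrm{Categorical}(\phi_{t_{dn}})

Here c(d) denotes the collection of document d, and the Dirichlet is effectively restricted to the topics with z = 1, so whether a topic is shared (z) is separated from how strongly it is used (s).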
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Knowledge management represents a key issue for both information systems academics and practitioners, including those who have become disillusioned by actual results that fail to deliver on exaggerated promises and idealistic visions. Social software, a tremendous global success story, has prompted similarly high expectations regarding the ways in which organizations can improve their knowledge handling. But can these expectations be met, whether in academic research or the real world? The article seeks to identify current research trends and gaps, with a focus on social knowledge environments. The proposed research agenda features four focal challenges: semi-permeable organizations, social software in professional work settings, crowd knowledge, and cross-border knowledge management. Three solutions emerge as likely methods to address these challenges: design-oriented solutions, analytical solutions, and interdisciplinary dialogue.
Research output: Contribution to journal › Article › Scientific › peer-review
This empirical paper examines whether the age of software products can explain the turnaround between the release of security advisories and the publication of vulnerability information. Building on the theoretical rationale of vulnerability life cycle modeling, this assertion is examined with an empirical sample that covers operating system releases from Microsoft and two Linux vendors. Estimation is carried out with a linear regression model. The results indicate that the age of the observed Microsoft products does not affect the turnaround times, and only feeble statistical relationships are present for the examined Linux releases. With this negative result, the paper contributes to vulnerability life cycle modeling research by presenting and rejecting one theoretically motivated and previously unexplored question. The rejection is also a positive result; there is no reason for users to fear that the turnaround times would lengthen significantly as operating system releases age.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In Global Software Development (GSD), the additional complexity caused by global distance requires processes to ease collaboration difficulties, reduce communication overhead, and improve control. How development tasks are broken down, shared and prioritized is key to project success. While the related literature provides some support for architects involved in GSD, guidelines are far from complete. This paper presents a GSD Architectural Practice Framework reflecting the views of software architects, all of whom are working in a distributed setting. In-depth interviews with architects from seven different GSD organizations revealed a complex set of challenges and practices. We found that designing software for distributed teams requires careful selection of practices that support understanding of, and adherence to, defined architectural plans across sites. Teams used Scrum, which aided communication, and Continuous Integration, which helped solve synchronization issues. However, teams deviated from the design, causing conflicts. Furthermore, there needs to be a balance between the self-organizing Scrum team methodology and the need to impose architectural design decisions across distributed sites. The research presented provides an enhanced understanding of architectural practices in GSD companies. Our GSD Architectural Practice Framework gives practitioners a cohesive set of warnings which, for the most part, are matched by recommendations.
Research output: Contribution to journal › Article › Scientific › peer-review
Understanding and structuring the use of social software by scientists is of high importance in modern research and education: new ways of cooperation and knowledge sharing lead to new ways of working for researchers in both higher education and enterprises. The possibilities of social networking services provide means for open discourse and offer easier ways to make scientific and educational resources available to the knowledge community. Within this paper, we create a research model and study the knowledge sharing and technology acceptance related factors that influence the sharing of knowledge in the form of artefacts. These artefacts consist of open science and open educational resources. With our study we validate the model of sharing influences and identify which factors are most relevant for scientists in the IS discipline when sharing scientific and educational information through social networking services. Through the research, an improved understanding of the use of social software for globally distributed and open scientific communication is obtained.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
As the Internet of Vehicles matures and acquires its social flavor, novel wireless connectivity enablers are being demanded for reliable data transfer in high-rate applications. The recently ratified New Radio communications technology operates in millimeter-wave (mmWave) spectrum bands and offers sufficient capacity for bandwidth-hungry services. However, seamless operation over mmWave is difficult to maintain on the move, since such extremely high frequency radio links are susceptible to unexpected blockage by various obstacles, including vehicle bodies. As a result, proactive mode selection, that is, migration from infrastructure- to vehicle-based connections and back, is becoming vital to avoid blockage situations. Fortunately, the very social structure of interactions between neighboring smart cars and their passengers may be leveraged to improve session continuity by relaying data via proximate vehicles. This paper conceptualizes the socially inspired relaying scenarios, conducts the underlying mathematical analysis, continues with detailed 3-D modeling to facilitate proactive mode selection, and concludes by discussing a practical prototype of a vehicular mmWave platform.
Research output: Contribution to journal › Article › Scientific › peer-review
Molecular communication holds the promise of enabling communication between nanomachines with a view to increasing their functionalities and opening up new possible applications. Due to some of their biological properties, bacteria have been proposed as a possible information carrier for molecular communication, and the corresponding communication networks are known as bacterial nanonetworks. These biological properties include the ability of bacteria to move between locations and to carry information encoded in deoxyribonucleic acid molecules. However, similar to most organisms, bacteria have complex social properties that govern their colony. These social characteristics enable the bacteria to evolve through various fluctuating environmental conditions by utilizing cooperative and non-cooperative behaviors. This article provides an overview of the different types of cooperative and non-cooperative social behavior of bacteria. The challenges (due to non-cooperation) and the opportunities (due to cooperation) these behaviors can bring to the reliability of communication in bacterial nanonetworks are also discussed. Finally, simulation results on the impact of bacterial cooperative social behavior on the end-to-end reliability of a single-link bacterial nanonetwork are presented. The article concludes by highlighting potential future research opportunities in this emerging field.
Research output: Contribution to journal › Article › Scientific › peer-review
The prospects of inband full-duplex (IBFD) technology are praised in non-military communications, as it allows each radio to simultaneously transmit and receive (STAR) on the same frequencies, enabling, e.g., enhanced spectral efficiency. Likewise, future defense forces may significantly benefit from the concept, because a military full-duplex radio (MFDR) would be capable of simultaneous integrated tactical communication and electronic warfare operations, as opposed to the ordinary time- or frequency-division half-duplex radios currently used in all military applications. This study considers one particular application, where the MFDR performs jamming against an opponent's radio control (RC) system while simultaneously monitoring RC transmissions and/or receiving data over the air from an allied communication transmitter. The generic RC system can represent, in particular, one pertaining to multicopter drones or roadside bombs. Specifically, this paper presents outcomes from recent experiments carried out outdoors, while earlier indoor results are also revisited for reference. In conclusion, the results demonstrate that MFDRs can be viably utilized for RC signal detection purposes despite the residual self-interference due to jamming and imperfect cancellation.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The SiMPE workshop series started in 2006 [2] with the goal of enabling speech processing on mobile and embedded devices to meet the challenges of pervasive environments (such as noise) and to leverage the context they offer (such as location). SiMPE 2010 and 2011 brought together researchers from the speech and HCI communities. Multimodality got more attention in SiMPE 2008 than it had received in previous years. In SiMPE 2007, the focus was on developing regions. Speech user interaction in cars was a focus area in 2009. With SiMPE 2012, the 7th in the series, we hope to explore the area of speech along with sound. When using the mobile in an eyes-free manner, it is natural and convenient to hear about notifications and events. The arrival of an SMS has used a very simple sound-based notification for a long time now. The technologies underlying speech processing and sound processing are quite different, and these communities have been working mostly independently of each other. And yet, for multimodal interactions on the mobile, it is perhaps natural to ask whether and how speech and sound can be mixed and used more effectively and naturally.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Purpose – The purpose of this paper is to discuss the ways in which information acts as a commodity in massively multiplayer online role-playing games (MMORPGs), and how players pay for items and services with information practices. Design/methodology/approach – Through a meta-theoretical analysis of the game environment as a set of information systems, one of retrieval and one social, the paper shows how players’ information practices influence their access to game content, organizational status and relationship to real-money trade. Findings – By showing how information trading functions in MMORPGs, the paper displays the importance of information access for play, the efficiency of real-money trade and the significance of information-practice-based services as a relatively regular form of payment in virtual worlds. Players are furthermore shown to contribute to the information economy of the game through the way in which they decide not to share some information, so as to protect others from a loss of game content value due to spoilers. Originality/value – The subject, despite the popularity of online games, has been severely understudied within library and information science. The paper contributes to that line of research by showing how games function as information systems, and by explaining how they, as environments and contexts, influence and are influenced by information practices.
Research output: Contribution to journal › Article › Scientific › peer-review
Current search engines offer limited assistance for exploration and information discovery in complex search tasks. Instead, users are distracted by the need to focus their cognitive efforts on finding navigation cues, rather than selecting relevant information. Interactive intent modeling enhances the human information exploration capacity through computational modeling, visualized for interaction. Interactive intent modeling has been shown to increase task-level information seeking performance by up to 100%. In this demonstration, we showcase SciNet, a system implementing interactive intent modeling on top of a scientific article database of over 60 million documents.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Since September 2015, at least two major crises have emerged in which large industrial companies producing consumer products have been involved. In September 2015, diesel cars manufactured by Volkswagen turned out to be equipped with cheating software that reduced NO2 and other emission values to acceptable levels during testing, while the real values in normal use remained unacceptable. In August 2016, reports began to appear that the battery of a new smartphone produced by Samsung, the Galaxy Note7, could begin to burn, or even explode, while the device was on. In November 2016, 34 washing machine models were also reported to have caused damage due to disintegration. In all cases, the companies have experienced substantial financial losses, their shares have lost value, and their reputation has suffered among consumers and other stakeholders. In this paper, we study the commonalities and differences in the crisis management strategies of the companies, concentrating mostly on the crisis communication aspects. We draw on Situational Crisis Communication Theory (SCCT). The communication behaviour of the companies and various stakeholders during the crises is examined by investigating the official websites of the companies and their communication on their own Twitter and Facebook accounts. We also collected streaming data from Twitter in which Samsung and the troubled smartphone or washing machines were mentioned. For VW, we likewise collected streaming data in which the emission scandal or its ramifications were mentioned, and performed several analyses, including sentiment analysis.
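As an illustration of the kind of text-level analysis mentioned above, the sketch below scores tweets with a simple word-list (lexicon) approach. The word lists, the 'text' field and the example tweets are hypothetical placeholders; the paper does not state which sentiment method it used.

```python
# Minimal lexicon-based sentiment scoring over collected tweet texts.
# The word lists, the 'text' field and the example tweets are placeholders.
POSITIVE = {"good", "great", "love", "safe", "excellent", "happy"}
NEGATIVE = {"bad", "smoke", "fire", "explode", "recall", "cheat", "broken"}

def sentiment_score(text):
    """Return (#positive - #negative) lexicon hits for one tweet."""
    words = [w.strip(".,!?#@").lower() for w in text.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

tweets = [
    {"text": "My Galaxy Note7 started to smoke, this is bad"},
    {"text": "Great customer service, replacement handled quickly"},
]
scores = [sentiment_score(t["text"]) for t in tweets]
print(scores)                     # [-2, 1]
print(sum(scores) / len(scores))  # crude average sentiment of the sample
```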
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Mobile application ecosystems have grown rapidly in the past few years. An increasing number of startups and established developers alike are offering their products in different marketplaces such as Android Market and the Apple App Store. In this paper, we study the revenue models used in Android Market. For the analysis, we gathered data on 351,601 applications from their public pages at the marketplace. From these, a random sample of 100 applications was used in a qualitative study of revenue streams. The results indicate that a part of the marketplace can be explained with traditional models, but free applications use complex revenue models. Based on the qualitative analysis, we identified four general business strategy categories for further studies.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Clustering-based Discriminant Analysis (CDA) is a well-known technique for supervised feature extraction and dimensionality reduction. CDA determines an optimal discriminant subspace for linear data projection based on the assumptions of normal subclass distributions and subclass representation by the mean subclass vector. However, in several cases there might be other subclass representative vectors that are more discriminative than the mean subclass vectors. In this paper, we propose an optimization scheme aimed at determining the optimal subclass representation for CDA-based data projection. The proposed optimization scheme has been evaluated on standard classification problems, as well as on two publicly available human action recognition databases, providing enhanced class discrimination compared to the standard CDA approach.
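For orientation, the sketch below computes a CDA-style projection in which the subclass representative is simply the mean vector, i.e. the baseline the proposed scheme improves on; the optimization of the representative itself is not reproduced, and the function and variable names are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def cda_projection(X, class_labels, subclass_labels, n_dims):
    """CDA-style projection sketch. The subclass representative used here is
    the mean vector, i.e. the baseline that the proposed optimization scheme
    replaces with a more discriminative representative."""
    X = np.asarray(X, dtype=float)
    class_labels = np.asarray(class_labels)
    subclass_labels = np.asarray(subclass_labels)
    D = X.shape[1]
    keys = sorted(set(zip(class_labels.tolist(), subclass_labels.tolist())))
    reps, Sw = {}, np.zeros((D, D))
    for c, s in keys:
        Xs = X[(class_labels == c) & (subclass_labels == s)]
        reps[(c, s)] = Xs.mean(axis=0)          # subclass representative
        diff = Xs - reps[(c, s)]
        Sw += diff.T @ diff                     # within-subclass scatter
    Sb = np.zeros((D, D))
    for a in keys:
        for b in keys:
            if a[0] != b[0]:                    # subclasses of different classes
                d = (reps[a] - reps[b])[:, None]
                Sb += d @ d.T                   # between-subclass scatter
    # Leading eigenvectors of the generalized problem Sb v = lambda Sw v.
    vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(D))
    return vecs[:, np.argsort(vals)[::-1][:n_dims]]
```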
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Motivated by the unprecedented penetration of mobile communications technology, this work brings into perspective the challenges related to heterogeneous communications and offloaded computation in scenarios of fault-tolerant communication, computing, and caching. We specifically focus on the emerging augmented reality applications that require reliable delegation of the computing and caching functionality to proximate resource-rich devices. The mathematical model proposed in this work becomes of value for assessing system-level reliability in cases where one or more nearby collaborating nodes become temporarily unavailable. Our analytical and simulation results corroborate the asymptotic insensitivity of the stationary reliability of the system in question (under the "fast" recovery of its elements) to the type of the "repair" time distribution, thus supporting the fault-tolerant system operation.
Research output: Contribution to journal › Article › Scientific › peer-review
The upcoming Reconfigurable Video Coding (RVC) standard from MPEG (ISO/IEC SC29 WG11) defines a library of coding tools to specify existing or new compressed video formats and decoders. The coding tool library has been written in a dataflow/actor-oriented language named CAL. Each coding tool (actor) can be represented with an extended finite state machine, and the data communication between the tools is described as dataflow graphs. This paper proposes an approach to model the CAL actor network with Parameterized Synchronous Data Flow and to derive a quasi-static multiprocessor execution schedule for the system. In addition to proposing a scheduling approach for RVC, an extension to the well-known permutation flow-shop scheduling problem, which enables rapid run-time scheduling of RVC tasks, is introduced.
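The permutation flow-shop problem mentioned above can be illustrated with a minimal makespan computation for a job permutation; this shows only the classic formulation, not the paper's run-time extension.

```python
def makespan(perm, proc_times):
    """Makespan of a job permutation in a classic permutation flow shop.

    proc_times[j][m] = processing time of job j on machine m.
    Completion recurrence: C[j][m] = max(C[j-1][m], C[j][m-1]) + p[j][m].
    """
    n_machines = len(proc_times[0])
    finish = [0.0] * n_machines          # completion times on each machine
    for j in perm:
        prev = 0.0
        for m in range(n_machines):
            prev = max(finish[m], prev) + proc_times[j][m]
            finish[m] = prev
    return finish[-1]

# Three jobs (e.g. coding-tool tasks) on two machines/processors.
p = [[3, 2], [2, 4], [4, 1]]
print(makespan([0, 1, 2], p))   # 10
print(makespan([2, 0, 1], p))   # 13
```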
Research output: Contribution to journal › Article › Scientific › peer-review
Insofar as our cultural heritage (CH) has become not only an economic resource but also a key element in defining our identity, its accurate and flexible documentation has emerged as an essential task. The generation of 3D information with physical and functional characteristics is now possible through the connection of survey data with Historical Building Information Modeling (HBIM). However, few studies have focused on the semantic enrichment process of models based on point clouds, especially in the field of cultural heritage. These singularities make the conversion of point clouds to 'as-built' HBIM an expensive process from the mathematical and computational viewpoint. At present, there is no software that guarantees automatic and efficient data conversion in architectural or urban contexts. The ongoing research 'Documenting and Visualizing Industrial Heritage', conducted by the School of Architecture, Tampere University of Technology, Finland, is based on an Open Notebook Research model. It focuses on advancing the knowledge of digital operating environments for the representation and management of historical buildings and sites. On the one hand, the research is advancing three-dimensional 'as-built' modeling based on remote sensing data, while on the other hand it aims to incorporate more qualitative information based on concepts of production and management in the lifecycle of the built environment. The purpose of this presentation is to discuss the different approaches to date on the HBIM generation chain: from 3D point cloud data collection to semantically enriched parametric models.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Context: Unhandled code exceptions are often the cause of a drop in the number of users. In the highly competitive market of Android apps, users commonly stop using an application when they encounter a problem caused by an unhandled exception. This is often reflected in a negative comment in the Google Play Store, and developers are usually not able to reproduce the issue reported by the end users because of a lack of information. Objective: In this work, we present an industrial case study aimed at prioritizing the removal of bugs related to uncaught exceptions. To this end, we (1) analyzed the crash reports of an Android application developed by a public transportation company, (2) classified the uncaught exceptions that caused the crashes, and (3) prioritized the exceptions according to their impact on users. Results: The analysis of the exceptions showed that seven exceptions generated 70% of the overall errors and that it was possible to solve more than 50% of the exception-related issues by fixing just six Java classes. Moreover, as a side result, we discovered that the exceptions were highly correlated with two code smells, namely “Spaghetti Code” and “Swiss Army Knife”. The results of this study helped the company understand how to better focus their limited maintenance effort. Additionally, the adopted process can be beneficial for any Android developer in understanding how to prioritize the maintenance effort.
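The prioritization step can be illustrated by counting crashes per exception type and reading off the cumulative coverage; the crash records and class names below are invented placeholders, not the company's data.

```python
from collections import Counter

# Hypothetical crash records: (exception type, Java class where it was thrown).
crashes = [
    ("NullPointerException", "RouteLoader"),
    ("NullPointerException", "RouteLoader"),
    ("SocketTimeoutException", "TimetableClient"),
    ("NullPointerException", "MapView"),
    ("IllegalStateException", "TicketCache"),
    ("SocketTimeoutException", "TimetableClient"),
]

by_exception = Counter(exc for exc, _ in crashes).most_common()
total = len(crashes)
covered = 0
print("exception type              crashes  cumulative%")
for exc, count in by_exception:
    covered += count
    print(f"{exc:<27} {count:>7}  {100 * covered / total:10.0f}%")
# The cumulative column shows how few exception types (and owning classes)
# need to be fixed to remove the bulk of the reported crashes.
```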
EXT="Lenarduzzi, Valentina"
jufoid=71106
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper presents an integrated self-aware computing model for mitigating the power dissipation of a heterogeneous reconfigurable multicore architecture by dynamically scaling the operating frequency of each core. The power mitigation is achieved by equalizing the performance of all the cores for an uninterrupted exchange of data. The multicore platform consists of heterogeneous Coarse-Grained Reconfigurable Arrays (CGRAs) of application-specific sizes and a Reduced Instruction-Set Computing (RISC) core. The CGRAs and the RISC core are integrated with each other over a Network-on-Chip (NoC) of six nodes arranged in a topology of two rows and three columns. The RISC core constantly monitors and controls the performance of each CGRA accelerator by adjusting the operating frequencies until the performance of all the CGRAs is optimally balanced over the platform. The CGRA cores on the platform process some of the most computationally intensive signal processing algorithms, while the RISC core establishes packet-based synchronization between the cores for computation and communication. All the cores can access each other’s computational and memory resources while processing the kernels simultaneously and independently of each other. Besides general-purpose processing and overall platform supervision, the RISC processor manages performance equalization among all the cores, which mitigates the overall dynamic power dissipation by 20.7% for a proof-of-concept test.
Research output: Contribution to journal › Article › Scientific › peer-review
With regard to sustainable development, there is a growing need to gather various kinds of measurement, space, and consumption information about properties. The necessity of property condition measurement is apparent: appropriate conditions, such as good indoor air quality and a suitable temperature, have an essential influence on comfort and welfare at work and, at the same time, are significant in terms of energy efficiency. This paper presents a portable prototype system for property condition measurement. The objective was to create a reliable system that improves the quality and also the visual presentation of the collected data. The paper presents the components of the system and the technology utilized to implement it. The results of piloting in a real-life environment, where particular focus was placed on both controlling energy efficiency and well-being at work, are also presented.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Applications such as robot control and wireless communication require planning under uncertainty. Partially observable Markov decision processes (POMDPs) plan policies for single agents under uncertainty, and their decentralized versions (DEC-POMDPs) find a policy for multiple agents. The policy in infinite-horizon POMDP and DEC-POMDP problems has been represented as finite state controllers (FSCs). We introduce a novel class of periodic FSCs, composed of layers connected only to the previous and next layer. Our periodic FSC method finds a deterministic finite-horizon policy and converts it into an initial periodic infinite-horizon policy. This policy is optimized by a new infinite-horizon algorithm to yield deterministic periodic policies, and by a new expectation maximization algorithm to yield stochastic periodic policies. Our method yields better results than earlier planning methods and can compute larger solutions than regular FSCs.
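A periodic FSC of the kind described can be sketched as a list of layers whose transitions point only to the next layer, with the last layer wrapping back to the first. The toy domain below (a tiger-problem-style example) and its deterministic tables are illustrative assumptions only, not the paper's policies.

```python
# A deterministic periodic finite state controller (FSC) as layered tables:
# layers[t][q] = (action, {observation: next node in layer t+1}), and the
# last layer wraps back to layer 0, which is what makes the controller periodic.
layers = [
    {0: ("listen", {"growl-left": 1, "growl-right": 0})},
    {0: ("open-left",  {"growl-left": 0, "growl-right": 0}),
     1: ("open-right", {"growl-left": 0, "growl-right": 0})},
]

def run_policy(observations, start_node=0):
    """Execute the periodic FSC on an observation sequence, returning actions."""
    layer, node, actions = 0, start_node, []
    for obs in observations:
        action, transitions = layers[layer][node]
        actions.append(action)
        node = transitions[obs]
        layer = (layer + 1) % len(layers)    # advance to the next layer, wrapping
    return actions

print(run_policy(["growl-left", "growl-right", "growl-right"]))
# ['listen', 'open-right', 'listen']
```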
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Web performance optimization tries to minimize the time in which web pages are downloaded and displayed in the web browser. It also means that the sizes of website resources are usually minimized. By optimizing their websites, organizations can ensure the quality of the response times of their sites. This increases visitor loyalty and user satisfaction. A fast website is also important for search engine optimization. Minimized resources also cut the energy consumption of the Internet. In spite of the importance of optimization, there has been little research on how much the comprehensive optimization of a website can reduce load times and the sizes of web resources. This study presents the results of an optimization effort in which all the resources of a website were optimized. The results obtained were very significant. The download size of the front page was reduced by a total of about 80 percent and the download time by about 60 percent. The server can now handle more than three times as many concurrent users as before.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The block-based multi-metric fusion (BMMF) is one of the state-of-the-art perceptual image quality assessment (IQA) schemes. In this scheme, image quality is analyzed in a block-by-block fashion according to the block content type (i.e. smooth, edge and texture blocks) and the distortion type. Then, a suitable IQA metric is adopted to evaluate the quality of each block. Various fusion strategies for combining the quality scores of all blocks are discussed in this work. Specifically, factors such as the distribution of the quality scores and the spatial distribution of the blocks are examined using statistical methods. Finally, we compare the performance of various fusion strategies on the popular TID database.
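As a rough illustration of block-wise fusion, the sketch below types blocks as smooth/edge/texture with a crude gradient rule and fuses per-block scores with a weighted average; the thresholds, weights and the per-block metric (a PSNR-like placeholder) are assumptions, not BMMF's actual metrics or fusion strategies.

```python
import numpy as np

def block_type(block):
    """Crude content typing into smooth/edge/texture from local gradients.
    The thresholds are arbitrary illustrative values."""
    gy, gx = np.gradient(block.astype(float))
    energy = np.mean(gx ** 2 + gy ** 2)
    if energy < 5.0:
        return "smooth"
    # Call it an edge block if one gradient direction clearly dominates.
    return "edge" if abs(np.mean(gx ** 2) - np.mean(gy ** 2)) > 0.5 * energy else "texture"

def fuse_scores(ref, dist, block=16, weights=None):
    """Weighted average of per-block quality scores. The per-block metric here
    is a PSNR-like placeholder, not the metrics selected by BMMF."""
    weights = weights or {"smooth": 0.8, "edge": 1.2, "texture": 1.0}
    scores, w = [], []
    H, W = ref.shape
    for y in range(0, H - block + 1, block):
        for x in range(0, W - block + 1, block):
            r = ref[y:y + block, x:x + block].astype(float)
            d = dist[y:y + block, x:x + block].astype(float)
            mse = np.mean((r - d) ** 2)
            scores.append(10 * np.log10(255 ** 2 / (mse + 1e-9)))
            w.append(weights[block_type(r)])
    return np.average(scores, weights=w)
```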
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Condition pre-enforcement is one of the known methods for rights adaptation. In relation to the integration of the rights exporting process, we identify issues introduced by condition pre-enforcement and the potential risk of granting unexpected rights when exporting rights back and forth. We propose a solution to these problems in the form of a new algorithm called Passive Condition Pre-enforcement (PCP), and discuss the impact of PCP on the existing process of rights exporting.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The energy requirements of cities' inhabitants have grown during the last decade. Recent studies justify the necessity of reducing energy consumption and emissions in cities. The present paper gives an overview of the factors affecting the energy consumption of citizens, based on studies conducted in cities across the globe. The studies cover the factors that affect citizens' mobility choices, which in turn affect their final energy consumption. The results of the review are used to support authorities in mobility decisions in order to achieve a sustainable transport sector in smart cities.
AUX=ase,"Mantilla R., M. Fernanda"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In recent work, a graphical modeling construct called "topological patterns" has been shown to enable concise representation and direct analysis of repetitive dataflow graph sub-structures in the context of design methods and tools for digital signal processing systems (Sane et al. 2010). In this paper, we present a formal design method for specifying topological patterns and deriving parameterized schedules from such patterns based on a novel schedule model called the scalable schedule tree. The approach represents an important class of parameterized schedule structures in a form that is intuitive for representation and efficient for code generation. Through application case studies involving image processing and wireless communications, we demonstrate our methods for topological pattern representation, scalable schedule tree derivation, and associated dataflow graph code generation.
Research output: Contribution to journal › Article › Scientific › peer-review
Digital predistortion (DPD) is a widely adopted baseband processing technique in current radio transmitters. While DPD can effectively suppress unwanted spurious spectrum emissions stemming from imperfections of analog RF and baseband electronics, it also introduces extra processing complexity and poses challenges for efficient and flexible implementations, especially for mobile cellular transmitters, considering their limited computing power compared to base stations. In this paper, we present high data rate implementations of broadband DPD on modern embedded processors, such as mobile GPUs and multicore CPUs, by taking advantage of emerging parallel computing techniques to exploit their computing resources. We further verify the suppression effect of DPD experimentally on real radio hardware platforms. Performance evaluation results of our DPD design demonstrate the high efficacy of modern general purpose mobile processors in accelerating DPD processing for a mobile transmitter.
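The abstract does not detail the DPD structure used; a commonly used baseline is the memory-polynomial predistorter, sketched below with coefficients assumed to be estimated beforehand (e.g. by indirect learning). The per-branch products are mutually independent, which is what makes this kind of processing amenable to GPU and multicore parallelization.

```python
import numpy as np

def memory_polynomial(x, coeffs):
    """Apply a memory-polynomial predistorter to complex baseband samples:
    y[n] = sum_m sum_k coeffs[m, k] * x[n-m] * |x[n-m]|**(2k)
    (odd-order terms only; coeffs is assumed to be estimated beforehand)."""
    M, K = coeffs.shape
    y = np.zeros(len(x), dtype=complex)
    for m in range(M):
        xm = np.roll(x, m)
        xm[:m] = 0.0                      # discard wrapped-around samples
        for k in range(K):
            y += coeffs[m, k] * xm * np.abs(xm) ** (2 * k)
    return y

# Toy usage: 3 memory taps, nonlinearity orders 1/3/5, random test signal.
rng = np.random.default_rng(0)
x = (rng.standard_normal(1024) + 1j * rng.standard_normal(1024)) / np.sqrt(2)
c = np.zeros((3, 3), dtype=complex)
c[0, 0], c[0, 1] = 1.0, -0.05 + 0.01j     # mild illustrative nonlinearity
y = memory_polynomial(x, c)
```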
Research output: Contribution to journal › Article › Scientific › peer-review
Managing master data as an organization-wide function enforces changes in responsibilities and established ways of working. These changes cause tensions in the organization and can result in conflicts. Understanding these tensions and mechanisms helps the organization manage the change more effectively. The tensions and conflicts are studied through the theory of paradox. The objective of this paper is to identify paradoxes in a Master Data Management (MDM) development process and the factors that contribute to the emergence of these conflicts. Altogether, thirteen MDM-specific paradoxes were identified and the factors leading to them were presented. The paradoxes were grouped into categories that represent the organization's core activities in order to understand how tensions are embedded within the organization and how they are experienced. Five paradoxes were examined more closely to illustrate the circumstances in which they appear. Working through the tensions also sheds light on the question of how these paradoxes should be managed. This example illustrates how problems emerge as dilemmas and evolve into paradoxes.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Professional
Video coding technology has evolved over the last 20 years, producing a variety of different and complex algorithms and coding standards. So far, the specification of such standards, and of the algorithms that build them, has been done case by case, providing monolithic textual and reference software specifications in different forms and programming languages. However, very little attention has been given to providing a specification formalism that explicitly exposes the components that standards have in common and the incremental modifications of such monolithic standards. The MPEG Reconfigurable Video Coding (RVC) framework is a new ISO standard, currently in its final stage of standardization, aiming at providing video codec specifications at the level of library components instead of monolithic algorithms. The new concept is to be able to specify a decoder of an existing standard, or a completely new configuration that may better satisfy application-specific constraints, by selecting standard components from a library of standard coding algorithms. The possibility of dynamic configuration and reconfiguration of codecs also requires new methodologies and new tools for describing the new bitstream syntaxes and the parsers of such new codecs. The RVC framework is based on the usage of a new actor/dataflow-oriented language called CAL for the specification of the standard library and the instantiation of the RVC decoder model. This language has been specifically designed for modeling complex signal processing systems. CAL dataflow models expose the intrinsic concurrency of the algorithms by employing the notions of actor programming and dataflow. The paper gives an overview of the concepts and technologies building the standard RVC framework and the non-standard tools supporting the RVC model, from the instantiation and simulation of the CAL model to software and/or hardware code synthesis.
Research output: Contribution to journal › Article › Scientific › peer-review
Outliers are samples that are generated by mechanisms different from those of normal data samples. Graphs, in particular social network graphs, may contain nodes and edges that are created by scammers, malicious programs or mistakenly by normal users. Detecting outlier nodes and edges is important for data mining and graph analytics. However, previous research in the field has mostly focused on detecting outlier nodes. In this article, we study the properties of edges and propose effective outlier edge detection algorithms. The proposed algorithms are inspired by the community structures that are very common in social networks. We found that the graph structure around an edge holds critical information for determining the authenticity of the edge. We evaluated the proposed algorithms by injecting outlier edges into real-world graph data. Experimental results show that the proposed algorithms can effectively detect outlier edges. In particular, the algorithm based on the preferential attachment random graph generation model consistently gives good performance regardless of the test graph data. More importantly, by analyzing the authenticity of the edges in a graph, we are able to reveal the underlying structure and properties of the graph. Thus, the proposed algorithms are not limited to the area of outlier edge detection. We demonstrate three different applications that benefit from the proposed algorithms: (1) a preprocessing tool that improves the performance of graph clustering algorithms; (2) an outlier node detection algorithm; and (3) a novel noisy data clustering algorithm. These applications show the great potential of the proposed outlier edge detection techniques. They also address the importance of analyzing the edges in graph mining, a topic that has been mostly neglected by researchers.
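One simple way to exploit the community structure around an edge, in the spirit of (but not identical to) the proposed algorithms, is to score an edge by how little its endpoints' neighborhoods overlap; the toy graph below is illustrative only.

```python
from collections import defaultdict

def build_adj(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def edge_outlier_score(adj, u, v):
    """Higher score = more outlier-like. Uses the Jaccard overlap of the
    endpoints' neighborhoods: edges inside a community share many common
    neighbors, while cross-community (or fabricated) edges share few."""
    nu, nv = adj[u] - {v}, adj[v] - {u}
    union = nu | nv
    if not union:
        return 1.0
    return 1.0 - len(nu & nv) / len(union)

edges = [("a", "b"), ("b", "c"), ("a", "c"),      # a small community
         ("x", "y"), ("y", "z"), ("x", "z"),      # another community
         ("a", "x")]                              # a suspicious bridge edge
adj = build_adj(edges)
ranked = sorted(((edge_outlier_score(adj, u, v), (u, v)) for u, v in edges), reverse=True)
print(ranked[0])   # the bridge edge ('a', 'x') gets the highest outlier score
```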
EXT="Kiranyaz, Serkan"
Research output: Contribution to journal › Article › Scientific › peer-review
Multirate filter banks can be implemented efficiently using fast-convolution (FC) processing. The main advantage of the FC filter banks (FC-FB) compared with the conventional polyphase implementations is their increased flexibility, that is, the number of channels, their bandwidths, and the center frequencies can be independently selected. In this paper, an approach to optimize the FC-FBs is proposed. First, a subband representation of the FC-FB is derived. Then, the optimization problems are formulated with the aid of the subband model. Finally, these problems are conveniently solved with the aid of a general nonlinear optimization algorithm. Several examples are included to demonstrate the proposed overall design scheme as well as to illustrate the efficiency and the flexibility of the resulting FC-FB.
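FC processing builds on FFT-domain block filtering; as background, the sketch below shows plain overlap-save fast convolution for a single channel, without the multirate/multichannel structure or the optimization of the actual FC-FB. The filter and FFT length are illustrative choices.

```python
import numpy as np

def fast_convolution(x, h, fft_len=256):
    """Overlap-save FFT filtering: the block-processing core that fast-convolution
    filter banks build on (single channel, no decimation; a simplified sketch)."""
    L = fft_len - len(h) + 1              # new samples consumed per block
    H = np.fft.fft(h, fft_len)
    x_pad = np.concatenate([np.zeros(len(h) - 1), x,
                            np.zeros((-len(x)) % L)])
    y = []
    for start in range(0, len(x_pad) - fft_len + 1, L):
        block = x_pad[start:start + fft_len]
        yb = np.fft.ifft(np.fft.fft(block) * H)
        y.append(yb[len(h) - 1:])         # discard the aliased prefix
    return np.concatenate(y)[:len(x)]

# Check against direct convolution.
rng = np.random.default_rng(1)
x = rng.standard_normal(1000)
h = np.sinc(np.arange(-16, 17) / 4) / 4   # a crude low-pass prototype
err = np.max(np.abs(fast_convolution(x, h) - np.convolve(x, h)[:len(x)]))
print(err)   # close to machine precision
```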
Research output: Contribution to journal › Article › Scientific › peer-review
Industrial information systems record and store data about the status and use of the complex underlying production systems and processes. These data can be analyzed to improve existing products, processes, and services and to innovate new ones. This work focuses on a relatively unexplored area of industrial data analytics: understanding end-user behaviors and their implications for the design, implementation, training and servicing of industrial systems. We report the initial findings from a requirements gathering workshop conducted with industry participants to identify the expected opportunities and goals with logged usage data and the related needs to support these aims. Our key contributions include a characterization of the types of data that need to be collected and visualized, how these data can be used to understand product usage, a description of the business purposes the information can be used for, and experience goals to guide the development of a novel usage data analytics tool. An interesting future research direction could be the privacy issues related to using logged usage data when only a limited number of users are logged.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Professional
The social impact of games on players and developers, software quality and game labour are cornerstones of a software game production model. Openness is, naturally, a significant factor for game evolution, overall acceptance and success. The authors focus on exploring these issues within the proprietary (closed) and non-proprietary (free/open) source types of software development. They identify developmental strengths and weaknesses for (i) game evolution, (ii) game developers and (iii) game players. The main focus of the paper is on development that is done after the first release of a game with the help of add-ons. In conclusion, suggestions are made for a more open and collaborative process model of game evolution that could benefit both types of development and all stakeholders involved. Such a process can integrate quality features from open and traditional development suitable for game construction.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In recent years, several countries have placed strong emphasis on openness, especially on open data, which can be shared and further processed into various applications. According to previous studies, the majority of open data providers are government organizations. This study presents two cases in which the data providers are companies. The cases are analyzed using a framework for open data based business models derived from the literature and several case studies. The analysis focuses on the beginning of the data value chain. As a result, the study highlights the role of data producers in the ecosystem, which has not been the focus of current frameworks.
INT=tie,"Mäkinen, T."
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Open Source Software (OSS) communities do not often invest in marketing strategies to promote their products in a competitive way. Even the home pages of the web portals of well-known OSS products show technicalities and details that are not relevant for a fast and effective evaluation of the product's qualities. As a consequence, final users and even developers who are interested in evaluating and potentially adopting an OSS product are often negatively impressed by the quality they perceive from the product's web portal and turn to proprietary software solutions, or fail to adopt OSS that may be useful in their activities. In this paper, we define OP2A, an evaluation model, and derive a checklist that OSS developers and webmasters can use to design (or improve) their web portals with all the contents that are expected to be of interest to OSS final users. We exemplify the use of the model by applying it to the Apache Tomcat web portal, and we apply the model to 47 websites of well-known OSS products to highlight the current deficiencies that characterize these web portals.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Open Source Software (OSS) communities do not often invest in marketing strategies to promote their products in a competitive way. Even the home pages of the web portals of well-known OSS products show technicalities and details that are not relevant for a fast and effective evaluation of the product's qualities. As a consequence, final users and even developers who are interested in evaluating and potentially adopting an OSS product are often negatively impressed by the quality they perceive from the product's web portal and turn to proprietary software solutions, or fail to adopt OSS that may be useful in their activities. In this paper, we define an evaluation model and derive a checklist that OSS developers and webmasters can use to design their web portals with all the contents that are expected to be of interest to OSS final users. We exemplify the use of the model by applying it to the Apache Tomcat web portal and we apply the model to 22 well-known OSS portals.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In-band full-duplex (FD) operation can be regarded as one of the greatest discoveries in civilian/commercial wireless communications so far in this century. The concept is significant because it can as much as double the spectral efficiency of wireless data transmission by exploiting the new-found capability for simultaneous transmission and reception (STAR) that is facilitated by advanced self-interference cancellation (SIC) techniques. As the first of its kind, this paper surveys the prospects of exploiting the emerging FD radio technology in military communication applications as well. In addition to spectrally efficient two-way data transmission, the STAR capability could give a major technical advantage for armed forces by allowing their radio transceivers to conduct electronic warfare at the same time when they are also receiving or transmitting information signals at the same frequency band. After providing a detailed introduction to FD transceiver architectures and SIC requirements in military communications, this paper outlines and analyzes some potential defensive and offensive applications of the STAR capability.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The physical location of data in cloud storage is an increasingly urgent problem. In a short time, it has evolved from the concern of a few regulated businesses to an important consideration for many cloud storage users. One of the characteristics of cloud storage is the fluid transfer of data both within and among the data centres of a cloud provider. However, this has weakened the guarantees with respect to control over data replicas, protection of data in transit and the physical location of data. This paper addresses the lack of reliable solutions for data placement control in cloud storage systems. We analyse the currently available solutions and identify their shortcomings. Furthermore, we describe a high-level architecture for a trusted, geolocation-based mechanism for data placement control in distributed cloud storage systems, which is the basis of ongoing work to define the detailed protocol and a prototype of such a solution. This mechanism aims to provide granular control over the capabilities of tenants to access data placed on geographically dispersed storage units comprising the cloud storage.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
An LTS operator can be constructed from a set of LTS operators up to an equivalence if and only if there is an LTS expression that only contains operators from the set and whose result is equivalent to the result of the operator. In this publication this idea is made precise in the context where each LTS has an alphabet of its own and the operators may depend on the alphabets. Then the extent to which LTS operators are constructible is studied. Most, if not all, established LTS operators have the property that each trace of the result arises from the execution of no more than one trace of each of its argument LTSs, and similarly for infinite traces. All LTS operators that have this property and satisfy some other rather weak regularity properties can be constructed from parallel composition and hiding up to the equivalence that compares the alphabets, traces, and infinite traces of the LTSs. Furthermore, a collection of other miscellaneous constructibility and unconstructibility results is presented.
Research output: Contribution to journal › Article › Scientific › peer-review
Web crawlers are essential to many Web applications, such as Web search engines, Web archives, and Web directories, which maintain Web pages in their local repositories. In this paper, we study the problem of crawl scheduling that biases crawl ordering toward important pages. We propose a set of crawling algorithms for effective and efficient crawl ordering by prioritizing important pages with the well-known PageRank as the importance metric. In order to score URLs, the proposed algorithms utilize various features, including partial link structure, inter-host links, page titles, and topic relevance. We conduct a large-scale experiment using publicly available data sets to examine the effect of each feature on crawl ordering and evaluate the performance of many algorithms. The experimental results verify the efficacy of our schemes. In particular, compared with the representative RankMass crawler, the FPR-title-host algorithm reduces computational overhead by a factor as great as three in running time while improving effectiveness by 5% in cumulative PageRank.
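The general shape of such prioritized crawling can be sketched as a best-first loop over a scored frontier; `fetch`, `extract_links` and `score` are placeholders for caller-supplied components, and the scoring used by the paper's algorithms (e.g. FPR-title-host) is not reproduced here.

```python
import heapq

def crawl(seed_urls, fetch, extract_links, score, budget=100):
    """Best-first crawl ordering: always fetch the highest-scoring known URL.

    `score(url, partial_inlinks)` is the pluggable prioritizer into which
    PageRank-style estimates, inter-host links, titles or topic relevance
    could be fed. Priorities of already-queued URLs are not refreshed in
    this simplified sketch.
    """
    inlinks = {u: 0 for u in seed_urls}
    frontier = [(-score(u, 0), u) for u in seed_urls]
    heapq.heapify(frontier)
    seen, crawled = set(seed_urls), []
    while frontier and len(crawled) < budget:
        _, url = heapq.heappop(frontier)
        crawled.append(url)
        for link in extract_links(fetch(url)):
            inlinks[link] = inlinks.get(link, 0) + 1
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-score(link, inlinks[link]), link))
    return crawled
```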
Research output: Contribution to journal › Article › Scientific › peer-review
While single-view human action recognition has attracted considerable research attention in the last three decades, multi-view action recognition is still a less explored field. This paper provides a comprehensive survey of multi-view human action recognition approaches. The approaches are reviewed following an application-based categorization: methods are categorized based on their ability to operate with a fixed or an arbitrary number of cameras. Finally, benchmark databases frequently used for the evaluation of multi-view approaches are briefly described.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Visible light communication (VLC) is a recently proposed paradigm of optical wireless communication in which visible electromagnetic radiation is used for data transmission. The visible part of the spectrum occupies the frequency range from 400 THz to 800 THz, which is 10,000 times greater than the radio frequency (RF) band. Its exceptional characteristics therefore render it a promising solution to support and complement traditional RF communication systems, and to overcome the currently witnessed scarcity of radio spectrum resources. To this end, there has been rapidly growing interest in multi-user processing techniques for VLC in the last few years. Motivated by this, in this paper we present a comprehensive and up-to-date survey on the integration of multiple-input multiple-output systems, multi-carrier modulations and multiple access techniques in the context of VLC.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
We construct multidimensional interpolating tensor product multiresolution analyses (MRAs) of the function spaces $C_0(\mathbb{R}^n, K)$, $K = \mathbb{R}$ or $K = \mathbb{C}$, consisting of real or complex valued functions on $\mathbb{R}^n$ vanishing at infinity, and of the function spaces $C_u(\mathbb{R}^n, K)$ consisting of bounded and uniformly continuous functions on $\mathbb{R}^n$. We also construct an interpolating dual MRA for both of these spaces. The theory of the tensor products of Banach spaces is used. We generalize the Besov space norm equivalence from the one-dimensional case to our n-dimensional construction.
Research output: Contribution to journal › Article › Scientific › peer-review
In this paper, we present a novel method for multidimensional sequence classification. We propose a novel sequence representation based on the fuzzy distances of a sequence from optimal representative signal instances, called statemes. We also propose a novel modified clustering discriminant analysis algorithm that minimizes the adopted criterion with respect to both the data projection matrix and the class representation, leading to an optimal discriminant sequence class representation in a low-dimensional space. Based on this representation, simple classification algorithms, such as the nearest subclass centroid, provide high classification accuracy. A three-step iterative optimization procedure for choosing the statemes, the optimal discriminant subspace and the optimal sequence class representation in the final decision space is proposed. The classification procedure is fast and accurate. The proposed method has been tested on a wide variety of multidimensional sequence classification problems, including handwritten character recognition, time series classification and human activity recognition, providing very satisfactory classification results.
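A minimal sketch of the representation idea follows, with the statemes taken as given (e.g. obtained from plain clustering) and the discriminant-subspace optimization omitted; function names and the fuzziness parameter are illustrative assumptions.

```python
import numpy as np

def fuzzy_memberships(frames, statemes, m=2.0):
    """FCM-style fuzzy memberships of each frame (row) to each stateme (row)."""
    frames, statemes = np.asarray(frames, float), np.asarray(statemes, float)
    d = np.linalg.norm(frames[:, None, :] - statemes[None, :, :], axis=2) + 1e-9
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

def sequence_representation(frames, statemes):
    """Represent a variable-length sequence by the mean membership vector of
    its frames, giving a fixed-length vector regardless of duration."""
    return fuzzy_memberships(frames, statemes).mean(axis=0)

def nearest_centroid_label(x, centroids):
    """Classify a sequence representation by its nearest (sub)class centroid;
    `centroids` maps a label to a representation vector."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
```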
Research output: Contribution to journal › Article › Scientific › peer-review
The advent of academic social networking sites (ASNS) has offered an unprecedented opportunity for scholars to obtain peer support online. However, little is known about the characteristics that make questions and answers popular among scholars on ASNS. Focused on the statements embedded in questions and answers, this study strives to explore the precursors that motivate scholars to respond, such as reading, following, or recommending a question or an answer. We collected empirical data from ResearchGate and coded the data via the act4teams coding scheme. Our analysis revealed a threshold effect—when the length of question description is over circa 150 words, scholars would quickly lose interest and thus not read the description. In addition, we found that questions, including positive action-oriented statements, are more likely to entice subsequent reads from other scholars. Furthermore, scholars prefer to recommend an answer with positive procedural statements or negative action-oriented statements.
Research output: Contribution to journal › Article › Scientific › peer-review
The decreasing prices of monitoring equipment have vastly increased the opportunities to utilize local data and data processing for wider, global, web-based monitoring purposes. The amount of data flowing through the different levels can be huge. The question now is how to handle this opportunity in a dynamic and secure way. The paper presents a new concept for managing data for monitoring through the Internet. The concept is based on the use of the Arrowhead Framework (AF) and the MIMOSA data model, and selected edge and gateway devices together with cloud computing opportunities. The concept enables the flexible and secure orchestration of run-time data sources and the utilization of computational services for various process and condition monitoring needs.
EXT="Barna, Laurentiu"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
As the capabilities of Unmanned Aerial Systems (UASs) evolve, their novel and demanding applications emerge, which require improved capacity and reduced latency. Millimeter-wave (mmWave) connections are particularly attractive for UASs due to their predominantly line-of-sight regime and better signal locality. In this context, understanding the interactions between the environment, the flight dynamics, and the beam tracking capabilities is a challenge that has not been resolved by today's simulation environments. In this work, we develop the means to model these crucial considerations as well as provide the initial insights into the performance of mmWave-based UAS communications made available with the use of our proposed platform.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Cyber-attacks have grown in importance to become a matter of national security. A growing number of states and organisations around the world have been developing defensive and offensive capabilities for cyber warfare. Security criteria are important tools for defensive capabilities of critical communications and information systems (CIS). Various criteria have been developed for designing, implementing and auditing CIS. However, the development of criteria is inadequately supported by currently available guidance. The relevant guidance is mostly related to criteria selection. The abstraction level of the guidance is high. This may lead to inefficient criteria development work. In addition, the resulting criteria may not fully meet their goals. To ensure efficient criteria development, the guidance should be supported with concrete level implementation guidelines. This paper proposes a model for efficient development of security audit criteria. The model consists of criteria design goals and concrete implementation guidelines to achieve these goals. The model is based on the guidance given by ISACA and on the criteria development work by FICORA, the Finnish Communications Regulatory Authority. During the years 2008-2017, FICORA has actively participated in development and usage of three versions of Katakri, the Finnish national security audit criteria. The paper includes a case study that applies the model to existing security criteria. The case study covers a review of the criteria composed of the Finnish VAHTI-instructions. During the review, all supported design goals and implementation guidelines of the model were scrutinised. The results of the case study indicate that the model is useful for reviewing existing criteria. The rationale is twofold. First, several remarkable shortcomings were identified. Second, the identification process was time-efficient. The results also suggest that the model would be useful for criteria under development. Addressing the identified shortcomings during the development phase would have made the criteria more efficient, usable and understandable.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper presents a model-based design method and a corresponding new software tool, the HTGS Model-Based Engine (HMBE), for designing and implementing dataflow-based signal processing applications on multi-core architectures. HMBE provides complementary capabilities to HTGS (Hybrid Task Graph Scheduler), a recently-introduced software tool for implementing scalable workflows for high performance computing applications on compute nodes with high core counts and multiple GPUs. HMBE integrates model-based design approaches, founded on dataflow principles, with advanced design optimization techniques provided in HTGS. This integration contributes to (a) making the application of HTGS more systematic and less time consuming, (b) incorporating additional dataflow-based optimization capabilities with HTGS optimizations, and (c) automating significant parts of the HTGS-based design process using a principled approach. In this paper, we present HMBE with an emphasis on the model-based design approaches and the novel dynamic scheduling techniques that are developed as part of the tool. We demonstrate the utility of HMBE via two case studies: an image stitching application for large microscopy images and a background subtraction application for multispectral video streams.
Research output: Contribution to journal › Article › Scientific › peer-review
There are two simultaneous transformative changes occurring in education: the use of mobile and tablet devices for accessing educational content, and the rise of the MOOCs. Happening independently and in parallel are significant advances in interaction technologies through smartphones and tablets, and the rise in the use of social-media and social-network analytics in several domains. Given the extent of personal context that is available on the mobile device, how can the education experience be personalised, made social, and tailored to the cultural context of the learner? The goal of this proposal is twofold: (a) to understand the usage and student behaviour in this new environment (MOOCs and mobile devices) and (b) to design experiments and implement them to make these new tools more effective by tailoring them to the individual student's personal, social and cultural settings and preferences.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Today, the number of interconnected Internet of Things (IoT) devices is growing tremendously followed by an increase in the density of cellular base stations. This trend has an adverse effect on the power efficiency of communication, since each new infrastructure node requires a significant amount of energy. Numerous enablers are already in place to offload the scarce cellular spectrum, thus allowing utilization of more energy-efficient short-range radio technologies for user content dissemination, such as moving relay stations and network-assisted direct connectivity. In this work, we contribute a new mathematical framework aimed at analyzing the impact of network offloading on the probabilistic characteristics related to the quality of service and thus helping relieve the energy burden on infrastructure network deployments.
Research output: Contribution to journal › Article › Scientific › peer-review
Startups operate with small resources under time pressure. Thus, building minimal product versions to test and validate ideas has emerged as a way to avoid the wasteful creation of complicated products that may prove unsuccessful in the market. Often, the design of these early product versions needs to be done fast and with little advance information from end-users. In this paper, we introduce the Minimum Viable User eXperience (MVUX), which aims at providing users a good enough user experience already in the early, minimal versions of the product. MVUX enables communication of the envisioned product value and the gathering of meaningful feedback, and it can promote positive word of mouth. To understand what MVUX consists of, we conducted an interview study with 17 entrepreneurs from 12 small startups. The main elements of MVUX recognized are Attractiveness, Approachability, Professionalism, and Selling the Idea. We present the structured framework and the elements’ contributing qualities.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Measuring the impact of transport projects in smart cities can be expensive and time-consuming. One challenge in measuring the effect of these projects is that the impacts are poorly quantified or are not always immediately tangible. Due to the nature of transport projects, it is often difficult to show results in the short term, because much of the effort is invested in changing attitudes and behaviour regarding the mobility choices of city inhabitants. This paper presents a methodology that was developed to evaluate and define city transport projects for increasing energy efficiency. The main objective of this methodology is to help city authorities improve the energy efficiency of the city by defining strategies and taking actions in the transportation domain. In order to define it, a review of current methodologies for measuring the impact of energy efficiency projects was performed. The defined energy efficiency methodology provides a standard structure for the evaluation process, making sure that each project is evaluated against its own goals and in as much detail as the level of investment requires. An implementation of the first step of this methodology in a smart city is included in order to evaluate the implementation phase of the defined process.
AUX=ase,"Mantilla R., M. Fernanda"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In the field of cryptography engineering, implementation-based attacks are a major concern due to their proven feasibility. Fault injection is one attack vector, nowadays a major research line. In this paper, we present how a memory tampering-based fault attack can be used to severely limit the output space of binary GCD based modular inversion algorithm implementations. We frame the proposed attack in the context of ECDSA showing how this approach allows recovering the private key from only one signature, independent of the key size. We analyze two memory tampering proposals, illustrating how this technique can be adapted to different implementations. Besides its application to ECDSA, it can be extended to other cryptographic schemes and countermeasures where binary GCD based modular inversion algorithms are employed. In addition, we describe how memory tampering-based fault attacks can be used to mount a previously proposed fault attack on scenarios that were initially discarded, showing the importance of including memory tampering attacks in the frameworks for analyzing fault attacks and their countermeasures.
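For reference, a plain (unprotected) binary extended-GCD modular inversion of the kind targeted by the attack is sketched below, following the textbook formulation (e.g. Hankerson et al.); it is not the paper's target implementation and contains no countermeasures.

```python
def binary_mod_inverse(a, p):
    """Binary extended-GCD modular inversion: returns a^{-1} mod p (p odd, gcd(a,p)=1).

    This is the classic unprotected routine; the paper shows how tampering with
    the memory of such an implementation can shrink its output space and leak
    an ECDSA private key from a single signature.
    """
    u, v = a % p, p
    x1, x2 = 1, 0
    while u != 1 and v != 1:
        while u % 2 == 0:
            u //= 2
            x1 = x1 // 2 if x1 % 2 == 0 else (x1 + p) // 2
        while v % 2 == 0:
            v //= 2
            x2 = x2 // 2 if x2 % 2 == 0 else (x2 + p) // 2
        if u >= v:
            u, x1 = u - v, x1 - x2
        else:
            v, x2 = v - u, x2 - x1
    return x1 % p if u == 1 else x2 % p

# Quick self-check with the NIST P-256 prime and an arbitrary nonzero value.
p = 0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFF
k = 0x1234567890ABCDEF
assert (k * binary_mod_inverse(k, p)) % p == 1
```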
Research output: Contribution to journal › Article › Scientific › peer-review
In recent years, parameterized dataflow has evolved as a useful framework for modeling synchronous and cyclo-static graphs in which arbitrary parameters can be changed dynamically. Parameterized dataflow has proven to have significant expressive power for managing dynamics of DSP applications in important ways. However, efficient hardware synthesis techniques for parameterized dataflow representations are lacking. This paper addresses this void; specifically, the paper investigates efficient field programmable gate array (FPGA)-based implementation of parameterized cyclo-static dataflow (PCSDF) graphs. We develop a scheduling technique for throughput-constrained minimization of dataflow buffering requirements when mapping PCSDF representations of DSP applications onto FPGAs. The proposed scheduling technique is integrated with an existing formal schedule model, called the generalized schedule tree, to reduce schedule cost. To demonstrate our new, hardware-oriented PCSDF scheduling technique, we have designed a real-time base station emulator prototype based on a subset of long-term evolution (LTE), which is a key cellular standard.
Research output: Contribution to journal › Article › Scientific › peer-review
The target of this article is to analyze the impact of a transition from a cellular frequency band, i.e. 2.1 GHz, to a millimeter wave (mmWave) frequency band, i.e. 28 GHz. A three-dimensional ray tracing tool, "sAGA", was used to evaluate the performance of the macro cellular network in an urban/dense-urban area of the city of Helsinki. A detailed analysis of the user experience in terms of signal strength and signal quality for outdoor and indoor users is presented. Indoor users at different floors are studied separately in this paper. It was found that, in spite of assuming a high system gain at 28 GHz, the mean received signal power is reduced by almost 16.5 dB compared with transmission at 2.1 GHz. However, the SINR is only marginally changed at the higher frequency. Even with a 200 MHz system bandwidth at 28 GHz, no substantial change is witnessed in the signal quality for the outdoor and upper-floor indoor users. However, the users at the lower floors show some signs of degradation in the received signal quality with the 200 MHz bandwidth. Moreover, it is also emphasized that mobile operators should take advantage of the unutilized spectrum in the mmWave bands. In short, this paper highlights the potential and the gain of mmWave communications.
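The reported gap can be sanity-checked against the free-space path-loss formula: moving from 2.1 GHz to 28 GHz adds 20·log10(28/2.1) ≈ 22.5 dB of free-space loss regardless of distance, so a mean drop of about 16.5 dB is plausible once the higher assumed system gain at 28 GHz is taken into account (the paper's figures come from ray tracing, not from this formula).

```python
import math

def fspl_db(freq_hz, dist_m):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / 299_792_458.0)

d = 200.0                                               # an arbitrary macro-cell distance
print(round(fspl_db(28e9, d) - fspl_db(2.1e9, d), 1))   # 22.5 dB, independent of d
```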
Research output: Contribution to journal › Article › Scientific › peer-review
Wireless sensor networks (WSNs) are being deployed at an escalating rate for various application fields. The ever growing number of application areas requires a diverse set of algorithms with disparate processing needs. WSNs also need to adapt to prevailing energy conditions and processing requirements. The preceding reasons rule out the use of a single fixed design. Instead, a general purpose design that can rapidly be adapted to different conditions and requirements is desired. In lieu of the traditional inflexible wireless sensor node consisting of a separate micro-controller, radio transceiver, sensor array and energy storage, we propose a unified, rapidly reconfigurable miniature sensor node, implemented with a transport triggered architecture processor on a low-power Flash FPGA. To our knowledge, this is the first study of its kind. The proposed approach does not concentrate solely on energy efficiency; a high emphasis is also put on the ease of development. Comparisons of power consumption and silicon area usage are performed for solutions implemented using our novel rapid design approach for wireless sensor nodes. The comparison covers 16-bit fixed point, 16-bit floating point and 32-bit floating point implementations. The implemented processors and algorithms are intended for rolling bearing condition monitoring, but can be fully extended to other applications as well.
Research output: Contribution to journal › Article › Scientific › peer-review
This paper presents a lossless compression method that separately compresses the vessels and the remaining part of the eye fundus in retinal images. Retinal images contain valuable information for several distinct medical diagnosis tasks, where the features of interest can be, e.g., the cotton wool spots in the eye fundus, or the volume of the vessels over concentric circular regions. It is assumed that one of the existing segmentation methods has provided the segmentation of the vessels. The proposed compression method losslessly transmits the segmentation image, and then transmits the eye fundus part, or the vessels image, or both, conditional on the vessel segmentation. The independent compression of the two color image segments is performed using a sparse predictive method. Experiments are provided on a database of retinal images containing manual and estimated segmentations. The codelength of encoding the overall image, including the segmentation and the image segments, proves to be smaller than the codelength for the entire image obtained by JPEG2000 and other publicly available compressors.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper offers blueprints for and reports upon three years of experience from teaching the university course “Lean Software Startup” for information technology and economics students. The course aims to give a learning experience on ideation/innovation and subsequent product and business development using the lean startup method. The course educates the students in software business, entrepreneurship, teamwork and the lean startup method. The paper describes the pedagogical design and practical implementation of the course in sufficient detail to serve as an example of how entrepreneurship and business issues can be integrated into a software engineering curriculum. The course is evaluated through learning diaries and a questionnaire, as well as the primary teacher’s learnings from the three course instances. We also examine the course in the context of CDIO and show its connection points to this broader engineering education framework. Finally, we discuss the challenges and opportunities of engaging students with different backgrounds in a hands-on entrepreneurial software business course.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper introduces a novel multicore scheduling method that leverages a parameterized dataflow Model of Computation (MoC). This method, which we have named Just-In-Time Multicore Scheduling (JIT-MS), aims to efficiently schedule Parameterized and Interfaced Synchronous DataFlow (PiSDF) graphs on multicore architectures. The method exploits features of PiSDF to find locally static regions that exhibit predictable communications. This paper uses a multicore signal processing benchmark to demonstrate that the JIT-MS scheduler can exploit more parallelism than a conventional multicore task scheduler based on task creation and dispatch. Experimental results of the JIT-MS on an 8-core Texas Instruments Keystone Digital Signal Processor (DSP) are compared with those obtained from the OpenMP implementation provided by Texas Instruments. The results show latency improvements of up to 26% for multicore signal processing systems.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
We investigate how IT capability leads to more interaction-oriented business practices, both through inter-organizational systems (IOS) and social media (SM), and how these practices further lead to marketing effectiveness and firm success. After analyzing the data collected from manufacturers (N=504), we find that (1) IT capability has a significant positive effect on both IOS-enabled and SM-enabled interaction practices; (2) IOS-enabled interaction practice has significant positive effects on both marketing performance and financial performance, while SM-enabled interaction practice only has a significant positive effect on marketing performance; (3) both IOS-enabled interaction practice and SM-enabled interaction practice partly mediate the positive influence of IT capability on marketing performance and financial performance; (4) marketing performance partly mediates the positive impact of IOS-enabled interaction practice and fully mediates the positive impact of SM-enabled interaction practice on financial performance.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The boundary between hedonic and utilitarian information systems has become increasingly blurred during recent years due to the rise of developments such as gamification. Therefore, users may perceive the purpose of the same system differently, ranging from pure utility to pure play. However, in the literature that addresses why people adopt and use information systems, the relationship between users’ conception of the purpose of the system and their experience and use of it has not yet been investigated. Therefore, in this study we investigate the interaction effects between users’ utility-fun conceptions of the system and the perceived enjoyment and usefulness from its use, on their post-adoption intentions (continued use, discontinued use, and contribution). We employ survey data collected among users (N = 562) of a gamified crowdsourcing application that represents a system affording both utility and leisure use potential. The results show that the more fun-oriented users conceive the system to be, the more enjoyment affects continued and discontinued use intentions, and the less ease of use affects the continued use intention. Therefore, users’ conceptions of the system prove to be an influential aspect of system use and should particularly be considered when designing modern multi-purposed systems such as gamified information systems.
Research output: Contribution to journal › Article › Scientific › peer-review
Interpretation of ambiguous images perceived visually and relying on supplementary information coordinated with pictorial cues was selected to evaluate the usefulness of the StickGrip device. The ambiguous visual models were achromatic images composed of only two overlapping ellipses with various brightness gradients and relative positions of the components. Inspection of images with the tablet pen enhanced with the pencil-like visual pointer decreased the discrepancy between their actual interpretation and the expected decision by only about 2.6 for concave and by about 1.3 for convex models. Interpretation of the convex images, ambiguous with their inverted concave counterparts, inspected with the StickGrip device achieved three times less discrepancy between the decisions made and expected. Interpretation of the concave images versus their inverted convex counterparts was five times more accurate with the use of the StickGrip device. We conclude that the kinesthetic and proprioceptive cues delivered by the StickGrip device had a positive influence on decision-making under ambiguous conditions.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In the Internet of Things (IoT), machines and devices are equipped with sensors and Internet connections that make it possible to collect data and store it in cloud services. In vocational education and training, the stored data can be used to improve decision-making processes. With the help of this data, a teacher can also get a more accurate picture of the current state of the education environment than before. IoT should be integrated into vocational education and training because it will help to achieve important educational objectives. IoT is able to promote students' preparation for working life, the safety of the education environment, self-directed learning, and effective learning. It can also improve the efficient use of educational resources. In addition, IoT-based solutions should be introduced so that students have a vision of new types of IoT skill requirements before they enter the labour market. In this paper, we present IoT-related aspects that help to meet the above-mentioned educational objectives. By implementing a pilot project, we aim to concretise IoT's possibilities in the education sector.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The agricultural sector in Finland has been lagging behind in digital development. Development has long been based on increasing production by investing in larger machines. Over the past decade, change has begun to take place in the direction of digitalization. One of the challenges is that different manufacturers are trying to get farmers' data on their own closed cloud services. In the worst case, farmers may lose an overall view of their farms and opportunities for deeper data analysis because their data is located in different services. The goals and previously studied challenges of the 'MIKÄ DATA' project are described in this research. This project will build an intelligent data service for farmers, which is based on the Oskari platform. In the 'Peltodata' service, farmers can see their own field data and many other data sources layer by layer. The project is focused on the study of machine learning techniques to develop harvest yield prediction and find out the correlation between many data sources. The 'Peltodata' service will be ready at the end of 2019.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
As the variety of off-the-shelf processors expands, traditional implementation methods of systems for digital signal processing and communication are no longer adequate to achieve design objectives in a timely manner. Designers need to easily track the changes in computing platforms and apply them efficiently while reusing legacy code and optimized libraries that target specialized features in single processing units. In this context, we propose an integration workflow to schedule and implement Software Defined Radio (SDR) protocols that are developed using the GNU Radio environment on heterogeneous multiprocessor platforms. We show how to utilize Single Instruction Multiple Data (SIMD) units provided in Graphics Processing Units (GPUs) along with vector accelerators implemented in General Purpose Processors (GPPs). We augment a popular SDR framework (i.e., GNU Radio) with a library that seamlessly allows offloading of algorithm kernels mapped to the GPU without changing the original protocol description. Experimental results show how our approach can be used to efficiently explore design spaces for SDR system implementation, and examine the overhead of the integrated backend (software component) library.
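To make the offloading idea concrete, the sketch below shows a hypothetical GNU Radio Python block whose work function stands in for a kernel that could be dispatched to a GPU or SIMD library; the block name, constant and NumPy body are illustrative and are not part of the library described in the paper.

```python
# A hedged sketch of a drop-in GNU Radio block through which a kernel could be
# offloaded without touching the flowgraph description. The computation below is
# plain NumPy; in a real setup it would dispatch to a GPU or SIMD-accelerated library.
import numpy as np
from gnuradio import gr

class offloaded_multiply_const(gr.sync_block):
    """Multiplies a complex stream by a constant; stands in for an offloaded kernel."""
    def __init__(self, k=2.0):
        gr.sync_block.__init__(self,
                               name="offloaded_multiply_const",
                               in_sig=[np.complex64],
                               out_sig=[np.complex64])
        self.k = np.complex64(k)

    def work(self, input_items, output_items):
        # Placeholder for the offloaded computation (e.g. a CUDA or SIMD kernel).
        output_items[0][:] = self.k * input_items[0]
        return len(output_items[0])
```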
Research output: Contribution to journal › Article › Scientific › peer-review
Dataflow modeling offers a myriad of tools for designing and optimizing signal processing systems. A designer is able to take advantage of dataflow properties to effectively tune the system in connection with functionality and different performance metrics. However, a disparity in the specification of dataflow properties and the final implementation can lead to incorrect behavior that is difficult to detect. This motivates the problem of ensuring consistency between dataflow properties that are declared or otherwise assumed as part of dataflow-based application models, and the dataflow behavior that is exhibited by implementations that are derived from the models. In this paper, we address this problem by introducing a novel dataflow validation framework (DVF) that is able to identify disparities between an application’s formal dataflow representation and its implementation. DVF works by instrumenting the implementation of an application and monitoring the instrumentation data as the application executes. This monitoring process is streamlined so that DVF achieves validation without major overhead. We demonstrate the utility of our DVF through design and implementation case studies involving an automatic speech recognition application, a JPEG encoder, and an acoustic tracking application.
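A stripped-down illustration of the validation idea (not the DVF itself) follows: an actor's declared production/consumption rates are compared against the token counts observed when its implementation runs, and any mismatch is reported.

```python
# Minimal, hypothetical consistency check between declared dataflow rates and
# the behaviour observed from an instrumented actor implementation.
class RateMonitor:
    def __init__(self, actor_name, declared_in, declared_out):
        self.name = actor_name
        self.declared_in, self.declared_out = declared_in, declared_out

    def check(self, consumed, produced):
        ok = (consumed == self.declared_in and produced == self.declared_out)
        if not ok:
            print(f"[{self.name}] rate mismatch: declared ({self.declared_in} in, "
                  f"{self.declared_out} out), observed ({consumed} in, {produced} out)")
        return ok

def downsample_by_2(tokens):
    """Toy actor implementation whose behaviour is being validated."""
    return tokens[::2]

monitor = RateMonitor("downsample_by_2", declared_in=2, declared_out=1)
inputs = [1, 2, 3, 4]
outputs = downsample_by_2(inputs)
firings = len(inputs) // 2                        # declared: 2 in, 1 out per firing
monitor.check(consumed=len(inputs) // firings, produced=len(outputs) // firings)
```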
Research output: Contribution to journal › Article › Scientific › peer-review
Organizations often adopt enterprise architecture (EA) when planning how best to develop their information technology (IT) or businesses, for strategic management, or generally for managing change initiatives. This variety of different uses affects many stakeholders within and between organizations. Because stakeholders have dissimilar backgrounds, positions, assumptions, and activities, they respond differently to changes and the potential problems that emerge from those changes. This situation creates contradictions and conflicts between stakeholders that may further influence project activities and ultimately determine how EA is adopted. In this paper, we examine how institutional pressures influence EA adoption. Based on a qualitative study of two cases, we show how regulative, normative, and cognitive pressures influence stakeholders’ activities and behaviors during the process of EA adoption. Our contribution thus lies in identifying the roles of institutional pressures in different phases of the EA adoption process and how they change over time. The results provide insights into EA adoption and the process of institutionalization, which help to explain emergent challenges in EA adoption.
EXT="Dang, Duong"
Research output: Contribution to journal › Article › Scientific › peer-review
Location-based and geo-context-aware services form a new, fast-growing domain of commercially successful ICT solutions. These services play a key role in IoT scenarios and in the development of smart spaces and proactive solutions. One of the most attractive application areas is e-Tourism. More people can afford travelling, and over the last few decades we have seen continuous growth in tourist activity. At the same time, we see a huge increase in demand, both in the quantity and quality of tourist services. Many experts foresee that this growth can no longer be met by applying traditional approaches. Similarly to the change in ticket and hotel booking, it is expected that we will soon witness a major transformation of the whole industry towards an e-Tourism-driven market, where the roles of traditional service providers, e.g., tourist agents and guides, will disappear or change significantly. The Internet of Things (IoT) is an integral part of the Future Internet ecosystem and has a major impact on the development of e-Tourism services. IoT provides an infrastructure to uniquely identify and link physical objects with virtual representations. As a result, any physical object can have a virtual reflection in the service space. This gives an opportunity to replace actions on physical objects by operations on their virtual reflections, which is faster, cheaper and more comfortable for the user. In this paper we summarize our research in the field, share ideas for innovative e-Tourism services and present the Geo2Tag LBS platform that allows easy and fast development of such services.
EXT="Balandin, Sergey"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Task-based information access is a significant context for studying information interaction and for developing information retrieval (IR) systems. Molecular medicine (MM) is an information-intensive and rapidly growing task domain, which aims at providing new approaches to the diagnosis, prevention and treatment of various diseases. The development of bioinformatics databases and tools has led to an extremely distributed information environment. There are numerous generic and domain-specific tools and databases available for online information access. This renders MM a fruitful context for research in task-based IR. The present paper examines empirically task-based information access in MM and analyzes task processes as contexts of information access and interaction, the integrated use of resources in information access, and the limitations of (simple server-side) log analysis in understanding information access, retrieval sessions in particular. We shed light on the complexity of between-systems interaction. The findings suggest that system development should not be done in isolation, as there is considerable interaction between systems in real-world use. We also classify system-level strategies of information access integration that can be used to reduce the amount of manual system integration by task performers.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Due to the networked nature of modern industrial business, repeated information exchange activities are necessary. Unfortunately, information exchange is both laborious and expensive with the current communication media, which causes errors and delays. To increase the efficiency of communication, this study introduces an architecture to exchange information in a digitally processable manner in industrial ecosystems. The architecture builds upon commonly agreed business practices and data formats, and an open consortium and information mediators enable it. Following the architecture, a functional prototype has been implemented for a real industrial scenario. This study has its focus on the technical information of equipment, but the architecture concept can also be applied in financing and logistics. Therefore, the concept has potential to completely reform industrial communication.
Research output: Contribution to journal › Article › Scientific › peer-review
Context: Software companies seek to gain benefit from agile development approaches in order to meet evolving market needs without losing their innovative edge. Agile practices emphasize frequent releases with the help of an automated toolchain from code to delivery. Objective: We investigate which tools are used in software delivery, what the reasons are for omitting certain parts of the toolchain, and what implications toolchains have on how rapidly software gets delivered to customers. Method: We present a multiple-case study of the toolchains currently in use in Finnish software-intensive organizations interested in improving their delivery frequency. We conducted qualitative semi-structured interviews in 18 case organizations from various software domains. The interviewees were key representatives of their organization with respect to delivery activities. Results: Commodity tools, such as version control and continuous integration, were used in almost every organization. Modestly used tools, such as UI testing and performance testing, were more distinctly missing from some organizations. Uncommon tools, such as artifact repositories and acceptance testing, were used only in a minority of the organizations. Tool usage is affected by the state of current workflows, manual work and the relevancy of tools. Organizations whose toolchains were more automated and contained fewer manual steps were able to deploy software more rapidly. Conclusions: There is variety in the need for tool support in different development steps as there are domain-specific differences in the goals of the case organizations. Still, a well-founded toolchain supports speedy delivery of new software.
Research output: Contribution to journal › Article › Scientific › peer-review
Efficient sample rate conversion is of widespread importance in modern communication and signal processing systems. Although many efficient kinds of polyphase filterbank structures exist for this purpose, they are mainly geared toward serial, custom, dedicated hardware implementation for a single task. There is, therefore, a need for more flexible sample rate conversion systems that are resource-efficient, and provide high performance. To address these challenges, we present in this paper an all-software-based, fully parallel, multirate resampling method based on graphics processing units (GPUs). The proposed approach is well-suited for wireless communication systems that have simultaneous requirements on high throughput and low latency. Utilizing the multidimensional architecture of GPUs, our design allows efficient parallel processing across multiple channels and frequency bands at baseband. The resulting architecture provides flexible sample rate conversion that is designed to address modern communication requirements, including real-time processing of multiple carriers simultaneously.
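As a CPU-side reference for the operation being accelerated, the sketch below performs a rational sample-rate conversion with SciPy's polyphase resampler; the sample rates, tone and conversion factor are illustrative, and the GPU design parallelizes the same polyphase structure across channels and bands.

```python
# A small CPU reference for rational sample-rate conversion by a factor up/down,
# using scipy's polyphase filterbank resampler.
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 30.72e6, 61.44e6          # example rates (assumed, not from the paper)
up, down = 2, 1                            # rational conversion factor fs_out/fs_in

t = np.arange(4096) / fs_in
x = np.exp(2j * np.pi * 1e6 * t).astype(np.complex64)   # 1 MHz complex tone

y = resample_poly(x, up, down)             # polyphase anti-imaging filter + up/down
print(len(x), "->", len(y), "samples")     # 4096 -> 8192
```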
Research output: Contribution to journal › Article › Scientific › peer-review
The target of this paper is to analyze the impact of variation in the antenna radiation pattern on the performance of Single Path Multiple Access (SPMA) in an urban/dense-urban environment. For this study, an extended 3GPP antenna model and 3D building data from an urban area of Helsinki are used. The simulations are performed at 28 GHz using “sAGA”, a MATLAB-based 3D ray tracing tool. The variables considered for the series of simulations are the Front to Back Ratio (FBR), Side Lobe Level (SLL), and Half Power Beamwidth (HPBW) of an antenna in the horizontal and vertical planes. Network performance is compared in terms of metrics such as signal strength, SINR, and capacity. This paper also presents a spectral efficiency and power efficiency analysis. The performance of SPMA was found to be susceptible to changes in the antenna radiation pattern, and the simulation results show a significant impact of the radiation pattern on the capacity gain offered by SPMA. Interestingly, SPMA was found to be a fairly power-efficient solution with respect to the traditional macro cellular network approach. However, the level of power efficiency heavily depends upon the antenna beamwidth and other beam parameters.
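For readers unfamiliar with the parameterization, a 3GPP-style element pattern typically combines a horizontal and a vertical parabolic cut capped by side-lobe and maximum-attenuation levels. The sketch below reproduces that generic formula; the HPBW and attenuation values are placeholders, not the extended model used with the sAGA tool.

```python
# Generic 3GPP-style antenna element pattern, parameterized by horizontal/vertical
# HPBW and attenuation caps. Parameter values here are purely illustrative.
import numpy as np

def pattern_dB(phi_deg, theta_deg, hpbw_h=65.0, hpbw_v=10.0, sla_v=30.0, a_max=30.0):
    """Combined horizontal/vertical attenuation in dB (0 dB at boresight)."""
    a_h = -np.minimum(12.0 * (phi_deg / hpbw_h) ** 2, a_max)              # horizontal cut
    a_v = -np.minimum(12.0 * ((theta_deg - 90.0) / hpbw_v) ** 2, sla_v)   # vertical cut
    return -np.minimum(-(a_h + a_v), a_max)

print(pattern_dB(0.0, 90.0))     # boresight: 0 dB
print(pattern_dB(32.5, 90.0))    # half the horizontal HPBW off boresight: about -3 dB
```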
Research output: Contribution to journal › Article › Scientific › peer-review
Wireless standards are evolving rapidly due to the exponential growth in the number of portable devices and applications with high data rate requirements. Adaptable, software-based signal processing implementations for these devices can make the deployment of constantly evolving standards faster and less expensive. The flagship technology of the IEEE WLAN family, IEEE 802.11ac, aims at achieving very high throughputs in local area connectivity scenarios. This article presents a software-based implementation of the Multiple Input Multiple Output (MIMO) transmitter and receiver baseband processing conforming to the IEEE 802.11ac standard, which can achieve transmission bit rates beyond 1 Gbps. This work focuses on the physical layer frequency-domain processing. Various configurations, including 2×2 and 4×4 MIMO, are considered for the implementation. To utilize the available data- and instruction-level parallelism, a DSP core with vector extensions is selected as the implementation platform. Then, the feasibility of the presented software-based solution is assessed by studying the number of clock cycles and the power consumption of the different scenarios implemented on this core. Such Software Defined Radio based approaches can potentially offer more flexibility, high energy efficiency, reduced design effort and thus shorter time-to-market cycles in comparison with conventional fixed-function hardware methods.
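As a toy view of the per-subcarrier frequency-domain work such a receiver performs, the sketch below equalizes one 4×4 MIMO subcarrier with zero forcing; the channel, constellation and detector are illustrative assumptions rather than the implemented 802.11ac chain.

```python
# Toy frequency-domain MIMO detection on one subcarrier; zero forcing is used
# purely for illustration.
import numpy as np

rng = np.random.default_rng(7)
nss = 4                                        # 4x4 spatial streams

H = (rng.standard_normal((nss, nss)) + 1j * rng.standard_normal((nss, nss))) / np.sqrt(2)
s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], nss) / np.sqrt(2)   # QPSK symbols
noise = 0.01 * (rng.standard_normal(nss) + 1j * rng.standard_normal(nss))
y = H @ s + noise                              # received vector on one subcarrier

s_hat = np.linalg.pinv(H) @ y                  # zero-forcing equalization
print(np.round(s_hat, 2))
```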
Research output: Contribution to journal › Article › Scientific › peer-review
Context: Since its inception around 2010, gamification has become one of the top technology and software trends. However, gamification has also been regarded as one of the most challenging areas of software engineering. Beyond traditional software design requirements, designing gamification requires the command of disciplines such as (motivational/behavioral) psychology, game design, and narratology, making the development of gamified software a challenge for traditional software developers. Gamification software inhabits a finely tuned niche of software engineering that seeks both high functionality and engagement; beyond technical flawlessness, gamification has to motivate and affect users. Consequently, it has also been projected that most gamified software is doomed to fail. Objective: This paper seeks to advance the understanding of designing gamification and to provide a comprehensive method for developing gamified software. Method: We address the research problem via a design science research approach; firstly, by synthesizing the current body of literature on gamification design methods and by interviewing 25 gamification experts, producing a comprehensive list of design principles for developing gamified software. Secondly, and more importantly, we develop a detailed method for the engineering of gamified software based on the gathered knowledge and design principles. Finally, we conduct an evaluation of the artifacts via interviews with ten gamification experts and the implementation of the engineering method in a gamification project. Results: As results of the study, we present the method and key design principles for engineering gamified software. Based on the empirical and expert evaluation, the developed method was deemed comprehensive, implementable, complete, and useful. We deliver a comprehensive overview of gamification guidelines and shed novel insights into the nature of gamification development and design discourse. Conclusion: This paper takes the first steps towards a comprehensive method for gamified software engineering.
Research output: Contribution to journal › Article › Scientific › peer-review
Context. In recent years, smells, also referred to as bad smells, have gained popularity among developers. However, it is still not clear how harmful they are perceived to be from the developers’ point of view. Many developers talk about them, but only a few know what they really are, and even fewer really take care of them in their source code. Objective. The goal of this work is to understand the perceived criticality of code smells both in theory, when reading their description, and in practice. Method. We executed an empirical study as a differentiated external replication of two previous studies. The studies were conducted as surveys involving only highly experienced developers (63 in the first study and 41 in the second one). First, the perceived criticality was analyzed by presenting the descriptions of the smells; then different pieces of code infected by the smells were presented; and finally the developers’ ability to identify the smells in the analyzed code was tested. Results. To our knowledge, this is the largest study so far investigating the perception of code smells with professional software developers. The results show that developers are very concerned about code smells in theory, nearly always considering them as harmful or very harmful (17 out of 23 smells). However, when they were asked to analyze an infected piece of code, only a few infected classes were considered harmful and even fewer were considered harmful because of the smell. Conclusions. The results confirm our initial hypotheses that code smells are perceived as more critical in theory but not as critical in practice.
Research output: Contribution to journal › Article › Scientific › peer-review
In this paper, we propose a novel extension of the extreme learning machine (ELM) algorithm for single-hidden-layer feedforward neural network training that is able to incorporate subspace learning (SL) criteria in the optimization process used to calculate the network's output weights. The proposed graph embedded ELM (GEELM) algorithm is able to naturally exploit both intrinsic and penalty SL criteria that have been (or will be) designed under the graph embedding framework. In addition, we extend the proposed GEELM algorithm so that it can exploit SL criteria in arbitrary (even infinite) dimensional ELM spaces. We evaluate the proposed approach on eight standard classification problems and nine publicly available datasets designed for three problems related to human behavior analysis, i.e., the recognition of human face, facial expression, and activity. Experimental results denote the effectiveness of the proposed approach, since it outperforms other ELM-based classification schemes in all the cases.
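A hedged sketch of the general shape of such training follows: a random hidden mapping, then a closed-form solve for the output weights in which a graph Laplacian (standing in for an SL criterion) regularizes the hidden-layer representation. The exact GEELM objective, its criteria and its kernel extension are given in the paper; the constants, Laplacian and data below are illustrative.

```python
# Graph-regularized ELM-style training: random hidden layer, closed-form output weights.
import numpy as np

def geelm_like_train(X, T, L, n_hidden=200, c=1.0, lam=0.1, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                        # random hidden-layer outputs
    A = H.T @ H + lam * (H.T @ L @ H) + (1.0 / c) * np.eye(n_hidden)
    beta = np.linalg.solve(A, H.T @ T)            # output weights
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Tiny smoke test with a random graph Laplacian standing in for an SL criterion.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 10))
T = np.eye(3)[rng.integers(0, 3, 50)]             # one-hot targets for 3 classes
S = (rng.random((50, 50)) < 0.1).astype(float); S = (S + S.T) / 2
L = np.diag(S.sum(1)) - S                          # unnormalized graph Laplacian
W, b, beta = geelm_like_train(X, T, L)
print(predict(X, W, b, beta).shape)                # (50, 3)
```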
Research output: Contribution to journal › Article › Scientific › peer-review
In this paper we study the fault tolerance of gene networks. We assume single-gene knockouts and investigate the effect this kind of perturbation has on the communication between genes globally. For our study we use directed scale-free networks resembling gene networks, e.g., signaling or protein-protein interaction networks, and define a Markov process based on the network topology to model communication. This allows us to evaluate the spread of information in the network and, hence, detect differences in gene-gene communication due to single-gene knockouts asymptotically.
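The sketch below illustrates the modelling idea on a small random directed graph (an illustrative substitute for a scale-free gene network): a random-walk Markov chain is derived from the topology, its asymptotic distribution is computed before and after a single knockout, and the two are compared.

```python
# Topology-driven Markov chain before and after a single-node knockout.
import numpy as np

def transition_matrix(A):
    """Row-stochastic random-walk matrix; dangling nodes jump uniformly."""
    P = A.astype(float)
    dangling = (P.sum(1) == 0)
    P[dangling] = 1.0
    return P / P.sum(1, keepdims=True)

def asymptotic_distribution(P, steps=200):
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(steps):                 # power iteration as a simple approximation
        pi = pi @ P
    return pi

rng = np.random.default_rng(3)
n = 30
A = (rng.random((n, n)) < 0.08).astype(int)
np.fill_diagonal(A, 0)

baseline = asymptotic_distribution(transition_matrix(A))
knockout = 5                               # remove all edges of one gene
A_ko = A.copy(); A_ko[knockout, :] = 0; A_ko[:, knockout] = 0
perturbed = asymptotic_distribution(transition_matrix(A_ko))

print("L1 change in asymptotic spread:", np.abs(baseline - perturbed).sum())
```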
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
To help developers during Scrum planning poker, in our previous work we ran a case study on a Moonlight Scrum process to understand whether it is possible to introduce functional size metrics to improve estimation accuracy and to measure the accuracy of expert-based estimation. The results of this original study showed that expert-based estimations are more accurate than those obtained by means of models calculated with functional size measures. To validate the results and to extend them to plain Scrum processes, we replicated the original study twice, applying an exact replication to two plain Scrum development processes. The results of this replicated study show that the effort estimated by the developers is very accurate and more accurate than that obtained through functional size measures. In particular, SiFP and IFPUG Function Points have low predictive power and thus do not help to improve the estimation accuracy in Scrum.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper summarizes the results of the NATO STO IST Panel's Exploratory Team IST-ET-101. The team studied the full-duplex radio technology as an innovative solution to deal with the scarce and congested electromagnetic frequency spectrum, especially in the VHF and UHF bands. This scarcity is in strong contrast to the growing bandwidth requirements generally and particularly in the military domain. The success of future NATO operations relies more than ever on new real-time services going hand in hand with increased data throughputs as well as with robustness against and compatibility with electronic warfare. Therefore, future tactical communication and electronic warfare technologies must aim at exploiting the spectral resources to the maximum while at the same time providing NATO with an advantage in the tactical environment.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
We present a structural data set of the 20 proteinogenic amino acids and their amino-methylated and acetylated (capped) dipeptides. Different protonation states of the backbone (uncharged and zwitterionic) were considered for the amino acids as well as varied side chain protonation states. Furthermore, we studied amino acids and dipeptides in complex with divalent cations (Ca2+, Ba2+, Sr2+, Cd2+, Pb2+, and Hg2+). The database covers the conformational hierarchies of 280 systems in a wide relative energy range of up to 4 eV (390 kJ/mol), summing up to a total of 45,892 stationary points on the respective potential-energy surfaces. All systems were calculated on equal first-principles footing, applying density-functional theory in the generalized gradient approximation corrected for long-range van der Waals interactions. We show good agreement to available experimental data for gas-phase ion affinities. Our curated data can be utilized, for example, for a wide comparison across chemical space of the building blocks of life, for the parametrization of protein force fields, and for the calculation of reference spectra for biophysical applications.
Research output: Contribution to journal › Article › Scientific › peer-review
As it has evolved, the Internet has had to support a broadening range of networking technologies, business models and user interaction modes. Researchers and industry practitioners have realised that this trend necessitates a fundamental rethinking of approaches to network and service management. This has spurred significant research efforts towards developing autonomic network management solutions incorporating distributed self-management processes inspired by biological systems. Whilst significant advances have been made, most solutions focus on the management of single network domains and the optimisation of specific management or control processes therein. In this paper we argue that a networking infrastructure providing a myriad of loosely coupled services must inherently support federation of network domains and facilitate coordination of the operation of various management processes for mutual benefit. To this end, we outline a framework for federated management that facilitates the coordination of the behaviour of bio-inspired management processes. Using a case study relating to the distribution of IPTV content, we describe how Federal Relationship Managers realising our layered model of management federations can communicate to manage service provision across multiple application/storage/network providers. We outline an illustrative example in which storage providers are dynamically added to a federation to accommodate demand spikes, with appropriate content being migrated to those providers' servers under the control of a bio-inspired replication process.
Research output: Contribution to journal › Article › Scientific › peer-review
Farm detection using low-resolution satellite images is an important topic in digital agriculture. However, it has not received enough attention compared to high-resolution images. Although high-resolution images are more efficient for the detection of land cover components, the analysis of low-resolution images remains important due to the low-resolution repositories of past satellite images used for time-series analysis, free availability and economic concerns. The current paper addresses the problem of farm detection using low-resolution satellite images. In digital agriculture, farm detection has a significant role for key applications such as crop yield monitoring. Two main categories of object detection strategies are studied and compared in this paper. First, a two-step semi-supervised methodology is developed using traditional manual feature extraction and modelling techniques; the developed methodology uses the Normalized Difference Moisture Index (NDMI), Grey Level Co-occurrence Matrix (GLCM), 2-D Discrete Cosine Transform (DCT) and morphological features, together with Support Vector Machine (SVM) classifier modelling. In the second strategy, high-level features learnt from the massive filter banks of deep Convolutional Neural Networks (CNNs) are utilised. Transfer learning strategies are employed for pre-trained Visual Geometry Group Network (VGG-16) networks. Results show the superiority of the high-level features for the classification of farm regions.
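A hedged outline of the first strategy is sketched below: a per-region NDMI statistic feeds an SVM classifier. The data, labels and the reduction to only NDMI statistics are illustrative simplifications; the GLCM, DCT and morphological features and the VGG-16 branch are omitted.

```python
# Hand-crafted-feature branch in miniature: NDMI statistics per region + SVM.
import numpy as np
from sklearn.svm import SVC

def ndmi(nir, swir):
    """Normalized Difference Moisture Index per pixel."""
    return (nir - swir) / (nir + swir + 1e-9)

rng = np.random.default_rng(4)
n_regions = 200
nir = rng.uniform(0.2, 0.8, (n_regions, 32, 32))    # synthetic NIR reflectance patches
swir = rng.uniform(0.1, 0.7, (n_regions, 32, 32))   # synthetic SWIR reflectance patches

features = np.stack([ndmi(nir, swir).mean(axis=(1, 2)),
                     ndmi(nir, swir).std(axis=(1, 2))], axis=1)
labels = rng.integers(0, 2, n_regions)              # farm / non-farm (synthetic)

clf = SVC(kernel="rbf").fit(features[:150], labels[:150])
print("held-out accuracy on synthetic data:", clf.score(features[150:], labels[150:]))
```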
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Partial order methods alleviate state explosion by considering only a subset of actions in each constructed state. The choice of the subset depends on the properties that the method promises to preserve. Many methods have been developed, ranging from deadlock-preserving to CTL*-preserving and divergence-sensitive branching bisimilarity preserving. The less the method preserves, the smaller the state spaces it constructs. Fair testing equivalence unifies deadlocks with livelocks that cannot be exited and ignores the other livelocks. It is the weakest congruence that preserves whether or not the system may enter a livelock that it cannot leave. We prove that a method that was designed for trace equivalence also preserves fair testing equivalence. We demonstrate its effectiveness on a protocol with a connection and data transfer phase. This is the first practical partial order method that deals with a practical fairness assumption.
Research output: Contribution to journal › Article › Scientific › peer-review
Recent advances in Terrestrial Laser Scanner (TLS) technology, in terms of cost and flexibility, have consolidated it as an essential tool for the documentation and digitalization of Cultural Heritage. However, once the TLS data is used, it basically remains stored and left to waste. How can highly accurate and dense point clouds (of the built heritage) be processed for reuse, especially to engage a broader audience? This paper aims to answer this question through a channel that minimizes the need for expert knowledge while enhancing interactivity with the as-built digital data: Virtual Heritage Dissemination through the production of VR content. Driven by the ProDigiOUs project's guidelines on data dissemination (EU funded), this paper advances a production path to transform the point cloud into virtual stereoscopic spherical images, taking into account the different visual features that produce depth perception, and especially those prompting visual fatigue while experiencing the VR content. Finally, we present the results of the Hiedanranta scans transformed into stereoscopic spherical animations.
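The geometric core of the spherical image production is the mapping from scanned points to equirectangular pixel coordinates around a chosen viewpoint. The sketch below shows only that projection step, with made-up points; stereo pair generation, depth-aware splatting and the fatigue-related adjustments discussed above are out of scope.

```python
# Projecting 3D points to equirectangular (spherical) image coordinates.
import numpy as np

def to_equirectangular(points, eye, width=2048, height=1024):
    """points: (N, 3) array; eye: viewpoint; returns pixel coordinates and depth."""
    v = points - eye
    r = np.linalg.norm(v, axis=1)
    lon = np.arctan2(v[:, 1], v[:, 0])                                   # [-pi, pi]
    lat = np.arcsin(np.clip(v[:, 2] / np.maximum(r, 1e-9), -1.0, 1.0))   # [-pi/2, pi/2]
    px = ((lon + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    py = ((np.pi / 2 - lat) / np.pi * (height - 1)).astype(int)
    return px, py, r

pts = np.random.default_rng(5).uniform(-10, 10, (1000, 3))   # stand-in for scanned points
px, py, depth = to_equirectangular(pts, eye=np.array([0.0, 0.0, 1.6]))
print(px.min(), px.max(), py.min(), py.max())
```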
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Dataflow descriptions have been used in a wide range of Digital Signal Processing (DSP) applications, such as multimedia processing and wireless communications. Among various forms of dataflow modeling, Synchronous Dataflow (SDF) is geared towards static scheduling of computational modules, which improves system performance and predictability. However, many DSP applications do not fully conform to the restrictions of SDF modeling. More general dataflow models, such as CAL (Eker and Janneck 2003), have been developed to describe dynamically structured DSP applications. Such generalized models can express dynamically changing functionality, but lose the powerful static scheduling capabilities provided by SDF. This paper focuses on the detection of SDF-like regions in dynamic dataflow descriptions, in particular in the generalized specification framework of CAL. This is an important step for applying static scheduling techniques within a dynamic dataflow framework. Our techniques combine the advantages of different dataflow languages and tools, including CAL (Eker and Janneck 2003), DIF (Hsu et al. 2005) and CAL2C (Roquier et al. 2008). In addition to detecting SDF-like regions, we apply existing SDF scheduling techniques to exploit the static properties of these regions within enclosing dynamic dataflow models. Furthermore, we propose an optimized approach for mapping SDF-like regions onto parallel processing platforms such as multi-core processors.
Research output: Contribution to journal › Article › Scientific › peer-review
Reliability is a very important non-functional aspect of software systems and artefacts. In the literature, several definitions of software reliability exist, and several methods and approaches exist to measure the reliability of a software project. However, no works in the literature focus on the applicability of these methods in all the development phases of real software projects. In this paper, we describe the methodology we adopted during the S-CASE FP7 European Project to predict reliability both for the S-CASE platform and for the software artefacts automatically generated by using the S-CASE platform. Two approaches have been adopted to compute reliability: the first is the Rome Lab Model, a traditional approach widely adopted in industry; the second is an empirical approach defined by the authors in a previous work. An extensive dataset of results has been collected during all the phases of the project. The two approaches can complement each other to support the prediction of reliability during all the development phases of a software system, in order to facilitate project management from a non-functional point of view.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Cyber-attacks have grown in importance to become a matter of national security. A growing number of states and organisations around the world have been developing defensive and offensive capabilities for cyber warfare. Security criteria are important tools for defensive capabilities of critical communications and information systems (CIS). Various criteria have been developed for designing, implementing and auditing CIS. The paper is based on work done from 2008 to 2016 at FICORA, the Finnish Communications Regulatory Authority. FICORA has actively participated in development and usage of three versions of Katakri, the Finnish national security audit criteria. Katakri is a tool for assessing the capability of an organisation to safeguard classified information. While built for governmental security authorities, usefulness for the private sector has been a central design goal of the criteria throughout its development. Experiences were gathered from hundreds of CIS security audits conducted against all versions of Katakri. Feedback has been gathered also from CIS audit target organisations including governmental authorities and the private sector, from other Finnish security authorities, from FICORA's accredited third party Information Security Inspection Bodies, and from public sources. This paper presents key lessons learnt and discusses recommendations for the design and implementation of security criteria. Security criteria have significant direct impacts on CIS design and implementation. Criteria design is always a trade-off between the varying goals of the target users. Katakri has tried to strike a balance between the different needs for security criteria. The paper recommends that criteria design should stem from a small set of strictly defined use cases. Trying to cover the needs of a wide variety of different use cases quickly renders the criteria useless as an assessment tool. In order to provide sufficient information assurance, security criteria should describe requirements on a reasonably concrete level, but also provide support for the security and risk management processes of the target users.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper highlights the performance of single path multiple access (SPMA) and discusses the performance comparison between higher-order sectorization and SPMA in a macrocellular environment. The target of this paper is to emphasize the gains and significance of the novel concept of SPMA in achieving better and more homogeneous SIR and enhanced system capacity in a macrocellular environment. This paper also explains the algorithm of SIR computation in SPMA. The results presented in this paper are based on sophisticated 3D ray tracing simulations performed with real-world 3D building data and site locations from Seoul, South Korea. A macrocellular environment dominated by indoor users was considered for the research purposes of this paper. It is found that by increasing the order of sectorization, SIR along with spectral efficiency degrades due to the increase in inter-cell interference. However, as a result of better area spectral efficiency due to the increased number of sectors (cells), higher-order sectorization offers more system capacity compared to the traditional 3-sector site. Furthermore, SPMA shows outstanding performance and significantly improves the SIR for the individual user over the whole coverage area, and also remarkably increases the system capacity. In the environment under consideration, the simulation results reveal that SPMA can offer approximately 424 times more system capacity compared to the reference case of a 3-sector site.
Research output: Contribution to journal › Article › Scientific › peer-review
Local binary pattern (LBP) is a texture operator that is used in several different computer vision applications requiring, in many cases, real-time operation on multiple computing platforms. The arrival of new video standards has increased the typical resolutions and frame rates, which demand considerable computational performance. Since LBP is essentially a pixel operator that scales with image size, typical straightforward implementations are usually insufficient to meet these requirements. To identify the solutions that maximize the performance of real-time LBP extraction, we compare a series of different implementations in terms of computational performance and energy efficiency, while analyzing the different optimizations that can be made to reach real-time performance on multiple platforms and their different available computing resources. Our contribution extends the existing body of LBP implementations on different platforms that can be found in the literature. To provide a more complete evaluation, we have implemented the LBP algorithms on several platforms, such as graphics processing units, mobile processors and a hybrid programming model image coprocessor. We have also extended the evaluation of some of the solutions that can be found in previous work. In addition, we publish the source code of our implementations.
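For reference, the basic 8-neighbour LBP code that the compared implementations accelerate can be written in a few lines; the NumPy version below is a deliberately unoptimized sketch, not one of the evaluated implementations.

```python
# Basic 8-neighbour LBP codes for the interior pixels of a grayscale image.
import numpy as np

def lbp_basic(img):
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],  img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],  img[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):              # clockwise from the top-left neighbour
        codes |= ((n >= c).astype(np.uint8) << bit)
    return codes

img = np.random.default_rng(6).integers(0, 256, (480, 640), dtype=np.uint8)
print(lbp_basic(img).shape)                            # (478, 638)
```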
Research output: Contribution to journal › Article › Scientific › peer-review
This paper presents an experimental study aimed at investigating the impact of haptic feedback when quantitatively evaluating the topographic heights depicted by height tints. In particular, the accuracy of detecting the heights has been evaluated visually and instrumentally by using the new StickGrip haptic device. The participants were able to discriminate the required heights specified in the scale bar palette and to detect these values within an assigned map region. It was demonstrated that the complementary haptic feedback increased the accuracy of visual estimation of the topographic heights by about 32%.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The purpose of this research is to examine why organizations with similar objectives and environments at the outset obtain different outcomes when implementing enterprise architecture (EA) projects, and how the EA institutionalization process occurs. We conduct a qualitative multiple-case study using the lens of institutional theory through the analysis of intra-organizational relations. The results show that the institutional logic of stakeholders can drive EA projects in different directions during the process of EA institutionalization, and thus organizations ultimately obtain different project outcomes. We contribute by extending the knowledge on EA institutionalization from a micro-level perspective, understanding and explaining how the organizational structure was shaped and influenced by stakeholders’ relations, as well as providing insight into stakeholders’ behaviors and activities during the process of EA institutionalization so that practitioners may improve the success rate of EA projects, particularly in the public sector.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
All-encompassing digitalization and the digital skills gap pressure the current school system to change. Accordingly, to 'digi-jump', the Finnish National Curriculum 2014 (FNC-2014) adds programming to K-12 math. However, we claim that the anticipated addition remains too vague and subtle. Instead, we should take into account the education recommendations set by computer science organizations, such as the ACM, and define clear learning targets for programming. Correspondingly, the whole math syllabus should be critically viewed in the light of these changes and the feedback collected from SW professionals and educators. These findings reveal an imbalance between supply and demand, i.e., what is over-taught versus under-taught, from the point of view of professional requirements. Critics claim an unnecessary surplus of calculus and differential equations, i.e., continuous mathematics. In contrast, the emphasis should shift more towards algorithms and data structures, flexibility in handling multiple data representations, and logic; in summary, discrete mathematics.
EXT="Valmari, Antti"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Today, software teams can deploy new software versions to users at an increasing speed, even continuously. Although this has enabled faster responses to changing customer needs than ever before, automated customer feedback gathering has not yet blossomed at the same level. For these purposes, the automated collection of quantitative data about how users interact with systems can provide software teams with an interesting alternative. When starting such a process, however, teams are immediately faced with difficult decisions: what kind of technique should be used for collecting user-interaction data? In this paper, we describe the reasons for choosing specific collection techniques in three cases and refine a previously designed selection framework based on their data. The study is part of ongoing design science research and was conducted using case study methods. A few distinct criteria which practitioners valued the most arose from the results.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Technology orientation and coding are gaining momentum in Finnish curriculum planning for primary and secondary school. However, according to the existing plans, the scope of ICT teaching is limited to practical topics, e.g., how to drill basic control structures (if-then-else, for, while), without focusing on a high-level epistemological view of ICT. This paper proposes some key extensions to such plans, aimed at highlighting the epistemological factors of teaching rather than concrete means of strengthening the craftsmanship of coding. The proposed approach stems from qualitative data collected by interviewing ICT professionals (N=7, 4 males, 3 females) who have gained experience of industry needs while working as ICT professionals (avg=11.3 y, s=3.9 y). This work illustrates a holistic model of ICT teaching as well as suggests a set of new methods and tools.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This study structures the ecosystem literature by using a bibliometric approach to analyse the theoretical roots of ecosystem studies. Several disciplines, such as innovation, management and software studies, have established their own streams in ecosystem research. This paper reports the results of analysing 601 articles from the Thomson Reuters Web of Science database and identifies ten separate research communities that have established their own thematic ecosystem disciplines. We show that five sub-communities have emerged inside the field of software ecosystems. The software ecosystem literature draws its theoretical background from (1) technical, (2) research methodology, (3) business, (4) management, and (5) strategy-oriented disciplines. The results pave the way for future research by illustrating the existing and missing links and directions in the field of software ecosystems.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Owing to a steadily increasing demand for efficient spectrum utilization as part of the fifth-generation (5G) cellular concept, it becomes crucial to revise the existing radio spectrum management techniques and provide more flexible solutions for the corresponding challenges. A new wave of spectrum policy reforms can thus be envisaged, producing a paradigm shift from static to dynamic orchestration of shared resources. The emerging Licensed Shared Access (LSA) regulatory framework enables flexible spectrum sharing between a limited number of users that access the same frequency bands, while guaranteeing better interference mitigation. In this work, an advanced user satisfaction-aware spectrum management strategy for dynamic LSA management in 5G networks is proposed to balance both connected user satisfaction and Mobile Network Operator (MNO) resource utilization. The approach is based on an MNO decision policy that combines both pricing and rejection rules in the implemented processes. Our study offers a classification built over several types of users, different corresponding attributes, and a number of the MNO's decision scenarios. Our investigations are built on the Criteria-Based Resource Management (CBRM) framework, which has been specifically designed to facilitate dynamic LSA management in 5G mobile networks. To verify the proposed model, the results (spectrum utilization, estimated Secondary User price for the future connection, and user selection methodology in the case of the user rejection process) are validated numerically, and we draw important conclusions on the applicability of our approach, which may offer valuable guidelines for efficient radio spectrum management in highly dynamic and heterogeneous 5G environments.
Research output: Contribution to journal › Article › Scientific › peer-review
The interpretive grounded theory (GT) study analyses information system (IS) enabled organizational change in two private sector organizations. These two organizations, long-term partners, were developing a new IS product for divergent markets. The data was gathered through 15 interviews conducted at the phase of initial rollouts. The findings focus on the results of the theoretical coding phase, in which selective codes, referred to as change management activities, are related to each other. As a theoretical contribution, the dynamic structure presents how the change management activities appear differently depending on a set of choices. Several paradoxical situations stemmed from inconsistencies and/or tensions, because the choices did not support the targeted change management activities. The study thus proposes that there is an increasing demand to analyze the sources of paradoxical situations. Paradoxical situations were identified in five pairs of opposing forces: long term vs. short term, macro vs. micro, past vs. future, centralized vs. distributed, and control vs. trust/self-organization. Some paradoxical situations arose because of the nature of the trust-based IS partnership, while others were socially constructed as a result of unintended consequences of actions in relation to the strategic goals. Managerial efforts are increasingly required for identifying paradoxical situations at an early stage and for considering the right balance between the opposing forces in the dynamic IS change process.
Research output: Contribution to journal › Article › Scientific › peer-review
Since the birth of computers and networks, and fuelled by pervasive computing, the Internet of Things and ubiquitous connectivity, the amount of data stored and transmitted has grown exponentially through the years. Due to this demand, new storage solutions are needed. One promising medium is DNA, as it provides numerous advantages, including the ability to store dense information while achieving long-term reliability. However, the question of how the data can be retrieved from a DNA-based archive still remains. In this paper, we aim to address this question by proposing a new storage solution that relies on bacterial nanonetwork properties. Our solution allows digitally encoded DNA to be stored in motility-restricted bacteria, which compose an archival architecture of clusters, and to be later retrieved by engineered motile bacteria whenever reading operations are needed. We conducted extensive simulations in order to determine the reliability of data retrieval from motility-restricted storage clusters placed spatially at different locations. Aiming to assess the feasibility of our solution, we have also conducted wet lab experiments that show how bacterial nanonetworks can effectively retrieve a simple message, such as "Hello World", by conjugation with motility-restricted bacteria, and finally mobilize towards a target point for delivery.
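To make "digitally encoded DNA" concrete, the sketch below uses the generic 2-bits-per-nucleotide mapping to encode and decode the "Hello World" message; this textbook mapping is only an illustration and is not claimed to be the encoding, error handling or retrieval mechanism studied in the paper.

```python
# Generic 2-bits-per-nucleotide encoding of a text message into a DNA strand.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {v: k for k, v in BASE_FOR_BITS.items()}

def encode(message: str) -> str:
    bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8"))
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> str:
    bits = "".join(BITS_FOR_BASE[b] for b in strand)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

strand = encode("Hello World")
print(strand)
print(decode(strand))            # "Hello World"
```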
Research output: Contribution to journal › Article › Scientific › peer-review
Increasingly, researchers have come to acknowledge that consumption activities entail both utilitarian and hedonic components. Whereas utilitarian consumption accentuates the achievement of predetermined outcomes typical of cognitive consumer behavior, its hedonic counterpart relates to affective consumer behavior in dealing with the emotive and multisensory aspects of the shopping experience. Consequently, while utilitarian consumption activities appeal to the rationality of customers in inducing their intellectual buy-in of the shopping experience, customers’ corresponding emotional buy-in can only be attained through the presence of hedonic consumption activities. The same can be said for online shopping. Because the online shopping environment is characterized by the existence of an IT-enabled web interface that acts as the focal point of contact between customers and vendors, its design should embed utilitarian and hedonic elements to create a holistic shopping experience. Building on Expectation Disconfirmation Theory (EDT), this study advances a research model that not only delineates between customers’ utilitarian and hedonic expectations for online shopping but also highlights how these expectations can be best served through functional and esthetic performance, respectively. Furthermore, we introduce online shopping experience (i.e., transactional frequency) as a moderator affecting not only how customers form utilitarian and hedonic expectations but also how they evaluate the functional and esthetic performances of e-commerce sites. The model is then empirically validated via an online survey questionnaire administered on a sample of 303 respondents. Theoretical contributions and pragmatic implications to be gleaned from our research model and its subsequent empirical validation are discussed.
Research output: Contribution to journal › Article › Scientific › peer-review
This article presents results on how students became engaged and motivated when using digital storytelling in knowledge creation in Finland, Greece and California. The theoretical framework is based on sociocultural theories. Learning is seen as a result of dialogical interactions between people, substances and artefacts. This approach has been used in the creation of the Global Sharing Pedagogy (GSP) model for the empirical study of student levels of engagement in learning twenty-first century skills. This model presents a set of conceptual mediators for student-driven knowledge creation, collaboration, networking and digital literacy. Data from 319 students were collected using follow-up questionnaires after the digital storytelling project. Descriptive statistical methods, correlations, analysis of variance and regression analysis were used. The mediators of the GSP model strongly predicted student motivation and enthusiasm as well as their learning outcomes. The digital storytelling project, using the technological platform Mobile Video Experience (MoViE), was very successful in teaching twenty-first century skills.
Research output: Contribution to journal › Article › Scientific › peer-review
Context: DevOps is considered important for the ability to frequently and reliably update a system in an operational state. DevOps presumes cross-functional collaboration and automation between software development and operations. DevOps adoption and implementation in companies is non-trivial due to required changes in technical, organisational and cultural aspects. Objectives: This exploratory study presents detailed descriptions of how DevOps is implemented in practice. The context of our empirical investigation is web application and service development in small and medium-sized companies. Method: A multiple-case study was conducted in five different development contexts with successful DevOps implementations, where benefits such as quick releases and minimal deployment errors were achieved. Data was mainly collected through interviews with 26 practitioners and observations made at the companies. Data was analysed by first coding each case individually using a set of predefined themes and thereafter performing a cross-case synthesis. Results: Our analysis yielded the following results: (i) the software development team attaining ownership of and responsibility for deploying software changes in production is crucial in DevOps; (ii) toolchain usage and support in deployment pipeline activities accelerate the delivery of software changes, bug fixes and the handling of production incidents; (iii) the delivery speed to production is affected by context factors, such as manual approvals by the product owner; (iv) a steep learning curve for new skills is experienced by both software developers and operations staff, who also have to cope with working under pressure. Conclusion: Our findings contribute to the overall understanding of the DevOps concept, practices and their perceived impacts, particularly in small and medium-sized companies. We discuss two practical implications of the results.
EXT="Mikkonen, Tommi"
Research output: Contribution to journal › Article › Scientific › peer-review
Browsers have become the most common communication channel. We spend hours using them to get news and communicate with friends, far more time than we spend communicating face-to-face. WWW-based communication and content creation for the web will be the most common job in the future working life of students specializing in software engineering. We expect our screens to be colorful and animated, so students should understand the technologies used, for example, to paint a jumping Mario on the screen. However, the massive flow of new software engineering ideas, technologies and frameworks, which appear at an ever-increasing tempo, tends to make students passive receivers of descriptions of new menus and commands without giving them any possibility to investigate and understand what is behind these menus and commands, killing their natural curiosity. There should be time to experiment, compare formats and technologies, and investigate their relations. The presentation describes experiments used to investigate how different formats for describing animation in an HTML5 document influence animation rendering speed.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Dataflow programming has received increasing attention in the age of multicore and heterogeneous computing. Modular and concurrent dataflow program descriptions enable highly automated approaches for design space exploration, optimization and deployment of applications. A great advance in dataflow programming has been the recent introduction of the RVC-CAL language. Having been standardized by the ISO, the RVC-CAL dataflow language provides a solid basis for the development of tools, design methodologies and design flows. This paper proposes a novel design flow for mapping RVC-CAL dataflow programs to parallel and heterogeneous execution platforms. Through the proposed design flow the programmer can describe an application in the RVC-CAL language and map it to multi- and many-core platforms, as well as GPUs, for efficient execution. The functionality and efficiency of the proposed approach is demonstrated by a parallel implementation of a video processing application and a run-time reconfigurable filter for telecommunications. Experiments are performed on GPU and multicore platforms with up to 16 cores, and the results show that for high-performance applications the proposed design flow provides up to 4 × higher throughput than the state-of-the-art approach in multicore execution of RVC-CAL programs.
Research output: Contribution to journal › Article › Scientific › peer-review
The transport sector is constantly growing, as are its complexity and energy consumption. One way for city authorities to reduce the effort and the volume of data needed to evaluate and monitor the energy efficiency of the sector is to use Key Performance Indicators (KPIs). This paper describes a set of KPIs to measure and track energy efficiency in the transport sector. The KPIs summarized in this paper were identified based on a literature review of mobility projects, strategies and policies that had been implemented in cities around the world. Future applications, which are presented at the end of this article, will give a better understanding of the system and its components.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Full use of the parallel computation capabilities of present and expected CPUs and GPUs requires use of vector extensions. Yet many actors in data flow systems for digital signal processing have internal state (or, equivalently, an edge that loops from the actor back to itself) that imposes serial dependencies between actor invocations that make vectorizing across actor invocations impossible. Ideally, issues of inter-thread coordination required by serial data dependencies should be handled by code written by parallel programming experts that is separate from code specifying signal processing operations. The purpose of this paper is to present one approach for doing so in the case of actors that maintain state. We propose a methodology for using the parallel scan (also known as prefix sum) pattern to create algorithms for multiple simultaneous invocations of such an actor that result in vectorizable code. Two examples of applying this methodology are given: (1) infinite impulse response filters and (2) finite state machines. The correctness and performance of the resulting IIR filters and one class of FSMs are studied.
Research output: Contribution to journal › Article › Scientific › peer-review
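To make the prefix-scan idea described above concrete, the sketch below (not taken from the paper) rewrites a first-order IIR filter y[n] = a*y[n-1] + b*x[n] as an inclusive scan over affine maps y -> A*y + B. Composing such maps is associative, which is what permits vectorized or tree-parallel evaluation across invocations; the loop here is written serially only for clarity.

```python
import numpy as np

def iir_serial(x, a, b):
    """Reference first-order IIR: y[n] = a*y[n-1] + b*x[n], with y[-1] = 0."""
    y = np.empty_like(x, dtype=float)
    prev = 0.0
    for n, xn in enumerate(x):
        prev = a * prev + b * xn
        y[n] = prev
    return y

def combine(m1, m2):
    """Compose two affine maps y -> A*y + B: first m1, then m2.
    This operator is associative, which is what makes a parallel
    (tree-structured) prefix scan possible."""
    A1, B1 = m1
    A2, B2 = m2
    return A2 * A1, A2 * B1 + B2

def iir_scan(x, a, b):
    """Same filter via an inclusive prefix scan over per-sample affine maps.
    The combine() step is the part a vectorized scan would run in log depth."""
    acc = (1.0, 0.0)                      # identity map
    out = np.empty_like(x, dtype=float)
    for n, xn in enumerate(x):
        acc = combine(acc, (a, b * xn))   # append the map of sample n
        out[n] = acc[1]                   # y[-1] = 0, so y[n] is the offset term
    return out

x = np.random.randn(1024)
assert np.allclose(iir_serial(x, 0.9, 0.1), iir_scan(x, 0.9, 0.1))
```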
Full use of the parallel computation capabilities of present and expected CPUs and GPUs requires use of vector extensions. Yet many actors in data flow systems for digital signal processing have internal state (or, equivalently, an edge that loops from the actor back to itself) that imposes serial dependencies between actor invocations that make vectorizing across actor invocations impossible. Ideally, issues of inter-thread coordination required by serial data dependencies should be handled by code written by parallel programming experts that is separate from code specifying signal processing operations. The purpose of this paper is to present one approach for doing so in the case of actors that maintain state. We propose a methodology for using the parallel scan (also known as prefix sum) pattern to create algorithms for multiple simultaneous invocations of such an actor that result in vectorizable code. Two examples of applying this methodology are given: (1) infinite impulse response filters and (2) finite state machines. The correctness and performance of the resulting IIR filters are studied.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper introduces the Resource Interface ontology, intended to formally capture hardware interface information of production resources. It also proposes an interface matchmaking method, which uses this information to judge whether two resources can be physically connected with each other. The matchmaking method works on two levels of detail, coarse and fine. The proposed Resource Interface ontology and matchmaking method can be utilised during production system design or reconfiguration by system integrators or end users. They will benefit from fast and automatic resource searches over large resource catalogues. At the end of the paper, a validation of the method is provided with a test ontology.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Dictionary learning is usually approached by looking at the support of the sparse representations. Recent years have shown results in dictionary improvement by investigating the cosupport via the analysis-based cosparse model. In this paper we present a new cosparse learning algorithm for orthogonal dictionary blocks that provides significant dictionary recovery improvements and representation error shrinkage. Furthermore, we show the beneficial effects of using this algorithm inside existing methods based on building the dictionary as a structured union of orthonormal bases.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This publication addresses two bottlenecks in the construction of minimal coverability sets of Petri nets: the detection of situations where the marking of a place can be converted to ω, and the manipulation of the set A of maximal ω-markings that have been found so far. For the former, a technique is presented that consumes very little time in addition to what maintaining A consumes. It is based on Tarjan's algorithm for detecting maximal strongly connected components of a directed graph. For the latter, a data structure is introduced that resembles BDDs and Covering Sharing Trees, but has additional heuristics designed for the present use. Results from a few experiments are shown. They demonstrate significant savings in running time and varying savings in memory consumption compared to an earlier state-of-the-art technique.
Research output: Contribution to journal › Article › Scientific › peer-review
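The ω-detection technique described above builds on Tarjan's algorithm for maximal strongly connected components. The coverability-set-specific data structures of the paper are not reproduced here; the following is only a textbook sketch of the underlying SCC algorithm, with the directed graph given as a hypothetical adjacency dictionary.

```python
def tarjan_scc(graph):
    """Standard Tarjan SCC over a graph given as {node: [successors]}.
    Returns a list of strongly connected components (each a list of nodes)."""
    index, lowlink = {}, {}      # discovery index and lowest reachable index
    stack, on_stack = [], set()
    sccs, counter = [], [0]

    def strongconnect(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:          # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs

# Example: one component {a, b, c} and a singleton {d}.
print(tarjan_scc({"a": ["b"], "b": ["c"], "c": ["a", "d"], "d": []}))
```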
Passenger transport is becoming more and more connected and multimodal. Instead of just taking a series of vehicles to complete a journey, the passenger is actually interacting with a connected cyber-physical social (CPS) transport system. In this study, we present a case study where big data from various sources is combined and analyzed to support and enhance the transport system in the Tampere region. Different types of static and real-time data sources and transportation-related APIs are investigated. The goal is to find ways in which big data and collaborative networks can be used to improve the CPS transport system itself and the passenger satisfaction related to it. The study shows that even though the exploitation of big data does not directly improve the state of the physical transport infrastructure, it helps in utilizing more of its capacity. Secondly, the use of big data makes the transport system more attractive to passengers.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
At present, cellular coverage in many rural areas remains intermittent. Mobile operators may not be willing to deploy expensive network infrastructure to support low-demand regions. For that reason, solutions for the rapid deployment of base stations in areas with insufficient or damaged operator infrastructure are emerging. Utilization of unmanned aerial vehicles (UAVs) or drones serving as data relays holds significant promise for delivering on-demand connectivity as well as providing public safety services or aiding in recovery after communication infrastructure failures caused by natural disasters. The use of UAVs in provisioning high-rate radio connectivity and bringing it to remote locations is also envisioned as a potential application for fifth-generation (5G) communication systems. In this study, we introduce a prototype solution for an aerial base station, where connectivity between a drone and a base station is provided via a directional microwave link. Our prototype is equipped with a steering mechanism driven by a dedicated algorithm to support such connectivity. Our experimental results demonstrate early-stage connectivity and signal strength measurements that were gathered with our prototype. Our results are also compared against the free-space model. These findings support the emerging vision of aerial base stations as part of the 5G ecosystem and beyond.
EXT="Pyattaev, Alexander"
Research output: Contribution to journal › Article › Scientific › peer-review
The paper considers the possible use of computer vision systems for INS aiding. Two methods of obtaining navigation data from an image sequence are analyzed. The first method uses the features of architectural elements in indoor and urban conditions to generate object attitude parameters. The second method is based on the extraction of general features in the image and is more widely applicable. Besides the orientation parameters, the second method estimates the object displacement and can thus be used as a visual odometry technique. The described algorithms can be used to develop small-sized MEMS navigation systems that operate efficiently in urban conditions.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
A method to adjust the mean-squared-error (MSE) value for coded video quality assessment is investigated in this work by incorporating subjective human visual experience. First, we propose a linear model between the mean opinion score (MOS) and a logarithmic function of the MSE value of coded video under a range of coding rates. This model is validated by experimental data. With further simplification, the model contains only one parameter to be determined by video characteristics. Next, we adopt a machine learning method to learn this parameter. Specifically, we select features to classify video content into groups, where videos in each group are more homogeneous in their characteristics. Then, a proper model parameter can be trained and predicted within each video group. Experimental results on a coded video database are given to demonstrate the effectiveness of the proposed algorithm.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
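The exact model parameterization and the learned parameter are defined in the paper and are not reproduced here. Purely as an illustration of the general form, a least-squares fit of a generic relation MOS ≈ a - b * log(MSE) on hypothetical (MSE, MOS) pairs could look as follows.

```python
import numpy as np

# Hypothetical (MSE, MOS) pairs for one video coded at several rates.
mse = np.array([12.0, 25.0, 60.0, 140.0, 300.0])
mos = np.array([4.6, 4.1, 3.4, 2.7, 2.1])

# Ordinary least-squares fit of MOS ~ a - b * log(MSE).
X = np.column_stack([np.ones_like(mse), -np.log(mse)])
(a, b), *_ = np.linalg.lstsq(X, mos, rcond=None)

pred = a - b * np.log(mse)
print(f"a = {a:.2f}, b = {b:.2f}, "
      f"max abs. error = {np.abs(pred - mos).max():.2f} MOS units")
```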
In this paper we present a design structure matrix (DSM) of a complex elevator system. The DSM was created with system experts to enable solving complex system development problems via a product DSM. The data is intended to be used as a case study in a DSM design sprint and to show the diversity of findings that can be ascertained from a single DSM matrix. In the spirit of open science, we present both the DSM and the design sprint to enable other researchers to replicate, reproduce or otherwise build on the same source of data.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
We present a comparative split-half resampling analysis of various data-driven feature selection and classification methods for whole-brain voxel-based classification analysis of anatomical magnetic resonance images. We compared support vector machines (SVMs), with or without filter-based feature selection, several embedded feature selection methods and stability selection. While comparisons of the accuracy of various classification methods have been reported previously, the variability of the out-of-training-sample classification accuracy and of the set of selected features due to independent training and test sets has not been previously addressed in a brain imaging context. We studied two classification problems: 1) Alzheimer’s disease (AD) vs. normal control (NC) and 2) mild cognitive impairment (MCI) vs. NC classification. In AD vs. NC classification, the variability in the test accuracy due to the subject sample did not differ between methods and exceeded the variability due to different classifiers. In MCI vs. NC classification, particularly with a large training set, embedded feature selection methods outperformed SVM-based ones, with the difference in the test accuracy exceeding the test accuracy variability due to the subject sample. The filter and embedded methods produced divergent feature patterns for MCI vs. NC classification, which, together with the good generalization performance, suggests the utility of embedded feature selection for this problem. The stability of the feature sets was strongly correlated with the number of features selected, weakly correlated with the stability of classification accuracy, and uncorrelated with the average classification accuracy.
EXT="Tohka, Jussi"
Research output: Contribution to journal › Article › Scientific › peer-review
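As a minimal, self-contained sketch of the split-half resampling idea (not the authors' pipeline, and using synthetic data in place of voxel features), one can repeatedly split the sample in half, train a linear SVM with a univariate filter on one half, test on the other, and track both the spread of the test accuracy and the stability of the selected feature sets:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Synthetic stand-in for subjects x voxels data.
X, y = make_classification(n_samples=200, n_features=2000, n_informative=30,
                           random_state=0)

splitter = StratifiedShuffleSplit(n_splits=20, test_size=0.5, random_state=0)
accs, selected_sets = [], []
for train_idx, test_idx in splitter.split(X, y):
    selector = SelectKBest(f_classif, k=100)          # filter-based selection
    clf = make_pipeline(selector, LinearSVC(C=1.0, dual=False))
    clf.fit(X[train_idx], y[train_idx])
    accs.append(clf.score(X[test_idx], y[test_idx]))
    selected_sets.append(frozenset(np.flatnonzero(selector.get_support())))

# Accuracy variability and mean pairwise Jaccard overlap of the feature sets.
jac = [len(a & b) / len(a | b)
       for i, a in enumerate(selected_sets) for b in selected_sets[i + 1:]]
print(f"accuracy: {np.mean(accs):.3f} +/- {np.std(accs):.3f}, "
      f"mean feature-set Jaccard: {np.mean(jac):.3f}")
```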
Context: Eliciting requirements from customers is a complex task. In Agile processes, the customer talks directly with the development team and often reports requirements in an unstructured way. The requirements elicitation process is up to the developers, who split it into user stories by means of different techniques. Objective: We aim to compare the requirements decomposition process of an unstructured process and three Agile processes, namely XP, Scrum, and Scrum with Kanban. Method: We conducted a multiple case study with a replication design, based on the project idea of an entrepreneur, a designer with no experience in software development. Four teams developed the project independently, using four different development processes. The requirements were elicited by the teams from the entrepreneur, who acted as product owner and was available to talk with the four groups during the project. Results: The teams decomposed the requirements using different techniques, based on the selected development process. Conclusion: Scrum with Kanban and XP resulted in the most effective processes from different points of view. Unexpectedly, decomposition techniques commonly adopted in traditional processes are still used in Agile processes, which may reduce project agility and performance. Therefore, we believe that decomposition techniques need to be addressed to a greater extent, both from the practitioners’ and the research points of view.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Background: Molecular descriptors have been extensively used in the field of structure-oriented drug design and structural chemistry. They have been applied in QSPR and QSAR models to predict ADME-Tox properties, which specify essential features for drugs. Molecular descriptors capture chemical and structural information, but investigating their interpretation and meaning remains very challenging. Results: This paper introduces a large-scale database of molecular descriptors called COMMODE, containing more than 25 million compounds originating from PubChem. About 2500 DRAGON descriptors have been calculated for all compounds and integrated into this database, which is accessible through a web interface at http://commode.i-med.ac.at.
Research output: Contribution to journal › Article › Scientific › peer-review
The divergence similarity between two color images is presented based on the Jensen-Shannon divergence to measure the color-distribution similarity. Subjective assessment experiments were developed to obtain mean opinion scores (MOS) of test images. It was found that the divergence similarity and MOS values showed statistically significant correlations.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
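The paper defines its own divergence similarity measure; the snippet below only illustrates the underlying Jensen-Shannon divergence between two normalized color histograms. The 8x8x8 RGB binning and the mapping from divergence to a similarity score are placeholder choices, not the paper's formulation.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Normalized joint RGB histogram of an HxWx3 uint8 image."""
    hist, _ = np.histogramdd(img.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence with base-2 logs, so the value lies in [0, 1]."""
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy usage with random "images"; similarity = 1 - JSD is a placeholder mapping.
rng = np.random.default_rng(0)
img1 = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
img2 = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
jsd = js_divergence(color_histogram(img1), color_histogram(img2))
print(f"JSD = {jsd:.4f}, similarity = {1 - jsd:.4f}")
```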
In this preliminary research we examine the suitability of hierarchical strategies of multi-class support vector machines for the classification of induced pluripotent stem cell (iPSC) colony images. The iPSC technology offers great possibilities for safe and patient-specific drug therapy without ethical problems. However, growing iPSCs is a sensitive process and abnormalities may occur during growth. These abnormalities need to be recognized, so the problem reduces to image classification. We have a collection of 80 iPSC colony images, each pre-labeled by an expert as bad, good or semigood. We use intensity histograms as features for classification and evaluate histograms both from the whole image and from the colony area only, yielding two datasets. We perform two feature reduction procedures for both datasets. In classification we examine how different hierarchical constructions affect the results. We performed a thorough evaluation, and the best accuracy, around 54%, was obtained with the linear kernel function. In many cases there were no significant differences in results between the different hierarchical structures. As a result, intensity histograms are a good baseline for the classification of iPSC colony images, but more sophisticated feature extraction and reduction methods, together with other classification methods, need to be researched in the future.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
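The colony images themselves are not available here, so the following scikit-learn sketch only illustrates the overall pipeline on synthetic data: intensity histograms as features and linear-kernel SVMs arranged in one possible two-stage hierarchy ('bad' vs. the rest, then 'good' vs. 'semigood'). The hierarchy, image sizes and labels below are assumptions for illustration, not the configuration reported in the paper.

```python
import numpy as np
from sklearn.svm import SVC

def intensity_histogram(img, bins=32):
    """Normalized grayscale intensity histogram used as the feature vector."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return hist / hist.sum()

def fit_hierarchy(X, y):
    """Stage 1: 'bad' vs. the rest; stage 2: 'good' vs. 'semigood'."""
    top = SVC(kernel="linear").fit(X, np.where(y == "bad", "bad", "rest"))
    rest = (y != "bad")
    bottom = SVC(kernel="linear").fit(X[rest], y[rest])
    return top, bottom

def predict_hierarchy(top, bottom, X):
    out = top.predict(X).astype(object)
    mask = (out == "rest")
    if mask.any():
        out[mask] = bottom.predict(X[mask])
    return out

# Synthetic stand-in for 80 pre-labelled colony images.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(80, 128, 128))
labels = np.array(["bad"] * 27 + ["good"] * 27 + ["semigood"] * 26)
perm = rng.permutation(80)
X = np.array([intensity_histogram(im) for im in images])[perm]
labels = labels[perm]

top, bottom = fit_hierarchy(X[:60], labels[:60])
print("test accuracy:", (predict_hierarchy(top, bottom, X[60:]) == labels[60:]).mean())
```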
Digital Rights Management (DRM) is an important business enabler for the digital content industry. Rights exporting is one of the crucial tasks in providing the interoperability of DRM. Trustworthy rights exporting is required by both the end users and the DRM systems. We propose a set of principles for trustworthy rights exporting by analysing the characteristics of rights exporting. Based on these principles, we provide some suggestions on how trustworthy rights exporting should be performed.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The unprecedented proliferation of smart devices together with novel communication, computing, and control technologies have paved the way for A-IoT. This development involves new categories of capable devices, such as high-end wearables, smart vehicles, and consumer drones aiming to enable efficient and collaborative utilization within the smart city paradigm. While massive deployments of these objects may enrich people's lives, unauthorized access to said equipment is potentially dangerous. Hence, highly secure human authentication mechanisms have to be designed. At the same time, human beings desire comfortable interaction with the devices they own on a daily basis, thus demanding authentication procedures to be seamless and user-friendly, mindful of contemporary urban dynamics. In response to these unique challenges, this work advocates for the adoption of multi-factor authentication for A-IoT, such that multiple heterogeneous methods - both well established and emerging - are combined intelligently to grant or deny access reliably. We thus discuss the pros and cons of various solutions as well as introduce tools to combine the authentication factors, with an emphasis on challenging smart city environments. We finally outline the open questions to shape future research efforts in this emerging field.
Research output: Contribution to journal › Article › Scientific › peer-review
Context: Global software development (GSD), although now a norm in the software industry, carries with it enormous challenges mostly regarding communication and coordination. Aforementioned challenges are highlighted when there is a need to transfer knowledge between sites, particularly when software artifacts assigned to different sites depend on each other. The design of the software architecture and associated task dependencies play a major role in reducing some of these challenges. Objective: The current literature does not provide a cohesive picture of how the distributed nature of software development is taken into account during the design phase: what to avoid, and what works in practice. The objective of this paper is to gain an understanding of software architecting in the context of GSD, in order to develop a framework of challenges and solutions that can be applied in both research and practice. Method: We conducted a systematic literature review (SLR) that synthesises (i) challenges which GSD imposes on software architecture design, and (ii) recommended practices to alleviate these challenges. Results: We produced a comprehensive set of guidelines for performing software architecture design in GSD based on 55 selected studies. Our framework comprises nine key challenges with 28 related concerns, and nine recommended practices, with 22 related concerns for software architecture design in GSD. These challenges and practices were mapped to a thematic conceptual model with the following concepts: Organization (Structure and Resources), Ways of Working (Architecture Knowledge Management, Change Management and Quality Management), Design Practices, Modularity and Task Allocation. Conclusion: The synthesis of findings resulted in a thematic conceptual model of the problem area, a mapping of the key challenges to practices, and a concern framework providing concrete questions to aid the design process in a distributed setting. This is a first step in creating more concrete architecture design practices and guidelines.
Research output: Contribution to journal › Article › Scientific › peer-review
This paper considers a scenario where we have multiple pre-trained detectors for detecting an event and a small dataset for training a combined detection system. We build the combined detector as a Boolean function of thresholded detector scores and implement it as a binary classification cascade. The cascade structure is computationally efficient because it provides the possibility of early termination. For the proposed Boolean combination function, the computational load of classification is reduced whenever the function becomes determinate before all the component detectors have been evaluated. We also propose an algorithm that selects all the needed thresholds for the component detectors within the proposed Boolean combination. We present results on two audio-visual datasets, which demonstrate the efficiency of the proposed combination framework. We achieve state-of-the-art accuracy with substantially reduced computation time in a laughter detection task, and our algorithm finds better thresholds for the component detectors within the Boolean combination than the other algorithms found in the literature.
Research output: Contribution to journal › Article › Scientific › peer-review
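In the paper, the Boolean combination function and the detector thresholds are learned from data. The sketch below only illustrates the early-termination property of such a cascade, using a hypothetical fixed function (d1 AND d2) OR d3 and toy score functions standing in for real detectors.

```python
def cascade_detect(detectors, thresholds, sample):
    """Evaluate detectors lazily; stop once (d1 and d2) or d3 is determinate.

    `detectors` are callables returning a score, `thresholds` the per-detector
    decision thresholds. The Boolean function here is only an example; in the
    paper both the function and the thresholds are selected from training data.
    """
    d = [None, None, None]                 # None = detector not evaluated yet

    def value(i):
        if d[i] is None:
            d[i] = detectors[i](sample) >= thresholds[i]
        return d[i]

    if value(0) and value(1):              # d1 AND d2 already decides "True";
        return True                        # `and` also skips d2 if d1 is False
    return value(2)                        # otherwise d3 decides

# Toy usage: for sample=0.8 the cascade stops without ever needing all scores.
dets = [lambda x: x * 0.9, lambda x: x * 0.5, lambda x: 1.0 - x]
print(cascade_detect(dets, thresholds=[0.5, 0.5, 0.5], sample=0.8))
```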
The software industry is in the middle of a major change in how services provided by all kinds of information systems are offered to users. This change has been initiated by customers who no longer want to carry the same responsibilities and risks they previously did as system owners. Consequently, software vendors need to change their mindsets from software developers to service providers, able to constantly satisfy the changing and new needs of their customers. The transformation from license-based software development to a SaaS offering poses challenges related not only to technical issues but to a great extent also to organisational and even mental issues. We reflect on the experiences of this transformation gathered from two software companies and, based on these, present some prerequisites and guidelines for the transformation to succeed. In conclusion, with the SaaS model, many of the principles manifested by the agile movement can and should be followed closely, and the advantages gained with the SaaS model are very close to the objectives set by the agile manifesto.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The paper reports a test exploring how retrieved documents are browsed. The access point to the documents was varied - starting either from the beginning of the document or from the point where relevant information is located - to find out how much browsing and context the users need to judge relevance. Test results reveal different within-document browsing patterns.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Blockchain technology is currently penetrating different sides of the modern ICT community. Most of the devices involved in blockchain-related processes are specially designed, targeting only the mining aspect. At the same time, wearable and mobile devices may also become a part of blockchain operation, especially while they are being charged. The paper considers the possibility of using a large number of constrained devices to support the operation of the blockchain. The utilization of such devices is expected to improve the efficiency of the system and also to attract a more substantial number of users. The authors propose a novel consensus algorithm based on a combination of Proof-of-Work (PoW), Proof-of-Activity (PoA), and Proof-of-Stake (PoS). The paper first overviews the existing strategies and further describes the developed cryptographic primitives used to build a blockchain involving mobile devices. A brief numerical evaluation of the designed system is also provided in the paper.
EXT="Zhidanov, Konstantin"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper provides an analytic performance evaluation of the bit error rate (BER) of underlay decode-and-forward cognitive networks with best relay selection over Rayleigh multipath fading channels. A generalized BER expression valid for arbitrary operational parameters is firstly presented in the form of a single integral, which is then employed for determining the diversity order and coding gain for different best relay selection scenarios. Furthermore, a novel and highly accurate closed-form approximate BER expression is derived for the specific case where relays are located relatively close to each other. The presented results are rather convenient to handle both analytically and numerically, while they are shown to be in good agreement with results from respective computer simulations. In addition, it is shown that as in the case of conventional relaying networks, the behaviour of underlay relaying cognitive networks with best relay selection depends significantly on the number of involved relays.
Research output: Contribution to journal › Article › Scientific › peer-review
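The analytical BER expressions are derived in the paper and are not reproduced here. As an independent illustration only, a Monte Carlo simulation of a simplified dual-hop decode-and-forward link with best relay selection over Rayleigh fading (BPSK, ignoring the underlay interference constraint of the cognitive setting) can be written as follows.

```python
import numpy as np

rng = np.random.default_rng(0)

def ber_best_relay_df(snr_db, n_relays=3, n_bits=200_000):
    """Monte Carlo BER of a simplified dual-hop decode-and-forward link with
    best relay selection (largest min per-hop SNR) over i.i.d. Rayleigh fading
    and BPSK. The underlay interference constraint is deliberately ignored."""
    snr = 10.0 ** (snr_db / 10.0)
    bits = rng.integers(0, 2, n_bits)
    s = 1.0 - 2.0 * bits                                # BPSK symbols +/-1
    g_sr = rng.exponential(1.0, (n_bits, n_relays))     # |h|^2 on the first hop
    g_rd = rng.exponential(1.0, (n_bits, n_relays))     # |h|^2 on the second hop
    k = np.argmax(np.minimum(g_sr, g_rd), axis=1)       # best-relay index per bit
    gsr = g_sr[np.arange(n_bits), k]
    grd = g_rd[np.arange(n_bits), k]
    # Hop 1: relay hard decision; hop 2: destination decision on the forwarded symbol.
    r_hat = np.sign(np.sqrt(snr * gsr) * s + rng.standard_normal(n_bits))
    d_hat = np.sign(np.sqrt(snr * grd) * r_hat + rng.standard_normal(n_bits))
    return float(np.mean(d_hat != s))

for snr_db in (0, 10, 20):
    print(f"{snr_db:>2} dB: BER = {ber_best_relay_df(snr_db):.4f}")
```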
We analyze barriers to task-based information access in molecular medicine, focusing on research tasks, which provide task performance sessions of varying complexity. Molecular medicine is a relevant domain because it offers thousands of digital resources as the information environment. Data were collected through shadowing of real work tasks. Thirty work task sessions were analyzed and the barriers in them identified. The barriers were classified by their character (conceptual, syntactic, and technological) and by their context of appearance (work task, system integration, or system). Also, the work task sessions were grouped into three complexity classes and the frequency of barriers of varying types across task complexity levels was analyzed. Our findings indicate that although most of the barriers are on the system level, there is a quantum of barriers in the integration and work task contexts. These barriers might be overcome through attention to the integrated use of multiple systems, at least for the most frequent uses. This can be done by means of standardization and harmonization of the data and by taking the requirements of the work tasks into account in system design and development, because information access is seldom an end in itself, but rather serves to reach the goals of work tasks.
Research output: Contribution to journal › Article › Scientific › peer-review
RVC-CAL is an actor-based dataflow language that enables concurrent, modular and portable description of signal processing algorithms. RVC-CAL programs can be compiled to implementation languages such as C/C++ and VHDL for producing software or hardware implementations. This paper presents a methodology for automatic discovery of piecewise-deterministic (quasi-static) execution schedules for RVC-CAL program software implementations. Quasi-static scheduling moves computational burden from the implementable run-time system to design-time compilation and thus enables making signal processing systems more efficient. The presented methodology divides the RVC-CAL program into segments and hierarchically detects quasi-static behavior from each segment: first at the level of actors and later at the level of the whole segment. Finally, a code generator creates a quasi-statically scheduled version of the program. The impact of segment based quasi-static scheduling is demonstrated by applying the methodology to several RVC-CAL programs that execute up to 58 % faster after applying the presented methodology.
Research output: Contribution to journal › Article › Scientific › peer-review
The difficulty of learning tasks is a major factor in learning, as is the feedback given to students. Even automatic feedback should ideally be influenced by student-dependent factors such as task difficulty. We report on a preliminary exploration of such indicators of programming assignment difficulty that can be automatically detected for each student from source code snapshots of the student's evolving code. Using a combination of different metrics emerged as a promising approach. In the future, our results may help provide students with personalized automatic feedback.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this paper, a contiguous carrier aggregation scheme for the downlink transmissions in an inband full-duplex cellular network is analyzed. In particular, we consider a scenario where the base station transmits over a wider bandwidth than the mobiles, while both parties still use the same center frequency. As a result, the mobiles must cancel their own self-interference (SI) over a wider bandwidth, compared to a situation where the uplink and downlink frequency bands are symmetric. Furthermore, due to the inherent RF impairments in the mobile devices, nonlinear modeling of the self-interference is required in the digital domain to fully cancel it over the whole reception bandwidth. The feasibility of the proposed scheme is demonstrated with real-life RF measurements, using two different bandwidths. In both of these cases, it is shown that the SI can be attenuated below the receiver noise floor over the whole reception bandwidth.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
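The measurement setup is specific to the paper. To illustrate the principle of nonlinear digital self-interference cancellation only, the sketch below models the self-interference with a memoryless odd-order polynomial whose coefficients are estimated by least squares from the known transmit signal and then subtracted from the received signal. This is a common baseline model chosen here for illustration, not necessarily the model employed in the paper, and the signals are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def si_basis(x, max_order=5):
    """Odd-order memoryless polynomial basis x * |x|^(2k), k = 0, 1, 2."""
    return np.column_stack([x * np.abs(x) ** (2 * k)
                            for k in range((max_order + 1) // 2)])

# Known transmitted baseband signal and a synthetic received signal consisting
# of a nonlinearly distorted copy of it (the self-interference) plus noise.
n = 10_000
x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
si = 1.0 * x + 0.05 * x * np.abs(x) ** 2 + 0.005 * x * np.abs(x) ** 4
y = si + 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Least-squares estimate of the basis coefficients, then cancellation.
A = si_basis(x)
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
residual = y - A @ coeffs

db = lambda p: 10 * np.log10(p)
suppression = db(np.mean(np.abs(y) ** 2)) - db(np.mean(np.abs(residual) ** 2))
print(f"self-interference suppressed by {suppression:.1f} dB")
```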
Artificial Intelligence (AI) is one of the current emerging technologies. In the history of computing, AI has been in a similar role earlier, almost every decade since the 1950s, when the programming language Lisp was invented and used to implement self-modifying applications. The second time that AI was described as one of the frontier technologies was in the 1970s, when Expert Systems (ES) were developed. A decade later AI was again at the forefront when the Japanese government initiated its research and development effort to develop an AI-based computer architecture called the Fifth Generation Computer System (FGCS). Currently, in the 2010s, AI is again at the frontier in the form of (self-)learning systems manifesting in robot applications, smart hubs, intelligent data analytics, etc. What is the reason for the cyclic reincarnation of AI? This paper gives a brief description of the history of AI and also answers the question above. The current AI “cycle” has the capability to change the world in many ways. In the context of the CE conference, it is important to understand the changes it will cause in education, in the skills expected in different professions, and in society at large.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Large-scale perturbation databases, such as ConnectivityMap (CMap) or Library of Integrated Network-based Cellular Signatures (LINCS), provide enormous opportunities for computational pharmacogenomics and drug design. A reason for this is that in contrast to classical pharmacology focusing at one target at a time, the transcriptomics profiles provided by CMap and LINCS open the door for systems biology approaches on the pathway and network level. In this article, we provide a review of recent developments in computational pharmacogenomics with respect to CMap and LINCS and related applications.
Research output: Contribution to journal › Article › Scientific › peer-review
Background. Architectural smells and code smells are symptoms of bad code or design that can cause different quality problems, such as faults, technical debt, or difficulties with maintenance and evolution. Some studies show that code smells and architectural smells often appear together in the same file. The correlation between code smells and architectural smells, however, is not clear yet; some studies on a limited set of projects have claimed that architectural smells can be derived from code smells, while other studies claim the opposite. Objective. The goal of this work is to understand whether architectural smells are independent from code smells or can be derived from a code smell or from one category of them. Method. We conducted a case study analyzing the correlations among 19 code smells, six categories of code smells, and four architectural smells. Results. The results show that architectural smells are correlated with code smells only in a very low number of occurrences and therefore cannot be derived from code smells. Conclusion. Architectural smells are independent from code smells, and therefore deserve special attention by researchers, who should investigate their actual harmfulness, and practitioners, who should consider whether and when to remove them.
Research output: Contribution to journal › Article › Scientific › peer-review
The Liquid Software metaphor refers to software that can operate seamlessly across multiple devices owned by one or multiple users. Liquid Software applications can take advantage of the computing, storage and communication resources available on all the devices owned by the user. Liquid Software applications can also dynamically migrate from one device to another, following the user’s attention and usage context. The key design goal in Liquid Software development is to minimize the additional efforts arising from multiple device ownership (e.g., installation, synchronization and general maintenance of personal computers, smartphones, tablets, home and car displays, and wearable devices), while keeping the users in full control of their devices, applications and data. In this paper we present the design space for Liquid Software, categorizing and discussing the most important architectural dimensions and technical choices. We also provide an introduction and comparison of two frameworks implementing Liquid Software capabilities in the context of the World Wide Web.
EXT="Mikkonen, Tommi"
EXT="Taivalsaari, Antero"
Research output: Contribution to journal › Article › Scientific › peer-review
A diversity of wireless technologies will collaborate to support the fifth-generation (5G) communication networks with their demanding applications and services. Despite decisive progress in many enabling solutions, next-generation cellular deployments may still suffer from a glaring lack of bandwidth due to inefficient utilization of radio spectrum, which calls for immediate action. To this end, several capable frameworks have recently emerged to all help the mobile network operators (MNOs) leverage the abundant frequency bands that are utilized lightly by other incumbents. Along these lines, the recent Licensed Shared Access (LSA) regulatory framework allows for controlled sharing of spectrum between an incumbent and a licensee, such as the MNO, which coexist geographically. This powerful concept has been subject to several early technology demonstrations that confirm its implementation feasibility. However, the full potential of LSA-based spectrum management can only become available if it is empowered to operate dynamically and at high space-time-frequency granularity. Complementing the prior efforts, we in this work outline the functionality that is required by the LSA system to achieve the much needed flexible operation as well as report on the results of our respective live trial that employs a full-fledged commercial-grade cellular network deployment. Our practical results become instrumental to facilitate more dynamic bandwidth sharing and thus promise to advance on the degrees of spectrum utilization in future 5G systems without compromising the service quality of their users.
Research output: Contribution to journal › Article › Scientific › peer-review
Open Source Software development often resembles Agile models. In this paper, we report on our experience of using SCRUM for the development of an Open Source Software Java tool. With this work, we aim at answering the following research questions: 1) is it possible to switch successfully to the SCRUM methodology in an ongoing Open Source Software development process? 2) is it possible to apply SCRUM when the developers are geographically distributed? 3) does SCRUM help improve the quality of the product and the productivity of the process? We answer these questions by identifying a set of measures and by comparing the data we collected before and after the introduction of SCRUM. The results seem to show that SCRUM can be introduced