In 2013, Li and Jin studied a particular type of fuzzy relation equations on finite sets, where the introduced min-bi-implication composition is based on the Łukasiewicz equivalence. In this paper such fuzzy relation equations are studied on a more general level: complete residuated lattice valued fuzzy relation equations of the type ⋀y∈Y(A(x,y)↔X(y))=B(x) are analyzed, and the existence of solutions is studied. First a necessary condition for the existence of a solution is established, then conditions for lower and upper limits of solutions are given, and finally sufficient conditions for the existence of the smallest and largest solutions, respectively, are characterized. If such general or global solutions do not exist, there might still be partial or pointwise solutions; this is a novel way to study fuzzy relation equations. Such pointwise solutions are studied on Łukasiewicz, product and Gödel t-norm based residuated lattices on the real unit interval.
Research output: Contribution to journal › Article › Scientific › peer-review
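As a concrete illustration of the equation above, the following minimal Python sketch checks pointwise solvability on the Łukasiewicz residuated lattice on [0, 1], where the biimplication is a↔b = 1 − |a − b|. The relation A, candidate X and right-hand side B are illustrative assumptions, not data from the paper.

```python
# Minimal sketch: checking whether X solves
#   min_{y in Y} (A(x, y) <-> X(y)) = B(x)
# at a single point x, on the Lukasiewicz residuated lattice on [0, 1].

def luk_equiv(a, b):
    """Lukasiewicz biimplication on [0, 1]."""
    return 1.0 - abs(a - b)

def solves_at(A, X, B, x, tol=1e-9):
    """True if X satisfies the equation at the point x (a pointwise solution)."""
    lhs = min(luk_equiv(A[x][y], X[y]) for y in range(len(X)))
    return abs(lhs - B[x]) <= tol

A = [[0.8, 0.3], [0.5, 1.0]]   # fuzzy relation A(x, y), illustrative
B = [0.7, 0.6]                 # right-hand side B(x), illustrative
X = [0.5, 0.3]                 # candidate solution X(y), illustrative

for x in range(len(B)):
    print(f"x = {x}: pointwise solution -> {solves_at(A, X, B, x)}")
```

Here X solves the equation at x = 0 but not at x = 1, so it is a pointwise solution without being a global one, which is exactly the distinction the paper exploits.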
The partial Hosoya polynomial (or briefly the partial H-polynomial) can be used to construct the well-known Hosoya polynomial. The ith coefficient of this polynomial, defined for an arbitrary vertex u of a graph G, is the number of vertices at distance i from u. The aim of this paper is to determine the partial H-polynomial of several well-known graphs and, then, to investigate the location of their zeros. To this end, we characterize the structure of graphs with the minimum and the maximum modulus of the zeros of the partial H-polynomial. Finally, we define another graph polynomial based on the partial H-polynomial, see [9]. Also, we determine the unique positive root of this polynomial for particular graphs.
Research output: Contribution to journal › Article › Scientific › peer-review
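The coefficient description above translates directly into a breadth-first search. A minimal sketch, assuming the convention that the distance-0 term is included (conventions differ on this point) and using a 5-cycle as an illustrative example graph:

```python
# Sketch: coefficients of the partial Hosoya polynomial H(G, u; x),
# where the i-th coefficient counts the vertices at distance i from u.
from collections import deque

def partial_h_coefficients(adj, u):
    """BFS from u; returns [c_0, c_1, ...] with c_i = #vertices at distance i."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    coeffs = [0] * (max(dist.values()) + 1)
    for d in dist.values():
        coeffs[d] += 1
    return coeffs

cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(partial_h_coefficients(cycle5, 0))   # [1, 2, 2]: one vertex at distance 0, two at 1, two at 2
```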
Let S={x1,x2,…,xn} be a finite set of distinct positive integers. Throughout this article we assume that the set S is GCD closed. The LCM matrix [S] of the set S is defined to be the n×n matrix with lcm(xi,xj) as its ij element. The famous Bourque-Ligh conjecture stated that the LCM matrix of a GCD closed set S is always invertible, but it is now a well-known fact that any nontrivial LCM matrix is indefinite and under the right circumstances it can even be singular (even if the set S is assumed to be GCD closed). However, not much more is known about the inertia of LCM matrices in general. The ultimate goal of this article is to improve this situation. Assuming that S is a meet closed set we define an entirely new lattice-theoretic concept by saying that an element xi∈S generates a double-chain set in S if the set meetcl(CS(xi))∖CS(xi) can be expressed as a union of two disjoint chains (here the set CS(xi) consists of all the elements of the set S that are covered by xi and meetcl(CS(xi)) is the smallest meet closed subset of S that contains the set CS(xi)). We then proceed by studying the values of the Möbius function on sets in which every element generates a double-chain set and use the properties of the Möbius function to explain why the Bourque-Ligh conjecture holds in so many cases and fails in certain very specific instances. After that we turn our attention to the inertia and see that in some cases it is possible to determine the inertia of an LCM matrix simply by looking at the lattice-theoretic structure of (S,|) alone. Finally, we show how to construct LCM matrices in which the majority of the eigenvalues are either negative or positive.
Research output: Contribution to journal › Article › Scientific › peer-review
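To make the central objects tangible, the sketch below builds the LCM matrix of a small GCD-closed set and reads off its inertia numerically. The set S is an illustrative example, not one taken from the article.

```python
# Sketch: the LCM matrix of a GCD-closed set and its inertia
# (numbers of positive, zero and negative eigenvalues).
import numpy as np
from math import lcm

S = [1, 2, 3, 4, 6, 12]       # GCD closed: the gcd of any two members is again in S
M = np.array([[lcm(a, b) for b in S] for a in S], dtype=float)

eigs = np.linalg.eigvalsh(M)  # M is symmetric, so eigvalsh applies
tol = 1e-9
inertia = (int((eigs > tol).sum()),
           int((np.abs(eigs) <= tol).sum()),
           int((eigs < -tol).sum()))
print("eigenvalues:", np.round(eigs, 3))
print("inertia (n+, n0, n-):", inertia)
```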
Holography is usually considered the ultimate way to visually reproduce a three-dimensional scene. Computer-generated holography constitutes an important branch of holography, which enables visualization of artificially generated scenes as well as real three-dimensional scenes recorded under white-light illumination. In this article, we present a comprehensive survey of methods for the synthesis of computer-generated holograms, classifying them into two broad categories: wavefront-based methods and ray-based methods. We examine their modern implementations in terms of reconstruction quality and computational efficiency. As speckle suppression is an integral part of computer-generated holography, we devote a special section to it; that discussion is likewise organized into two categories following the classification of the underlying computer-generated hologram methods.
Research output: Contribution to journal › Review Article › Scientific › peer-review
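As a toy example of the wavefront-based category, the sketch below superposes spherical waves from point sources on a hologram plane and interferes the result with a reference wave. Wavelength, geometry and sampling are arbitrary assumptions; practical methods add look-up tables, wavefront recording planes and other accelerations of the kind surveyed in the article.

```python
# Toy wavefront-based CGH sketch: the object is a set of point sources and the
# hologram-plane field is the superposition of their spherical waves.
import numpy as np

wavelength = 633e-9                      # metres (HeNe red, an assumption)
k = 2 * np.pi / wavelength
pitch, n = 8e-6, 512                     # hologram pixel pitch and resolution
coords = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(coords, coords)

points = [(0.0, 0.0, 0.05), (1e-4, -2e-4, 0.06)]   # (x, y, z) point sources
field = np.zeros((n, n), dtype=complex)
for (px, py, pz) in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    field += np.exp(1j * k * r) / r      # spherical wave, amplitude ~ 1/r

ref = np.abs(field).mean()               # on-axis reference of comparable amplitude
hologram = np.abs(field + ref) ** 2      # recorded interference pattern
```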
In this chapter, we motivate the use of densely-sampled light fields as the representation which can bring the required density of light rays for the correct recreation of 3D visual cues such as focus and continuous parallax and can serve as an intermediary between light field sensing and light field display. We consider the problem of reconstructing such a representation from few camera views and approach it in a sparsification framework. More specifically, we demonstrate that the light field is well structured in the set of so-called epipolar images and can be sparsely represented by a dictionary of directional and multi-scale atoms called shearlets. We present the corresponding regularization method, along with its main algorithm and speed-accelerating modifications. Finally, we illustrate its applicability for the cases of holographic stereograms and light field compression.
Research output: Chapter in Book/Report/Conference proceeding › Chapter › Scientific › peer-review
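A minimal sketch of the sparse-regularization idea, with an orthonormal 2-D DCT standing in for the shearlet dictionary, a random mask standing in for the available views, and plain ISTA standing in for the chapter's accelerated algorithm; all parameters are illustrative assumptions.

```python
# Generic ISTA for reconstructing a subsampled "epipolar image" under a
# synthesis-sparsity prior: min_a 0.5*||y - M*Psi(a)||^2 + lam*||a||_1.
import numpy as np
from scipy.fft import dctn, idctn

def soft(a, t):
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

rng = np.random.default_rng(0)
coeffs = np.where(rng.random((64, 64)) < 0.05, rng.normal(size=(64, 64)), 0.0)
truth = idctn(coeffs, norm="ortho")     # synthetic image, sparse in the DCT domain
mask = rng.random(truth.shape) < 0.4    # observed samples (stand-in for camera views)
y = mask * truth

lam, a = 0.01, np.zeros_like(truth)
for _ in range(200):                    # ISTA: gradient step + soft thresholding
    residual = y - mask * idctn(a, norm="ortho")
    a = soft(a + dctn(mask * residual, norm="ortho"), lam)

recon = idctn(a, norm="ortho")
print("relative error:", np.linalg.norm(recon - truth) / np.linalg.norm(truth))
```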
The light field and holographic displays constitute two important categories of advanced three-dimensional displays that are aimed at delivering all physiological depth cues of the human visual system, such as stereo cues, motion parallax, and focus cues, with sufficient accuracy. As human observers are the end-users of such displays, the delivered spatial information (e.g., perceptual spatial resolution) and view-related image quality factors (e.g., focus cues) are significantly dependent on the characteristics of the human visual system. Retinal image formation models enable rigorous characterization and subsequently efficient design of light field and holographic displays. In this chapter the ray-based near-eye light field and wave-based near-eye holographic displays are reviewed, and the corresponding retinal image formation models are discussed. In particular, most of the discussion is devoted to characterization of the perceptual spatial resolution and focus cues.
Research output: Chapter in Book/Report/Conference proceeding › Chapter › Scientific › peer-review
We develop a game-theoretic semantics (GTS) for the fragment ATL+ of the alternating-time temporal logic ATL⁎, thereby extending the recently introduced GTS for ATL. We show that the game-theoretic semantics is equivalent to the standard compositional semantics of ATL+ with perfect-recall strategies. Based on the new semantics, we provide an analysis of the memory and time resources needed for model checking ATL+ and show that strategies of the verifier that use only a very limited amount of memory suffice. Furthermore, using the GTS, we provide a new algorithm for model checking ATL+ and identify a natural hierarchy of tractable fragments of ATL+ that substantially extend ATL.
Research output: Contribution to journal › Article › Scientific › peer-review
In the field of child-robot interaction (CRI), long-term field studies with users in authentic contexts are still rare. This paper reports the findings from a 4-month field study of robot-assisted language learning (RALL). We focus on the learning experiences of primary school pupils with a social, persuasive robot, and on the teachers’ experiences of using the robot as a teaching tool. Our qualitative research approach includes interviews, observations, questionnaires and a diary as data collection methods, and an affinity diagram as a data analysis method. The research involves three target groups: the pupils of a 3rd grade class (9–10 years old, n = 20), language teachers (n = 3) and the parents (n = 18). We report findings on user experience (UX), the robot’s tasks and role in the school, and the experience of the multimodal interaction with the robot. Based on the findings, we discuss several aspects concerning the design of persuasive robotics for robot-assisted learning and CRI, for example the benefits of robot-specific ways of rewarding, the value of the physical embodiment and the opportunities of the social role adopted by the learning robot.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Due to potential dynamic interactions among dc microgrid power converters, the performance of some of their control loops can vary from the designed behavior. Thus, online monitoring of different control loops within a dc microgrid power converter is highly desirable. This paper proposes the simultaneous identification of several control loops within dc microgrid power converters, by injecting orthogonal pseudo-random binary sequences (PRBSs), and measuring all the loop gains in one measurement cycle. The identification results can be used for different purposes such as controller autotuning, impedance shaping, etc. Herein, an example of output impedance estimation and shaping based on locally-measured loop gains is presented. The proposed identification technique and its application in output impedance shaping are validated on an experimental dc microgrid prototype, composed of three droop-controlled power converters.
Research output: Contribution to journal › Article › Scientific › peer-review
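The measurement principle can be sketched in a few lines: excite the loop with a small maximal-length PRBS and form a nonparametric frequency-response estimate over one sequence period. In the paper, orthogonal PRBSs let several loops be identified simultaneously; the sketch below uses a single sequence and a stand-in first-order plant, both illustrative assumptions.

```python
# Sketch: maximal-length PRBS excitation and an empirical transfer function
# estimate over one period. The "loop" is a stand-in first-order IIR filter.
import numpy as np

def prbs(order=10, taps=(10, 7)):
    """Maximal-length +/-1 sequence from a Fibonacci LFSR, period 2**order - 1."""
    state = [1] * order
    out = []
    for _ in range(2 ** order - 1):
        bit = state[taps[0] - 1] ^ state[taps[1] - 1]
        out.append(1.0 if state[-1] else -1.0)
        state = [bit] + state[:-1]
    return np.array(out)

u = 0.05 * prbs()                  # small-amplitude injection signal
y = np.zeros_like(u)
for n in range(1, len(u)):         # stand-in plant: y[n] = 0.9*y[n-1] + 0.1*u[n]
    y[n] = 0.9 * y[n - 1] + 0.1 * u[n]

G = np.fft.rfft(y) / np.fft.rfft(u)   # empirical frequency-response estimate
print(np.round(np.abs(G[:5]), 3))     # low-frequency gain approaches the DC gain 1.0
```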
This paper presents some lesser-known details about the work of Yasuo Komamiya on the development of the first relay computers using the theory of computing networks, based on the earlier work of Oohashi Kan-ichi and Mochinori Goto at the Electrotechnical Laboratory (ETL) of the Agency of Industrial Science and Technology, Tokyo, Japan. The work at ETL in this direction was performed under the guidance of Mochinori Goto.
EXT="Stanković, Radomir S."
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
For video applications, tiled streaming is a popular way to deliver viewport-dependent 360-degree video. Unfortunately, dynamic adaptation of such video streams to network bandwidth fluctuations is still a challenge. This paper proposes a method for managing graceful quality degradation in a controlled way in DASH-based streaming systems that deliver omnidirectional video. The method is enabled by the signaling of tile priority maps, which reduce the impact of graceful degradation on the users’ Quality of Experience (QoE). Simulation results show that the method saves over 10% of bandwidth during viewport-dependent streaming without a significant QoE penalty compared to a system that does not use this technique. Furthermore, the presented method improves flexibility from the service provider’s standpoint.
INT=comp,"Monakhov, Dmitrii"
JUFOID=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper studies vehicle attribute recognition by appearance. In the literature, image-based target recognition has been extensively investigated in many use cases, such as facial recognition, but less so in the field of vehicle attribute recognition. We survey a number of algorithms that identify vehicle properties ranging from coarse-grained level (vehicle type) to fine-grained level (vehicle make and model). Moreover, we discuss two alternative approaches for these tasks, including straightforward classification and a more flexible metric learning method. Furthermore, we design a simulated real-world scenario for vehicle attribute recognition and present an experimental comparison of the two approaches.
Research output: Contribution to journal › Article › Scientific › peer-review
A major challenge in modelling and simulation is the need to combine expertise in both software technologies and a given scientific domain. When High-Performance Computing (HPC) is required to solve a scientific problem, software development becomes a problematic issue. Considering the complexity of the software for HPC, it is useful to identify programming languages that can be used to alleviate this issue. Because the existing literature on the topic of HPC is very dispersed, we performed a Systematic Mapping Study (SMS) in the context of the European COST Action cHiPSet. This literature study maps characteristics of various programming languages for data-intensive HPC applications, including category, typical user profiles, effectiveness, and type of articles. We organised the SMS in two phases. In the first phase, relevant articles are identified employing an automated keyword-based search in eight digital libraries. This led to an initial sample of 420 papers, which was then narrowed down in a second phase by human inspection of article abstracts, titles and keywords to 152 relevant articles published in the period 2006–2018. The analysis of these articles enabled us to identify 26 programming languages referred to in 33 of the relevant articles. We compared the outcome of the mapping study with the results of our questionnaire-based survey that involved 57 HPC experts. The mapping study and the survey revealed that the desired features of programming languages for data-intensive HPC applications are portability, performance and usability. Furthermore, we observed that the majority of the programming languages used in the context of data-intensive HPC applications are text-based general-purpose programming languages. Typically these have a steep learning curve, which makes them difficult to adopt. We believe that the outcome of this study will inspire future research and development in programming languages for data-intensive HPC applications.
Research output: Contribution to journal › Article › Scientific › peer-review
We investigate the computational complexity of the satisfiability problem of modal inclusion logic. We distinguish two variants of the problem: one for the strict and another one for the lax semantics. Both problems turn out to be EXPTIME-complete on general structures. Finally, we show how, for a specific class of structures, NEXPTIME-completeness for these problems under strict semantics can be achieved.
Research output: Contribution to journal › Article › Scientific › peer-review
Propositional and modal inclusion logic are formalisms that belong to the family of logics based on team semantics. This article investigates the model checking and validity problems of these logics. We identify complexity bounds for both problems, covering both lax and strict team semantics. By doing so, we come close to finalizing the programme that aims to completely classify the complexities of the basic reasoning problems for modal and propositional dependence, independence and inclusion logics.
Research output: Contribution to journal › Article › Scientific › peer-review
In this paper, we extend and generalize the spectral theory of undirected networks towards directed networks by introducing the Hermitian normalized Laplacian matrix for directed networks. To begin, we discuss the Courant–Fischer theorem for the eigenvalues of the Hermitian normalized Laplacian matrix. Based on the Courant–Fischer theorem, we obtain a result analogous to the one for the normalized Laplacian matrix of undirected networks: every eigenvalue λi, i ∈ {1, 2,…, n}, of the Hermitian normalized Laplacian matrix lies in [0, 2]. Moreover, we prove special conditions under which 0 or 2 is an eigenvalue of the Hermitian normalized Laplacian matrix L(X). On top of that, we investigate the symmetry of the eigenvalues of L(X) and the edge version of the eigenvalue interlacing result. Finally we present two expressions for the coefficients of the characteristic polynomial of the Hermitian normalized Laplacian matrix. As an outlook, we sketch some novel and intriguing problems to which our apparatus could generally be applied.
Research output: Contribution to journal › Article › Scientific › peer-review
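A minimal numerical sketch, assuming one common Hermitian adjacency convention (entry 1 for a reciprocal edge pair, i for u→v only, −i for v→u only) and an illustrative toy digraph; the eigenvalues of the resulting normalized Laplacian indeed land in [0, 2].

```python
# Sketch: Hermitian adjacency and normalized Laplacian of a small digraph.
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (0, 2)]   # directed edges of a toy digraph
n = 3
H = np.zeros((n, n), dtype=complex)
for (u, v) in edges:
    if (v, u) in edges:
        H[u, v] = H[v, u] = 1.0            # reciprocal pair
    else:
        H[u, v], H[v, u] = 1j, -1j

deg = np.abs(H).sum(axis=1)                # degrees in the underlying graph
Dmh = np.diag(1.0 / np.sqrt(deg))
L = np.eye(n) - Dmh @ H @ Dmh              # Hermitian normalized Laplacian

eigs = np.linalg.eigvalsh(L)               # real, since L is Hermitian
print(np.round(eigs, 4),
      "all in [0, 2]:", bool((eigs >= -1e-9).all() and (eigs <= 2 + 1e-9).all()))
```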
We present a high-performance implementation of the lattice-Boltzmann method (LBM) on the Knights Landing generation of Xeon Phi. The Knights Landing architecture includes 16GB of high-speed memory (MCDRAM) with a reported bandwidth of over 400 GB/s, and a subset of the AVX-512 single instruction multiple data (SIMD) instruction set. We explain five critical implementation aspects for high performance on this architecture: (1) the choice of appropriate LBM algorithm, (2) suitable data layout, (3) vectorization of the computation, (4) data prefetching, and (5) running our LBM simulations exclusively from the MCDRAM. The effects of these implementation aspects on the computational performance are demonstrated with the lattice-Boltzmann scheme involving the D3Q19 discrete velocity set and the TRT collision operator. In our benchmark simulations of fluid flow through porous media, using double-precision floating-point arithmetic, the observed performance exceeds 960 million fluid lattice site updates per second.
Research output: Contribution to journal › Article › Scientific › peer-review
This paper presents an analysis of an efficient parallel implementation of the active-set Newton algorithm (ASNA), which is used to estimate the nonnegative weights of linear combinations of the atoms in a large-scale dictionary to approximate an observation vector by minimizing the Kullback–Leibler divergence between the observation vector and the approximation. The performance of ASNA has been proved in previous works against other state-of-the-art methods. The implementations analysed in this paper have been developed in C, using parallel programming techniques to obtain better performance in multicore architectures than the original MATLAB implementation. A hardware analysis is also performed to assess the influence of CPU frequency and the number of CPU cores on the different implementations proposed. The new implementations allow the ASNA algorithm to tackle real-time problems thanks to the obtained reduction in execution time.
Research output: Contribution to journal › Article › Scientific › peer-review
Structural properties of graphs and networks have been investigated across scientific disciplines ranging from mathematics to structural chemistry. Structural branching, cyclicity and, more generally, connectedness are well-known examples of such properties. In particular, various graph measures for detecting structural branching and cyclicity have been investigated. These measures are of limited applicability since their interpretation relies heavily on a certain definition of structural branching. In this paper we define a related measure, taking an approach to measurement similar to that of Lovász and Pelikán (On the eigenvalues of trees, Periodica Mathematica Hungarica, Vol. 3 (1–2), 1973, 175–182). We define a complex valued polynomial which also has a unique positive root. Analytical and numerical results demonstrate that this measure can be interpreted as a structural branching and cyclicity measure for graphs. Our results generalize the work of Lovász and Pelikán since the measure we introduce is not restricted to trees.
Research output: Contribution to journal › Article › Scientific › peer-review
ALMARVI is a collaborative European research project funded by Artemis, involving 16 industrial and academic partners across 4 countries, working together to address various computational challenges in image and video processing in 3 application domains: healthcare, surveillance and mobile. This paper is an editorial for a special issue discussing the integrated system created by the partners to serve as a cross-domain solution for the project. The paper also introduces the partner articles published in this special issue, which discuss the various technological developments achieved within ALMARVI spanning all system layers, from hardware to applications. We illustrate the challenges faced within the project based on use cases from the three targeted application domains, and show how these can address the 4 main project objectives, which target 4 challenges faced by high-performance image and video processing systems: massive data rate, low power consumption, composability and robustness. We present a system stack composed of algorithms, design frameworks and platforms as a solution to these challenges. Finally, the use cases from the three application domains are mapped onto the system stack solution and are evaluated based on their performance against each of the 4 ALMARVI objectives.
Research output: Contribution to journal › Article › Scientific › peer-review
The present paper illustrates that the game-based implementation of a learning task - here, training basic math skills - entails benefits with strings attached. We developed a game for learning math whose core element is based on the number line estimation task. In this task, participants have to indicate the position of a target number on a number line, which is thought to train basic numerical skills. Participants completed both the game on a mobile device and a conventional paper-pencil version of the task. They reported having significantly more fun using the game-based environment. However, they also made considerably higher estimation errors in the game compared to the paper-pencil version. In this case, more fun in a math-learning task was ultimately bought at the expense of lower reliability, namely lowered accuracy of estimations in the learning game. This fun-accuracy trade-off between adding elements for enjoyment and clarity of content is discussed together with the consequences for game design.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
We study a variant ATLFB of the alternating-time temporal logic ATL with a non-standard, ‘finitely bounded’ semantics (FBS). FBS was originally defined as a game-theoretic semantics where players must commit to time limits when attempting to verify eventuality (respectively, to falsify safety) formulae. It turns out that FBS has a natural corresponding compositional semantics that essentially evaluates formulae only on finite initial segments of paths and imposes uniform bounds on all plays for the fulfilment of eventualities. The resulting version ATLFB differs in some essential features from the standard ATL, as it no longer has the finite model property, though the two logics are equivalent on finite models. We develop two tableau systems for ATLFB. The first one deals with infinite sets of formulae and may run in a transfinite sequence of steps, whereas the second one deals only with finite sets of formulae in an extended language allowing explicit symbolic indication of time limits in formulae. We prove soundness and completeness of the infinitary tableau system and prove that it is equivalent to the finitary one. We also show that the finitary tableau system provides an exponential-time decision procedure for the satisfiability problem of ATLFB and thus establishes its EXPTIME-completeness. Furthermore, we present an infinitary axiomatization for ATLFB and prove its soundness and completeness.
Research output: Contribution to journal › Article › Scientific › peer-review
The recently standardized millimeter wave-based 3GPP New Radio technology is expected to become an enabler for both enhanced Mobile Broadband (eMBB) and ultra-reliable low latency communication (URLLC) services specified for future 5G systems. One of the first steps in the mathematical modeling of such systems is the characterization of the session resource request probability mass function (pmf) as a function of the channel conditions, cell size, application demands, user location and system parameters, including the modulation and coding schemes employed at the air interface. Unfortunately, this pmf cannot be expressed via elementary functions. In this paper, we develop an accurate approximation of the sought pmf. First, we show that the Normal distribution provides a fairly accurate approximation to the cumulative distribution function (CDF) of the signal-to-noise ratio for communication systems operating in the millimeter frequency band, which further allows evaluating the resource request pmf via the error function. We also investigate the impact of shadow fading on the resource request pmf.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
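The approximation step can be sketched as follows: if the SNR is approximately Normal(μ, σ²), the probability that a session requests a given number of resource units is the Normal mass between consecutive MCS SNR thresholds, evaluated with the error function. The thresholds, resource counts and (μ, σ) below are illustrative assumptions, not values from the paper.

```python
# Sketch: resource request pmf from a Normal approximation of the SNR CDF.
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

snr_thresholds = [-5.0, 0.0, 5.0, 10.0, 15.0]   # dB boundaries of MCS regions
resources = [8, 4, 2, 1]                         # resource units needed per region
mu, sigma = 6.0, 4.0                             # Normal approximation of SNR, dB

pmf = {}
for j, r in enumerate(resources):
    p = normal_cdf(snr_thresholds[j + 1], mu, sigma) - normal_cdf(snr_thresholds[j], mu, sigma)
    pmf[r] = pmf.get(r, 0.0) + p
print(pmf)   # mass outside the outer thresholds (outage / best MCS) is omitted here
```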
The number of Unmanned Aerial Vehicle (UAV) applications is growing tremendously. The most critical ones are operations in use cases such as natural disasters and search and rescue activities. Many of these operations are performed over water. A standalone niche covering autonomous UAV operation is thus becoming increasingly important. One of the crucial parts of such operations is a technology capable of landing an autonomous UAV on a moving surface vessel. This would not be possible without precise UAV positioning. However, conventional strategies that rely on satellite localization may not always be reliable, due to scenario specifics. Therefore, the development of an independent precise landing technology is essential. In this paper, we develop a localization and landing system based on the Gauss-Newton method, which achieves the required localization accuracy.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
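A minimal Gauss-Newton sketch for range-based localization, assuming known anchor positions and noisy range measurements (all values illustrative): each iteration linearizes the predicted ranges h(x) and solves the least-squares update x ← x + (JᵀJ)⁻¹Jᵀ(y − h(x)).

```python
# Gauss-Newton for range-based localization of a single point (e.g., a UAV).
import numpy as np

anchors = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.5],
                    [0.0, 10.0, 0.5], [10.0, 10.0, 0.0]])
true_pos = np.array([3.0, 4.0, 2.0])
rng = np.random.default_rng(1)
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.02, len(anchors))

x = np.array([5.0, 5.0, 1.0])                     # initial guess
for _ in range(10):
    diffs = x - anchors
    dists = np.linalg.norm(diffs, axis=1)
    r = ranges - dists                            # residuals y - h(x)
    J = diffs / dists[:, None]                    # Jacobian of the predicted ranges
    step, *_ = np.linalg.lstsq(J, r, rcond=None)  # solves J @ step ≈ r
    x = x + step
print("estimate:", np.round(x, 3))                # converges to ~[3, 4, 2]
```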
The prospective roll-out of the recently standardized New Radio (NR) systems operating in the millimeter wave frequency band poses unique challenges to network engineers. In this context, the support of NR-based vehicle-to-infrastructure communications is of special interest due to the potentially high speeds of user equipment (UE) and the semi-stochastic dynamic blockage of propagation paths between the UE and the NR base station (BS). In these conditions, even the use of advanced NR functionalities such as multiconnectivity, which supports active connections to multiple BSs located nearby, may not fully eliminate outages. Thus, to preserve session continuity for UEs located on vehicles, a degree of LTE support might be required. In this paper, we quantify the amount of LTE support required to maintain session continuity in street deployments of NR systems supporting multiconnectivity capabilities. In particular, we demonstrate that it is heavily affected by the traffic conditions, the inter-site distance between NR BSs and the degree of multiconnectivity.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Designing attractive services for the bus travel context is important for increasing the usage of sustainable travel modes of public transportation. In bus travel, both the user experience of the digital services and the broader service design context of the public transportation need to be addressed. Experience-Driven Design (EDD) can be used to place the passengers’ needs and experiences at the core of the design process. This paper presents a qualitative diary and interview study on bus travel experience with 20 passengers in two major cities in Finland. The aim of this study was to identify and communicate frequent bus passengers’ needs, experiences, values and activities as user insights to support experience-driven service design in the public transportation context. Based on the data analysis, we derived ten Travel Mindsets: Abstracted, Efficient, Enjoyer, In-control, Isolation, Observer, Off-line, Relaxed, Sensitive, and Social. To communicate the study findings on bus passengers’ travel experience, Travel Experience Personas were created. The personas include primary and secondary travel mindsets, specific needs related to bus travel, insights on mobile device usage, and target user experience (UX) goals that could enhance the personas’ travel experience. We also discuss how the personas can be used as a communicative design tool that supports EDD of novel services in the bus context.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this paper, we present a high data rate implementation of a digital predistortion (DPD) algorithm on a modern mobile multicore CPU containing an on-chip GPU. The proposed implementation is capable of running in real-time, thanks to the execution of the predistortion stage inside the GPU, and the execution of the learning stage on a separate CPU core. This configuration, combined with the low complexity DPD design, allows for more than 400 Msamples/s sample rates. This is sufficient for satisfying 5G new radio (NR) base station radio transmission specifications in the sub-6 GHz bands, where signal bandwidths up to 100 MHz are specified. The linearization performance is validated with RF measurements on two base station power amplifiers at 3.7 GHz, showing that the 5G NR downlink emission requirements are satisfied.
INT=comp,"Meirhaeghe, Alexandre"
Research output: Contribution to journal › Article › Scientific › peer-review
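The predistortion stage can be illustrated with a generic memory-polynomial model, a common DPD structure; the orders, memory depth and coefficients below are illustrative assumptions, not the low-complexity design used in the paper.

```python
# Generic memory-polynomial predistorter sketch:
#   y[n] = sum_{k in {1,3,5}} sum_m a[k, m] * x[n - m] * |x[n - m]|^(k - 1)
import numpy as np

def memory_polynomial(x, coeffs):
    """coeffs[k_idx, m] for nonlinearity orders (1, 3, 5) and memory taps m."""
    orders = (1, 3, 5)
    y = np.zeros_like(x, dtype=complex)
    for k_idx, k in enumerate(orders):
        for m in range(coeffs.shape[1]):
            xm = np.roll(x, m)
            xm[:m] = 0.0                      # zero-fill instead of wrap-around
            y += coeffs[k_idx, m] * xm * np.abs(xm) ** (k - 1)
    return y

rng = np.random.default_rng(0)
x = (rng.normal(size=4096) + 1j * rng.normal(size=4096)) / np.sqrt(2)  # stand-in baseband signal
coeffs = np.array([[1.0, 0.05], [-0.1, 0.01], [0.02, 0.0]], dtype=complex)
y = memory_polynomial(x, coeffs)
```

In the split described above, such a per-sample stage would run on the GPU, while a separate CPU core periodically re-estimates the coefficients (the learning stage).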
We investigate the decidability of the emptiness problem for three classes of distributed automata. These devices operate on finite directed graphs, acting as networks of identical finite-state machines that communicate in an infinite sequence of synchronous rounds. The problem is shown to be decidable in LOGSPACE for a class of forgetful automata, where the nodes see the messages received from their neighbors but cannot remember their own state. When restricted to the appropriate families of graphs, these forgetful automata are equivalent to classical finite word automata, but strictly more expressive than finite tree automata. On the other hand, we also show that the emptiness problem is undecidable in general. This already holds for two heavily restricted classes of distributed automata: those that reject immediately if they receive more than one message per round, and those whose state diagram must be acyclic except for self-loops. Additionally, to demonstrate the flexibility of distributed automata in simulating different models of computation, we provide a characterization of constraint satisfaction problems by identifying a class of automata with exactly the same computational power.
Research output: Contribution to journal › Article › Scientific › peer-review
Future 5G New Radio (NR) systems are expected to support both multicast and unicast traffic. However, these traffic types require principally different NR system parameters. In particular, the area covered by a single antenna configuration needs to be maximized when serving multicast traffic in order to use system resources efficiently. This prevents the system from using the maximum allowed number of antenna elements, decreasing the inter-site distance between NR base stations. In this paper, we formulate a model of an NR system with multi-connectivity capability serving a mixture of unicast and multicast traffic types. We show that multi-connectivity enables a trade-off between new and ongoing session drop probabilities for both unicast and multicast traffic types. Furthermore, supporting just two simultaneously active links suffices to exploit most of the gains, and the value of adding further links is negligible. We also show that the service specifics implicitly prioritize multicast sessions over unicast ones. If one needs to achieve a balance between unicast and multicast session drop probabilities, an explicit prioritization mechanism is needed at NR base stations.
EXT="Pyattaev, Alexander"
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper reports results from an ongoing project that aims to develop a digital game for introducing fractions to young children. In the current study, third-graders played the Number Trace Fractions prototype, in which they estimated fraction locations and compared fraction magnitudes on a number line. The intervention consisted of five 30-minute playing sessions. Conceptual fraction knowledge was assessed with a paper-based pre- and posttest. Additionally, after the intervention, students’ fraction comparison strategies were explored with game-based comparison tasks including self-explanation prompts. The results support previous findings indicating that game-based interventions emphasizing fraction magnitudes improve students’ performance in conceptual fraction tasks. Nevertheless, the results revealed that in spite of clear improvement, many students tended to use incorrect fraction magnitude comparison strategies after the intervention. It seems that the game mechanics and the feedback that the game provided did not sufficiently support the conceptual change processes of students with low prior knowledge, and common fraction misconceptions persisted. Based on these findings we further developed the game and extended it with physical manipulatives. The aim of this extension is to help students overcome misconceptions about fraction magnitude by physically interacting with manipulatives.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Human body parsing remains a challenging problem in natural scenes due to multi-instance and inter-part semantic confusions as well as occlusions. This paper proposes a novel approach to decomposing multiple human bodies into semantic part regions in unconstrained environments. Specifically, we propose a convolutional neural network (CNN) architecture which comprises novel semantic and contour attention mechanisms across the feature hierarchy to resolve the semantic ambiguities and boundary localization issues related to semantic body parsing. We further propose to encode the estimated pose as higher-level contextual information, which is combined with local semantic cues in a novel graphical model in a principled manner. In this model, the lower-level semantic cues can be recursively updated by propagating higher-level contextual information from the estimated pose, and vice versa across the graph, so as to alleviate erroneous pose information and pixel-level predictions. We further propose an optimization technique to efficiently derive the solutions. Our proposed method achieves state-of-the-art results on the challenging Pascal Person-Part dataset.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Organizations often adopt enterprise architecture (EA) when planning how best to develop their information technology (IT) or businesses, for strategic management, or generally for managing change initiatives. This variety of uses affects many stakeholders within and between organizations. Because stakeholders have dissimilar backgrounds, positions, assumptions, and activities, they respond differently to changes and to the potential problems that emerge from those changes. This situation creates contradictions and conflicts between stakeholders that may further influence project activities and ultimately determine how EA is adopted. In this paper, we examine how institutional pressures influence EA adoption. Based on a qualitative study of two cases, we show how regulative, normative, and cognitive pressures influence stakeholders’ activities and behaviors during the process of EA adoption. Our contribution thus lies in identifying the roles of institutional pressures in different phases of EA adoption and how their influence changes over time. The results provide insights into EA adoption and the process of institutionalization, which help to explain emergent challenges in EA adoption.
EXT="Dang, Duong"
Research output: Contribution to journal › Article › Scientific › peer-review
In 5G networks we expect femtocells, mmWave and D2D communications to take over from the more typical long-range cellular architectures with pre-planned radio resources. However, as the connection length between the nodes becomes shorter, locating feasible, non-interfering combinations of the links becomes more and more difficult. In this paper a new approach to this problem is presented. In particular, through guided heuristic search it is possible to locate non-interfering combinations of wireless connections in a highly effective manner. The approach enables operators to deploy centralized scheduling solutions for emerging technologies such as network-assisted WiFi-Direct, LTE Direct and others, especially those which lack efficient medium arbitration mechanisms.
EXT="Pyattaev, Alexander"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
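The combinatorial core of the problem, picking a conflict-free set of links, can be sketched as independent-set selection in a conflict graph. The greedy, utility-ordered heuristic below merely stands in for the paper's guided search; the links, utilities and conflicts are illustrative assumptions.

```python
# Sketch: choose a set of links with no pairwise interference conflicts.
links = {"a": 5.0, "b": 4.0, "c": 3.5, "d": 2.0}   # link -> utility (e.g., rate)
conflicts = {("a", "b"), ("b", "c"), ("a", "d")}    # pairs that interfere

def conflict(u, v):
    return (u, v) in conflicts or (v, u) in conflicts

chosen = []
for link in sorted(links, key=links.get, reverse=True):   # best utility first
    if all(not conflict(link, c) for c in chosen):
        chosen.append(link)
print(chosen, "total utility:", sum(links[l] for l in chosen))   # ['a', 'c'] 8.5
```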
Image-to-image translation is a general name for a task where an image from one domain is converted to a corresponding image in another domain, given sufficient training data. Traditionally different approaches have been proposed depending on whether aligned image pairs or two sets of (unaligned) examples from both domains are available for training. While paired training samples might be difficult to obtain, the unpaired setup leads to a highly under-constrained problem and inferior results. In this paper, we propose a new general purpose image-to-image translation model that is able to utilize both paired and unpaired training data simultaneously. We compare our method with two strong baselines and obtain both qualitatively and quantitatively improved results. Our model outperforms the baselines also in the case of purely paired and unpaired training data. To our knowledge, this is the first work to consider such hybrid setup in image-to-image translation.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Gaming is acknowledged as a natural way of learning and established as a mainstream activity. Nevertheless, gaming performance and subjective game experience have hardly been examined across adult age groups for which a game was not originally intended. In contrast to serious games as specific tools against the natural, age-related decline in cognitive performance, we evaluated performance and subjective experiences of the established math learning game Semideus across three age groups from 19 to 79. The observed declines in performance in terms of processing speed were not exclusively predicted by age, but also by gaming frequency. The strongest age-related drops in processing speed were found for the middle-aged group, aged 35 to 59 years. On the other hand, more knowledge-dependent performance measures, such as the number of correctly solved problems, remained comparably stable. According to subjective ratings, the middle-aged group experienced the game as less fluent and automatic compared to the younger and older groups. Additionally, the elderly group of participants reported fewer negative attitudes towards technology than both younger groups. We conclude that, despite performance differences with respect to processing speed, subjective gaming experience stayed at an overall high positive level. This further encourages the use of games for learning across ages.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In millimeter-wave (mmWave) networks, where faster signal attenuation is compensated by the use of highly directional antennas, the effects of high mobility may seriously harm the link quality and, hence, the overall system performance. In this paper, we study the channel access in unlicensed mmWave networks with mobile clients, with particular emphasis on initial beamforming training and beam refinement protocol as per IEEE 802.11ad/ay standard. We explicitly model beamforming procedures and corresponding overhead for directional mmWave antennas and provide a method for maximizing the average data rate over the variable length of the 802.11ad/ay beacon interval in different mobility scenarios. We illustrate the impact of the client speed and mobility patterns by examples of three variations of the discrete random walk mobility model.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
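A toy model of the trade-off being optimized: a longer beacon interval amortizes the beamforming-training overhead, but a moving client may drift out of the trained beam before the next training. All numbers and the misalignment model below are illustrative assumptions.

```python
# Toy optimization of the beacon interval length T under training overhead
# T_train and a speed-dependent time `beam_time` that the client stays aligned.
T_train = 0.5e-3                 # beamforming training time per interval, s
rate = 4.6e9                     # peak PHY rate when aligned, bit/s
beam_time = 20e-3                # time the client stays inside the beam, s

def avg_rate(T):
    overhead = T_train / T
    aligned = min(1.0, beam_time / T)      # fraction of the interval spent aligned
    return rate * max(0.0, 1.0 - overhead) * aligned

candidates = [t * 1e-3 for t in (2, 5, 10, 20, 50, 100)]
for t in candidates:
    print(f"T = {t * 1e3:5.1f} ms -> {avg_rate(t) / 1e9:5.2f} Gbit/s")
print("best beacon interval:", max(candidates, key=avg_rate) * 1e3, "ms")
```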
Secure cloud storage is considered one of the most important problems that both businesses and end-users take into account before moving their private data to the cloud. Lately, we have seen some interesting approaches that are based either on the promising concept of Symmetric Searchable Encryption (SSE) or on the well-studied field of Attribute-Based Encryption (ABE). Our construction, MicroSCOPE, combines both ABE and SSE to utilize the advantages of each technique. Finally, we enhance our construction with an access control mechanism by utilizing the functionality provided by SGX.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Despite the immense success of deep neural networks, their applicability is limited because they can be fooled by adversarial examples, which are generated by adding visually imperceptible and structured perturbations to the original image. Semantic segmentation is required in several visual recognition tasks, but unlike image classification, only a few studies have addressed attacks on semantic segmentation networks. The existing semantic segmentation adversarial attacks employ different gradient-based loss functions which are defined using only the last layer of the network for gradient backpropagation. However, some components of semantic segmentation networks (such as multiscale analysis) implicitly mitigate several adversarial attacks, due to which the existing attacks perform poorly. This motivates the attack introduced in this paper, MLAttack, i.e., Multiple Layers Attack. It carefully selects several layers and uses them to define a loss function for a gradient-based adversarial attack on semantic segmentation architectures. Experiments conducted on a publicly available dataset using state-of-the-art segmentation network architectures demonstrate that MLAttack performs better than existing state-of-the-art semantic segmentation attacks.
EXT="Gupta, Puneet"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
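A sketch of the multiple-layer idea: features collected from several selected layers via forward hooks define the attack loss, followed by a single FGSM-style ascent step. The tiny network, layer choice and epsilon are illustrative assumptions, not the paper's exact attack or architectures.

```python
# Multi-layer gradient attack sketch: push intermediate features away from
# their clean values by one signed-gradient step on the input.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 5, 1),                      # 5 "semantic classes" per pixel
).eval()

selected = [1, 3]                            # layers whose features enter the loss
feats = {}
for i in selected:
    net[i].register_forward_hook(lambda m, inp, out, i=i: feats.__setitem__(i, out))

x = torch.rand(1, 3, 32, 32)                 # stand-in input image
with torch.no_grad():
    net(x)
clean = {i: f.detach() for i, f in feats.items()}

x_adv = x.clone().requires_grad_(True)
net(x_adv)
loss = sum(nn.functional.mse_loss(feats[i], clean[i]) for i in selected)
loss.backward()
eps = 8 / 255
x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()  # ascend the loss
```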
Latency is an important performance metric for mobile applications. To reduce latency, it was recently proposed to replace the standard centralized architecture of mobile applications with mobile edge computing (MEC), an approach that allows users' data to be processed closer to their location. Motivated by disaster response scenarios, in this paper we investigate the capabilities of MEC for the forwarding of first aid requests, as an illustrative example of P2P service discovery in an emergency situation. We propose an analytical model of the system and carry out a performance evaluation using a system-level simulator. Our results show that the developed solution considerably reduces the request processing time. The proposed solution can be used not only for first aid but also for general purposes, e.g., searching for various service providers in a certain location.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper studies near-lossless JPEG image compression. A method for estimating the masking ability of image regions (the maximal level of distortions invisible to the human visual system) using the non-predictable energy of image regions is described. A novel method of zeroing quantized DCT coefficients of JPEG images to increase their compression ratio without introducing visible distortions is proposed. A numerical analysis of the effectiveness of the proposed near-lossless compression method using 300 noise-free test images of the TAMPERE17 database is carried out. It is shown that the proposed method increases the compression ratio of JPEG images without visible distortions by about 1.35 times on average. Additionally, the proposed method decreases the variability of compression ratio values across different images. It is shown that the proposed method increases the minimal compression ratio for highly textured JPEG images from 1.1…1.5 times to 2 times. The experiments demonstrate once again that the traditional PSNR metric does not correspond to human perception for this task.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
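The core operation can be sketched as blockwise DCT, quantization and zeroing of low-magnitude coefficients. The block size, quantization step and threshold below are illustrative; in the paper the allowable distortion is set per region from its masking ability.

```python
# Sketch: zeroing small quantized DCT coefficients, block by block.
import numpy as np
from scipy.fft import dctn, idctn

def zero_small_coeffs(img, block=8, q=16, thr=1):
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(img[i:i + block, j:j + block], norm="ortho")
            cq = np.round(c / q)
            cq[np.abs(cq) <= thr] = 0        # drop near-zero quantized coefficients
            out[i:i + block, j:j + block] = idctn(cq * q, norm="ortho")
    return out

img = np.random.default_rng(0).random((64, 64)) * 255.0   # stand-in image
rec = zero_small_coeffs(img)
print("MSE after coefficient zeroing:", np.mean((img - rec) ** 2).round(2))
```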
We propose a novel approach for modeling semantic contextual relationships in videos. This graph-based model enables the learning and propagation of higher-level spatial-temporal contexts to facilitate the semantic labeling of local regions. We introduce an exemplar-based nonparametric view of contextual cues, where the inherent relationships implied by object hypotheses are encoded on a similarity graph of regions. Contextual relationships learning and propagation are performed to estimate the pairwise contexts between all pairs of unlabeled local regions. Our algorithm integrates the learned contexts into a Conditional Random Field (CRF) in the form of pairwise potentials and infers the per-region semantic labels. We evaluate our approach on the challenging YouTube-Objects dataset which shows that the proposed contextual relationship model outperforms the state-of-the-art methods.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The evolution of the Web browser has been organic, with new features introduced on a pragmatic basis rather than following a clear rational design. This evolution has resulted in a cornucopia of overlapping features and redundant choices for developing Web applications. These choices include multiple architecture and rendering models, different communication primitives and protocols, and a variety of local storage mechanisms. In this position paper we examine the underlying reasons for this historic evolution. We argue that without a sound engineering approach and some fundamental rethinking there will be a growing risk that the Web may no longer be a viable, open software platform in the long run.
EXT="Mikkonen, Tommi"
jufoid=62555
EXT="Taivalsaari, Antero"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Functional safety is involved in many machines, processes, and systems to mitigate risks by reducing the likelihood of the occurrence or the severity of the consequences of a hazard. The development of functional safety systems realising safety functions is typically directed by laws and standards, which set requirements on the development process and design of the system. In addition, functional safety systems often operate in a context in which other control entities also affect the operation of the system under control. In this article, nine patterns concerning the design and development of functional safety systems, in terms of their architecture and co-operation with other controlling entities, are presented. The purpose of the patterns is to support designers of functional safety systems in coping with the mentioned aspects.
Research output: Chapter in Book/Report/Conference proceeding › Chapter › Scientific › peer-review
Distributed control systems comprise networked computing units that monitor and control physical processes in feedback loops. The reliability of these systems is affected by dynamic and complex computing environments where connections and system configurations may change rapidly. Diverse redundancy can be effective in improving system dependability, but it is susceptible to common mode failures, and the development costs for design diversity are often seen as prohibitive. In this paper we present three patterns that can be used to provide a light-weight form of fault tolerance, improving system dependability and resilience by providing the ability to cope with unexpected events and faults. These patterns are presented together with a pattern language that shows how they relate to other fault tolerance patterns.
Research output: Chapter in Book/Report/Conference proceeding › Chapter › Scientific › peer-review
Due to the growing throughput demands dictated by innovative media applications (e.g., 360° video streaming, augmented and virtual reality), millimeter-wave (mmWave) wireless access is considered a promising technology enabler for the emerging mobile networks. One of the crucial use cases for such systems is indoor public protection and disaster relief (PPDR) missions, which may greatly benefit from higher mmWave bandwidths. In this paper, we assess the performance of on-demand mmWave mesh topologies in indoor environments. The evaluation was conducted by utilizing our system-level simulation framework based on a realistic floor layout under dynamic blockage conditions, a 3GPP propagation model, mobile nodes, and multi-connectivity operation. Our numerical results revealed that the use of multi-connectivity capabilities in indoor deployments generally improves connectivity performance, whereas the associated per-node throughput growth is marginal. The latter is due to the blockage-rich environment, which is typical for indoor layouts and distinguishes them from outdoor cases. Furthermore, the number of simultaneously supported links at each node that is required to enhance the system performance is greater than two, thus imposing considerable control overheads.
INT=elen,"Saqib, Md Nazmus"
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The problem of predicting a novel view of the scene using an arbitrary number of observations is a challenging problem for computers as well as for humans. This paper introduces the Generative Adversarial Query Network (GAQN), a general learning framework for novel view synthesis that combines Generative Query Network (GQN) and Generative Adversarial Networks (GANs). The conventional GQN encodes input views into a latent representation that is used to generate a new view through a recurrent variational decoder. The proposed GAQN builds on this work by adding two novel aspects: First, we extend the current GQN architecture with an adversarial loss function for improving the visual quality and convergence speed. Second, we introduce a feature-matching loss function for stabilizing the training procedure. The experiments demonstrate that GAQN is able to produce high-quality results and faster convergence compared to the conventional approach.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In the wake of recent hardware developments, augmented, mixed, and virtual reality applications – grouped under an umbrella term of eXtended reality (XR) – are believed to have a transformative effect on customer experience. Among many XR use cases, of particular interest are crowded commuting scenarios, in which passengers are involved in in-bus/in-train entertainment, e.g., high-quality video or 3D hologram streaming and AR/VR gaming. In the case of a city bus, the number of commuting users during the busy hours may exceed forty, and, hence, could pose far higher traffic demands than the existing microwave technologies can support. Consequently, the carrier candidate for XR hardware should be sought in the millimeter-wave (mmWave) spectrum; however, the use of mmWave cellular frequencies may appear impractical due to the severe attenuation or blockage by the modern metal coating of the glass. As a result, intra-vehicle deployment of unlicensed mmWave access points becomes the most promising solution for bandwidth-hungry XR devices. In this paper, we present the calibrated results of shooting-and-bouncing ray simulation at 60 GHz for the bus interior. We analyze the delay and angular spread, estimate the parameters of the Saleh-Valenzuela channel model, and draw important practical conclusions regarding the intra-vehicle propagation at 60 GHz.
INT=elen,"Ponomarenko-Timofeev, Aleksei"
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
We study rewritability of monadic disjunctive Datalog programs, (the complements of) MMSNP sentences, and ontology-mediated queries (OMQs) based on expressive description logics of the ALC family and on conjunctive queries. We show that rewritability into FO and into monadic Datalog (MDLog) are decidable, and that rewritability into Datalog is decidable when the original query satisfies a certain condition related to equality. We establish 2NExpTime-completeness for all studied problems except rewritability into MDLog for which there remains a gap between 2NExpTime and 3ExpTime. We also analyze the shape of rewritings, which in the case of MMSNP correspond to obstructions, and give a new construction of canonical Datalog programs that is more elementary than existing ones and also applies to non-Boolean queries.
Research output: Contribution to journal › Article › Scientific › peer-review
This paper presents a novel method which simultaneously learns the number of filters and the network features repeatedly over multiple epochs. We propose a novel pruning loss that explicitly enforces the optimizer to focus on promising candidate filters while suppressing the contributions of less relevant ones. In addition, we propose to enforce diversity between filters; this diversity-based regularization term improves the trade-off between model size and accuracy. It turns out that the interplay between architecture and feature optimizations improves the final compressed models, and the proposed method compares favorably to existing methods, in terms of both model sizes and accuracies, for a wide range of applications including image classification, image compression and audio classification.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
It is argued that we are witnessing a paradigmatic shift toward constructionist gaming, in which students design games instead of just consuming them. However, only a limited number of studies have explored the teaching of educational Game Design (GD). This paper reports a case study in which a learning-by-designing-games strategy was used to teach different viewpoints of educational GD. In order to support the design activities, we proposed the CIMDELA (Content, Instruction, Mechanics, Dynamics, Engagement, Learning Analytics) framework, which aims to align game design and instructional design aspects. Thirty undergraduate students participated in the gamified workshop and designed math games in teams. The activities were divided into eight rounds consisting of design decisions and game testing. The workshop activities were observed and the designed games were saved. Most of the students were engaged in the design activities, and in particular the approach that allowed students to test the evolving game after each round motivated them. Observations revealed that some of the students had an isolated design mindset in the beginning and had problems considering design decisions from both game design and instructional perspectives, but the team-based design activities often led to fruitful debate with co-designers and helped some students to expand their mindsets.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Cryptographic libraries often feature multiple implementations of primitives to meet both the security needs of handling private information and the performance requirements of modern services when the handled information is public. OpenSSL, the de-facto standard free and open source cryptographic library, includes mechanisms to differentiate the confidential data and its control flow, including run-time flags, designed for hardening against timing side-channels, but repeatedly accidentally mishandled in the past. To analyze and prevent these accidents, we introduce Triggerflow, a tool for tracking execution paths that, assisted by source annotations, dynamically analyzes the binary through the debugger. We validate this approach with case studies demonstrating how adopting our method in the development pipeline would have promptly detected such accidents. We further showcase the value of the tooling by presenting two novel discoveries facilitated by Triggerflow: one leak and one defect.
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
It is important for the inclusiveness of society that the youth actively participate in its development. Even though the means of digital participation have advanced in the past decade, there is still a lack of understanding of the digital participation of the youth. In this paper, we present a study on how youth aged 16–25 years perceive social and societal participation and, more specifically, how youth currently participate non-digitally and digitally. We conducted a mixed-method study at a large gaming event in Finland using a questionnaire (N = 277) and face-to-face interviews (N = 25). The findings reveal that the gaming youth consider digital participation to include discussions in different social media services or web discussion forums. Creating digital content (e.g. videos) and answering surveys were also emphasized. Perceived advantages of participating digitally include freedom regarding location and time, ease and efficiency in sharing information, and inexpensiveness. Central disadvantages include lack of commitment, anonymity, misinformation and cheating. We also found that frequently playing gamers are more likely to participate online in social activities than those who play occasionally. Youth who reported playing strategy games were more active in civic participation than those who do not play strategy games. We discuss the implications of our findings for the design of tools for digital participation.
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
A password manager stores and handles users' passwords from different services. This relieves the users from constantly remembering and recalling many different login credentials. However, because of the poor usability and limited user experience of password managers, users find it difficult to perform basic actions, such as a safe login. Unavoidably, the password manager holds the login credentials of many online services; as a result, it becomes a desirable target for online attacks. This results in compromised security, which users often consider an inevitable condition that must be accepted. Many studies have analysed the usability and security of various password managers. Their research findings, though important, are rather incomprehensible to designers of password managers, because they are limited to particular properties or specific applications and are often contradictory. Hence, we focus on investigating properties and features that can elevate the usability, security, and trustworthiness of password managers, aiming to provide practical, simple, and useful guidelines for building a usable password manager. We performed a systematic literature review, in which we selected thirty-two articles with coherent outcomes associated with usability and security. From these outcomes, we deduce and present meaningful suggestions for realising a usable, secure and trustworthy password manager.
Research output: Contribution to journal › Review Article › Scientific › peer-review
The increasing number of cores in Systems on Chip (SoC) has introduced challenges in software parallelization. As an answer to this, the dataflow programming model offers a concurrent and reusability-promoting approach for describing applications. In this work, a runtime for executing Dataflow Process Networks (DPN) on multicore platforms is proposed. The main difference between this work and existing methods is that the operating system is allowed to perform central processing unit (CPU) load balancing freely, instead of limiting thread migration between processing cores through CPU affinity. The proposed runtime is benchmarked on desktop and server multicore platforms using five different applications from the video coding and telecommunication domains. The results show that the proposed method offers significant improvements over the state of the art in terms of performance and reliability.
Research output: Contribution to journal › Article › Scientific › peer-review
The search for so-called complete graph invariants has triggered the study of graph measures with high discrimination power. In a series of papers, we have already investigated highly discriminating measures for distinguishing graphs (networks) based on their topology. In this paper, we propose an approach in which the graph measures are based on the roots of random graph polynomials. The polynomial coefficients are defined by utilizing information functionals which capture structural information of the underlying networks. Our numerical results, obtained by employing exhaustively generated graphs, reveal that the new approach outperforms earlier results in the literature.
EXT="Tripathi, Shailesh"
Research output: Contribution to journal › Article › Scientific › peer-review
A total domatic k-partition of a graph is a partition of its vertex set into k subsets such that each subset intersects the open neighborhood of each vertex. The maximum k for which a total domatic k-partition exists is known as the total domatic number of a graph G, denoted by d_t(G). We extend considerably the known hardness results by showing it is NP-complete to decide whether d_t(G) ≥ 3, where G is a bipartite planar graph of bounded maximum degree. Similarly, for every k ≥ 3, it is NP-complete to decide whether d_t(G) ≥ k, where G is split or k-regular. In particular, these results complement recent combinatorial results regarding d_t(G) on some of these graph classes by showing that the known results are, in a sense, best possible. Finally, for general n-vertex graphs, we show the problem is solvable in 2^n n^{O(1)} time, and derive even faster algorithms for special graph classes.
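For intuition about the quantity d_t(G) itself, the following naive Python checker enumerates k-class assignments and tests whether every class is a total dominating set; it is exponential in a different way and is emphatically not the 2^n n^{O(1)} algorithm of the paper, just an executable restatement of the definition.

```python
from itertools import product

def total_domatic_number(adj):
    # adj: adjacency list {v: set_of_neighbours}; naive exponential check
    vertices = list(adj)

    def is_total_dominating(group):
        # every vertex of the graph needs a neighbour inside the group
        return all(adj[v] & group for v in vertices)

    best = 0
    for k in range(1, len(vertices) + 1):
        found = False
        for assignment in product(range(k), repeat=len(vertices)):
            groups = [set() for _ in range(k)]
            for v, idx in zip(vertices, assignment):
                groups[idx].add(v)
            if all(is_total_dominating(g) for g in groups):
                found = True
                break
        if not found:
            break
        best = k
    return best

# the 4-cycle has total domatic number 2
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(total_domatic_number(c4))  # -> 2
```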
Research output: Contribution to journal › Article › Scientific › peer-review
The proceedings contain 64 papers. The special focus in this conference is on Next Generation Teletraffic and Wired/Wireless Advanced Networks and Systems. The topics include: Measuring a LoRa Network: Performance, Possibilities and Limitations; Testbed for Identifying IoT-Devices Based on Digital Object Architecture; The Application of Graph Theory and Adjacency Lists to Create Parallel Queries to Relational Databases; On the Necessary Accuracy of Representation of Optimal Signals; On LDPC Code Based Massive Random-Access Scheme for the Gaussian Multiple Access Channel; Application of Optimal Finite-Length Signals for Overcoming “Nyquist Limit”; Influence of Amplitude Limitation for Random Sequence of Single-Frequency Optimal FTN Signals on the Occupied Frequency Bandwidth and BER Performance; Spectral Efficiency Comparison Between FTN Signaling and Optimal PR Signaling for Low Complexity Detection Algorithm; A Method of Simultaneous Signals Spectrum Analysis for Instantaneous Frequency Measurement Receiver; Context-Based Cyclist Intelligent Support: An Approach to e-Bike Control Based on Smartphone Sensors; Analytical Models for Schedule-Based License Assisted Access (LAA) LTE Systems; Kinetic Approach to Elasticity Analysis of D2D Links Quality Indicators Under Non-stationary Random Walk Mobility Model; The Phenomenon of Secondary Flow Explosion in Retrial Priority Queueing System with Randomized Push-Out Mechanism; Comparison of LBOC and RBOC Mechanisms for SIP Server Overload Control; Performance Analysis of Cognitive Femtocell Network with Ambient RF Energy Harvesting; Comparative Analysis of the Mechanisms for Energy Efficiency Improving in Cloud Computing Systems; Blue Team Communication and Reporting for Enhancing Situational Awareness from White Team Perspective in Cyber Security Exercises; Signing Documents by Hand: Model for Multi-Factor Authentication.
EXT="Balandin, Sergey"
Research output: Book/Report › Anthology › Scientific › peer-review
Utilization of Unmanned Aerial Vehicles (UAVs), also known as “drones”, has great potential for many emerging applications, such as delivering connectivity on demand, providing services for public safety, or recovering after damage to the communication infrastructure. Notably, nearly any application of drones requires a stable link to the ground control center, yet this functionality is commonly added at the last moment of the design, necessitating compact antenna designs. In this work, we propose a novel electrically small antenna element based on the 3D folded loop topology, which can easily be located inside the UAV airframe while still delivering good isolation from the drone's own noise sources. The complete manufacturing technique along with the corresponding simulations and measurements is presented. Measurements and evaluations show that the proposed antenna design is an option for achieving genuinely isotropic radiation in a small size without sacrificing efficiency.
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this work, we briefly outline the core 5G air interface improvements introduced by the latest New Radio (NR) specifications, and elaborate on the unique features of initial access in 5G NR with a particular emphasis on the millimeter-wave (mmWave) frequency range. The highly directional nature of 5G mmWave cellular systems poses a variety of fundamental differences and research problem formulations, and a holistic understanding of the key system design principles behind 5G NR is essential. Here, we condense the relevant information collected from a wide range of 5G NR standardization documents (based on 3GPP Release 15) to distill the essentials of directional access in 5G mmWave cellular, which becomes the foundation for any corresponding system-level analysis.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The lack of realistic and open benchmarking datasets for pedestrian visual-inertial odometry has made it hard to pinpoint differences between published methods. Existing datasets either lack a full six-degree-of-freedom ground truth or are limited to small spaces with optical tracking systems. We take advantage of advances in pure inertial navigation and develop a set of versatile and challenging real-world computer vision benchmark sets for visual-inertial odometry. For this purpose, we have built a test rig equipped with an iPhone, a Google Pixel Android phone, and a Google Tango device. We provide a wide range of raw sensor data that is accessible on almost any modern-day smartphone, together with a high-quality ground-truth track. We also compare the resulting visual-inertial tracks from Google Tango, ARCore, and Apple ARKit with two recent methods published in academic forums. The datasets cover both indoor and outdoor cases, with stairs, escalators, elevators, office environments, a shopping mall, and a metro station.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
One of the most well-known standards for radio frequency identification (RFID), ISO 18000-6C, collects the requirements for RFID readers and tags and regulates the respective communication protocols. In particular, the standard introduces the so-called Q-algorithm for resolving conflicts in the channel (which occur when several RFID tags respond simultaneously). As of today, a vast amount of existing literature addresses various modifications of the Q-algorithm; however, none of them is known to significantly reduce the average identification time (i.e., the time to identify all proximate tags). In this work, we derive a lower bound for the average identification time in an RFID system. Furthermore, we demonstrate that in the case of an error-free channel, the performance of the legacy Q-algorithm is reasonably close to the proposed lower bound; however, in an error-prone environment, this gap may substantially increase, thereby indicating the need for new identification algorithms.
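To make the setting concrete, here is a toy slot-level simulation of the legacy Q-algorithm over an error-free channel; the step size, initial Q, and the simplified slot model are assumptions for illustration, not the standard's full state machine.

```python
import random

def q_algorithm_slots(n_tags, c=0.3, q_init=4.0):
    # Toy slot-count model: in each slot, every unidentified tag transmits
    # with probability 2**-Q; one responder = success, several = collision.
    qfp, slots, remaining = q_init, 0, n_tags
    while remaining > 0:
        q = max(0, min(15, round(qfp)))
        responders = sum(1 for _ in range(remaining)
                         if random.randrange(2 ** q) == 0)
        slots += 1
        if responders == 1:
            remaining -= 1              # tag identified
        elif responders == 0:
            qfp = max(0.0, qfp - c)     # idle slot: shrink the frame
        else:
            qfp = min(15.0, qfp + c)    # collision: grow the frame
    return slots

trials = [q_algorithm_slots(100) for _ in range(20)]
print(sum(trials) / len(trials), "slots on average to identify 100 tags")
```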
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this paper, we consider incompletely defined discrete functions, i.e., Boolean and multiple-valued functions f : S → {0, 1, …, q−1}, where S ⊆ {0, 1, …, q−1}^n, i.e., the function value is specified only on a certain subset S of the domain of the corresponding completely defined function. We assume the function to be sparse, i.e., |S| is small relative to the cardinality of the domain. We show that by embedding the domain {0, 1, …, q−1}^n, where n is the number of variables and q is a prime power, in a suitable ring structure, the multiplicative structure of the ring can be used to construct a linear function {0, 1, …, q−1}^n → {0, 1, …, q−1}^m that is injective on S, provided that m > 2 log_q |S| + log_q (n−1). In this way we find a linear transform that reduces the number of variables from n to m, and can be used, e.g., in the implementation of an incompletely defined discrete function by linear decomposition.
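The variable-reduction idea can be illustrated for q = 2 by randomly sampling binary matrices and checking injectivity on S; the bound m > 2 log_q |S| + log_q(n−1) guarantees that a suitable linear map exists, and random search typically finds one quickly. The sketch below is purely illustrative and does not use the ring structure of the paper.

```python
import random

def random_linear_reduction(S, n, m, trials=2000):
    # Search for an m x n binary matrix A whose GF(2) map x -> Ax
    # is injective on the support set S (tuples in {0,1}^n).
    def apply(A, x):
        return tuple(sum(a * xi for a, xi in zip(row, x)) % 2 for row in A)

    for _ in range(trials):
        A = [[random.randint(0, 1) for _ in range(n)] for _ in range(m)]
        if len({apply(A, x) for x in S}) == len(S):
            return A
    return None

# 8 sparse points in {0,1}^20; the bound suggests m = 11 suffices,
# since 2*log2(8) + log2(19) is about 10.25
S = set()
while len(S) < 8:
    S.add(tuple(random.randint(0, 1) for _ in range(20)))
A = random_linear_reduction(S, n=20, m=11)
print("reduction found" if A else "no reduction found in the trial budget")
```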
EXT="Stanković, Radomir"
Research output: Contribution to journal › Article › Scientific › peer-review
The scarcity of resources available for commercial wireless access systems below 6 GHz, coupled with constantly increasing traffic demands from mobile users, forces network operators to seek additional spectrum. In addition to moving higher in frequency and occupying the millimeter-wave band with the 3GPP New Radio access technology, the set of solutions also includes implementing commercial LTE systems in unlicensed bands, including 2.4 GHz and 5.1 GHz, that are currently occupied by Wi-Fi. This technology, known as License Assisted Access (LAA), has recently received considerable attention within the 3GPP community. One solution for providing a fair division of air interface resources between competing technologies is to use schedule-based access, where the LAA access point is in full control of the shared medium and may dynamically schedule allocations to LTE and Wi-Fi traffic. The fine-tuning of LAA technology requires a careful understanding of the various trade-offs and dependencies involved in Wi-Fi and LTE coexistence. In this paper, using the tools of queuing theory, we formulate and solve several analytical models targeting different implementation strategies of schedule-based LAA systems and traffic types of end users. We derive relevant performance characteristics, including the session drop probability, the probability that a session accepted to the system is dropped before its service completion, and the average resource utilization of the system.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this paper, we provide a shooting-and-bouncing-ray (SBR) based simulation study of mmWave radio propagation at 60 GHz in a typical conference room. The room geometry, material types, and other simulation settings are verified against the results of the measurement campaign at 83 GHz in [15]. Here, we extend the evaluation scenario by randomly scattering several human-sized blockers as well as studying the effects of human body blockage models. We demonstrate that multiple knife-edge diffraction (KED) models are capable of providing meaningful results while keeping the simulation duration relatively short. Moreover, we address another important scenario, where transmitters and receivers are located at the same height and move along a predefined trajectory, corresponding, for example, to device-to-device interactions or inter-user interference.
INT=elt, "Semkin, Vasilii"
INT=elt, "Ponomarenko-Timofeev, Aleksei"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
A path in an edge-colored graph G is rainbow if no two edges of it are colored the same. The graph G is rainbow-connected if there is a rainbow path between every pair of vertices. If there is a rainbow shortest path between every pair of vertices, the graph G is strongly rainbow-connected. The minimum number of colors needed to make G rainbow-connected is known as the rainbow connection number of G, and is denoted by rc(G). Similarly, the minimum number of colors needed to make G strongly rainbow-connected is known as the strong rainbow connection number of G, and is denoted by src(G). We prove that for every k ≥ 3, deciding whether src(G) ≤ k is NP-complete for split graphs, which form a subclass of chordal graphs. Furthermore, there exists no polynomial-time algorithm for approximating the strong rainbow connection number of an n-vertex split graph within a factor of n^{1/2−ε} for any ε > 0 unless P = NP. We then turn our attention to block graphs, which also form a subclass of chordal graphs. We determine the strong rainbow connection number of block graphs, and show that it can be computed in linear time. Finally, we provide a polynomial-time characterization of bridgeless block graphs with rainbow connection number at most 4.
Research output: Contribution to journal › Article › Scientific › peer-review
360-degree video has attracted increasing attention in recent years. However, transmitting such high-resolution video within limited bandwidth is a highly challenging task. In this paper, we first propose to unequally compress the cubemaps in each frame of the 360-degree video to reduce the total bitrate of the transmitted data. Specifically, a Group of Pictures (GOP) is used as a unit to alternately transmit different versions of the video, each version consisting of 3 high-quality cubemaps and 3 low-quality cubemaps. Then, a convolutional neural network (CNN) is introduced to enhance the low-quality cubemaps using the high-quality cubemaps by exploiting inter-frame similarities. The experiments show that a single CNN model can be used for various videos. The experimental results also show that the proposed method achieves excellent quality enhancement compared with the benchmark in terms of PSNR, especially for videos with slow motion.
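A minimal sketch of the alternating transmission schedule could look as follows; the concrete assignment of the three high-quality faces is an assumption, since the abstract does not fix it.

```python
# Complementary high/low quality assignments for the six cube faces;
# consecutive GOPs alternate between the two versions.
VERSION_A = {"front": "H", "left": "H", "top": "H",
             "back": "L", "right": "L", "bottom": "L"}
VERSION_B = {face: ("L" if q == "H" else "H") for face, q in VERSION_A.items()}

def gop_quality_plan(gop_index):
    return VERSION_A if gop_index % 2 == 0 else VERSION_B

for g in range(4):
    print(g, gop_quality_plan(g))
```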
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
jufoid=62555
Research output: Book/Report › Anthology › Scientific › peer-review
Differential operators are usually used to determine the rate of change and the direction of change of a signal modeled by a function in some appropriately selected function space. Gibbs derivatives are introduced as operators permitting differentiation of piecewise constant functions. Being initially intended for applications in Walsh dyadic analysis, they are defined as operators having Walsh functions as eigenfunctions. This feature was used in different generalizations and extensions of the concept, first defined for functions on finite dyadic groups. In this paper, we provide a brief overview of the evolution of this concept into a particular class of differential operators for functions on various groups.
EXT="Stankovic, Radomir S."
jufoid=79748
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Due to the networked nature of modern industrial business, repeated information exchange activities are necessary. Unfortunately, with current communication media, information exchange is both laborious and expensive, which causes errors and delays. To increase the efficiency of communication, this study introduces an architecture for exchanging information in a digitally processable manner in industrial ecosystems. The architecture builds upon commonly agreed business practices and data formats, and it is enabled by an open consortium and information mediators. Following the architecture, a functional prototype has been implemented for a real industrial scenario. This study focuses on the technical information of equipment, but the architecture concept can also be applied in financing and logistics. Therefore, the concept has the potential to completely reform industrial communication.
Research output: Contribution to journal › Article › Scientific › peer-review
As IoT devices become more powerful, they can also become full participants in Internet architectures. For example, they can consume and provide RESTful services. However, typical network infrastructures do not support the architecture and middleware solutions used in the cloud-based Internet. We show how systems designed with a RESTful architecture can be implemented using an IoT-specific technology called MQTT. Our example case is an application development and deployment system that can be used for remote management of IoT devices.
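As an illustration of one possible REST-to-MQTT mapping (written against the paho-mqtt 1.x client API; the topic layout and broker address are assumptions, not the convention used in the paper):

```python
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.org"   # hypothetical broker address

def on_message(client, userdata, msg):
    # assumed topic layout: rest/<verb>/<resource-path>
    _, verb, resource = msg.topic.split("/", 2)
    print("handle", verb.upper(), "on", resource, "payload:", msg.payload)

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe("rest/#")                      # act as the resource server
client.publish("rest/put/devices/42/config",    # act as a client doing a PUT
               json.dumps({"interval_s": 30}))
client.loop_forever()
```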
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In device-to-device communications, link quality indicators, such as the signal-to-interference ratio (SIR), are heavily affected by the mobility of users. Conventionally, the mobility model is assumed to be stationary. In this paper, we use kinetic theory to analyze the evolution of the probability distribution function parameters of the SIR in a D2D environment under non-stationary mobility of users. In particular, we concentrate on the elasticity of the SIR moments with respect to the parameters of the Fokker-Planck equation. The elasticity matrices for the average SIR value, the SIR variance, and the time periods when the SIR value is higher than a certain threshold are constructed numerically. Our numerical results demonstrate that the main kinetic parameter affecting the SIR behavior is the diffusion coefficient; the influence of the drift is approximately ten times smaller.
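For reference, a hedged sketch of the objects involved: a one-dimensional Fokker-Planck equation for the user position density with drift μ and diffusion D, and the standard elasticity of an SIR moment M with respect to a kinetic parameter θ (the paper's exact model may differ in form and dimension):

```latex
\frac{\partial \rho(x,t)}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[\mu(x,t)\,\rho(x,t)\bigr]
  + \frac{\partial^{2}}{\partial x^{2}}\bigl[D(x,t)\,\rho(x,t)\bigr],
\qquad
E_{\theta}[M] = \frac{\theta}{M}\,\frac{\partial M}{\partial \theta}.
```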
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Most consumers own more than one device for accessing content from the Web. In this world, Liquid Software allows users to switch devices and effortlessly continue their tasks on the new device. This paper addresses the needs and methods for transferring a user session and user information from one device to another. The identity should follow the moving application seamlessly instead of requiring repeated entering of credentials on each device. Such a solution would make services that require authentication work in a liquid fashion. The paper describes our ongoing work on investigating how liquid transfer of user identity can be added to various ways of handling user authentication.
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In the field of cryptography engineering, implementation-based attacks are a major concern due to their proven feasibility. Fault injection is one attack vector, nowadays a major research line. In this paper, we present how a memory-tampering-based fault attack can be used to severely limit the output space of implementations of binary GCD based modular inversion algorithms. We frame the proposed attack in the context of ECDSA, showing how this approach allows recovering the private key from only one signature, independent of the key size. We analyze two memory tampering proposals, illustrating how this technique can be adapted to different implementations. Besides its application to ECDSA, it can be extended to other cryptographic schemes and countermeasures where binary GCD based modular inversion algorithms are employed. In addition, we describe how memory-tampering-based fault attacks can be used to mount a previously proposed fault attack in scenarios that were initially discarded, showing the importance of including memory tampering attacks in the frameworks for analyzing fault attacks and their countermeasures.
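For context, a textbook right-shift binary extended-GCD inversion is sketched below; it is a simplified stand-in for the library implementations discussed in the paper, with a comment on where memory tampering of the working variables would constrain the reachable outputs.

```python
def binary_mod_inverse(a, p):
    # Compute a^-1 mod an odd prime p using the right-shift binary
    # extended GCD. A fault that tampers with u, v, x1 or x2 in memory
    # mid-loop can collapse the set of reachable outputs, which is the
    # kind of effect the paper exploits against ECDSA.
    u, v = a % p, p
    x1, x2 = 1, 0
    while u != 1 and v != 1:
        while u % 2 == 0:
            u //= 2
            x1 = x1 // 2 if x1 % 2 == 0 else (x1 + p) // 2
        while v % 2 == 0:
            v //= 2
            x2 = x2 // 2 if x2 % 2 == 0 else (x2 + p) // 2
        if u >= v:
            u, x1 = u - v, x1 - x2
        else:
            v, x2 = v - u, x2 - x1
    return x1 % p if u == 1 else x2 % p

assert binary_mod_inverse(7, 101) * 7 % 101 == 1
```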
Research output: Contribution to journal › Article › Scientific › peer-review
This paper presents a model-based design method and a corresponding new software tool, the HTGS Model-Based Engine (HMBE), for designing and implementing dataflow-based signal processing applications on multi-core architectures. HMBE provides complementary capabilities to HTGS (Hybrid Task Graph Scheduler), a recently introduced software tool for implementing scalable workflows for high-performance computing applications on compute nodes with high core counts and multiple GPUs. HMBE integrates model-based design approaches, founded on dataflow principles, with the advanced design optimization techniques provided by HTGS. This integration contributes to (a) making the application of HTGS more systematic and less time-consuming, (b) incorporating additional dataflow-based optimization capabilities alongside HTGS optimizations, and (c) automating significant parts of the HTGS-based design process using a principled approach. In this paper, we present HMBE with an emphasis on the model-based design approaches and the novel dynamic scheduling techniques developed as part of the tool. We demonstrate the utility of HMBE via two case studies: an image stitching application for large microscopy images and a background subtraction application for multispectral video streams.
Research output: Contribution to journal › Article › Scientific › peer-review
Dataflow is widely used as a model of computation in many application domains, especially domains within the broad area of signal and information processing. The most common uses of dataflow techniques in these domains are in the modeling of application behavior and the design of specialized architectures. In this chapter, we discuss a different use of dataflow that involves its application as a formal model for scheduling applications onto architectures. Scheduling is a critical aspect of dataflow-based system design that impacts key metrics, including latency, throughput, buffer memory requirements, and energy efficiency. Deriving efficient and reliable schedules is an important and challenging problem that must be addressed in dataflow-based design flows. The concepts and methods reviewed in this chapter help to address this problem through model-based representations of schedules. These representations build on the separation of concerns between functional specification and scheduling in dataflow, and provide a useful new class of abstractions for designing dataflow graph schedules, as well as for managing, analyzing, and manipulating schedules within design tools.
jufo=53801
Research output: Chapter in Book/Report/Conference proceeding › Chapter › Scientific › peer-review
Recent developments in live-cell time-lapse microscopy and in signal processing methods for single-cell, single-RNA detection now allow characterizing the in vivo dynamics of RNA production of Escherichia coli promoters at the single-event level. These dynamics are mostly controlled at the promoter region, which can be engineered with single-nucleotide precision. Based on these developments, we propose a new strategy to engineer genes with predefined transcription dynamics (mean and standard deviation of the distribution of RNA numbers in a cell population). For this, we use stochastic modelling followed by genetic engineering to design synthetic promoters whose rate-limiting step kinetics allow achieving a desired RNA production kinetics. We present an example where, from a predefined kinetics, a stochastic model is first designed, from which a promoter is selected based on its rate-limiting step kinetics. Next, we engineer mutant promoters and select the one that best fits the intended distribution of RNA numbers in a cell population. As the modelling strategies and the databases of models, genetic constructs, and information on these constructs' kinetics improve, we expect our strategy to be able to accommodate a wide variety of predefined RNA production kinetics.
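The stochastic modelling step can be illustrated with a minimal Gillespie simulation of a promoter cycling between a closed and an open state, with RNA produced from the open state; all rate constants below are hypothetical placeholders, not fitted values from the paper.

```python
import math
import random

def gillespie_two_step(k_open, k_close, k_rna, t_end):
    # Gillespie SSA for CLOSED <-> OPEN, with RNA produced from OPEN.
    t, state, rna = 0.0, "CLOSED", 0
    while True:
        if state == "CLOSED":
            rates = {"open": k_open}
        else:
            rates = {"close": k_close, "rna": k_rna}
        total = sum(rates.values())
        t += random.expovariate(total)          # time to the next event
        if t > t_end:
            return rna
        r = random.uniform(0, total)            # pick the event
        for event, rate in rates.items():
            r -= rate
            if r <= 0:
                break
        if event == "open":
            state = "OPEN"
        elif event == "close":
            state = "CLOSED"
        else:
            rna += 1

samples = [gillespie_two_step(0.1, 0.05, 0.2, 3600) for _ in range(200)]
mean = sum(samples) / len(samples)
std = math.sqrt(sum((s - mean) ** 2 for s in samples) / (len(samples) - 1))
print(f"RNA copy number: mean = {mean:.1f}, std = {std:.1f}")
```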
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
jufoid=62555
Research output: Book/Report › Anthology › Scientific › peer-review
Numerous quantitative graph measures have been defined and applied in various disciplines. Such measures may be differentiated according to whether they are information-theoretic or non-information-theoretic. In this paper, we examine an important property of Randić entropy, an information-theoretic measure, and study some related graph measures based on random roots. In particular, we investigate the degeneracy of these structural graph measures and discuss numerical results. Finally, we draw some conclusions about the measures' applicability to deterministic and non-deterministic networks.
EXT="Tripathi, Shailesh"
Research output: Contribution to journal › Article › Scientific › peer-review
Nowadays, Internet of Things services requiring network support (e.g., checking social media, sending emails, video conferencing) call for smart and efficient data centers. Hence, data centers become ever more important and must be able to respond to constantly changing service requirements and application demands. However, data centers are classified as one of the largest consumers of energy in the world. Existing topologies such as ScalNet improve data center scalability but lead to enormous energy consumption. In this paper, we present a new energy-efficient algorithm for ScalNet called Green ScalNet. The proposed topology strikes a compromise between maximizing the energy saving and minimizing the average path length. By taking into consideration the importance of the transmitted data and the parameters critical for the receiver (e.g. time, energy), the proposed topology dynamically controls the number of active communication links by turning ports in the network (switch ports and node ports) off and on. Both theoretical analysis and simulation experiments are conducted to evaluate its overall performance in terms of average path length and energy consumption.
EXT="Hamila, Ridha"
EXT="Kiranyaz, Serkan"
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The acceleration of mobile data traffic and the shortage of available spectral resources create new challenges for next-generation (5G) networks. One of the potential solutions is network offloading, which opens the possibility of unlicensed spectrum utilization. Heterogeneous networking between cellular and WLAN systems allows mobile users to adaptively utilize the licensed (LTE) and unlicensed (IEEE 802.11) radio technologies simultaneously. At the same time, softwarized frameworks can be employed not only inside the network controllers but also at the end nodes. To operate with the corresponding policies and interpret them efficiently, a signaling processor has to be developed and equipped with a fast packet parsing mechanism. In this scenario, the reaction time becomes a crucial factor, and this paper provides an overview of the existing parsing libraries (Scapy and dpkt) as well as proposes a flexible parsing tool that is capable of reducing the latency incurred by analyzing packets in a softwarized network.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
A political, economic, socio-cultural, technological, environmental and legal (PESTEL) analysis is a framework or tool used to analyse and monitor the macro-environmental factors that have an impact on an organisation. The results identify threats and weaknesses, which are used in a strengths, weaknesses, opportunities and threats (SWOT) analysis. In this paper, the PESTEL framework was utilized to categorize hacktivism motivations for attack campaigns against certain companies, governments or industries. Our study is based on empirical evidence from thirty-three hacktivism attack campaigns at the manifesto level. The targets of these campaigns were then analysed and studied accordingly. As a result, we claim that connecting cyberattacks to motivations permits organizations to determine their external cyberattack risks, allowing them to perform more accurate risk modeling.
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
EXT="Balandin, Sergey"
Research output: Contribution to journal › Editorial › Scientific
In many applications in communication, data retrieval and processing, digital system design, and related areas, incompletely specified switching (Boolean or multiple-valued) functions are encountered. A particular class of highly incompletely specified functions are the so-called index generation functions, which, being defined on a small fraction of input combinations, often do not require all the variables to be represented. Reducing the variables of index generation functions is an important task, since these functions are used mainly in real-time applications and the compactness of their representations influences the performance of the related systems. One approach to reducing the number of variables in index generation functions is linear transformations, meaning that the initial variables are replaced by their linear combinations. A drawback is that finding an optimal transformation can be difficult. Therefore, in this paper, we first formulate the problem of finding a good linear transformation in terms of linear subspaces. This formulation serves as a basis for proposing non-linear (polynomial) transformations to reduce the number of variables in index generation functions.
EXT="Stanković, Radomir"
Research output: Contribution to journal › Article › Scientific › peer-review
Big data is said to provide many benefits. However, as data originates from multiple sources of varying quality, big data is not easy to use. Representational quality refers to the concise and consistent representation of data that allows ease of understanding and interpretability. In this paper, we investigate the challenges in creating representational quality of big data. Two case studies are examined to understand the challenges emerging from big data. Our findings suggest that the veracity and velocity of big data make interpretation more difficult. Our findings also suggest that decisions are made ad hoc and that decision-makers are often not able to understand the ins and outs. Sense-making is one of the main challenges in big data. Taking a naturalistic decision-making view can help in better understanding the challenges of big data processing, interpretation and use in decision-making. We recommend that big data research should focus more on easy interpretation of the data.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Asymmetric video coding is a well-studied area for bit rate reduction in stereoscopic video coding. Such a video coding technique is possible because of the binocular fusion theory, which states that the Human Visual System (HVS) is capable of fusing the views from both eyes. As a result, past literature has shown that the final perceived quality of left and right views of different quality is closer to the higher quality of the two views. In this paper, we investigate spatially asymmetric omnidirectional video in subjective experiments using a Head Mounted Display (HMD). We want to subjectively verify to what extent the binocular fusion theory applies in immersive media environments, and also assess to what degree reducing the omnidirectional video streaming bandwidth is feasible. We show that (1) the HVS is capable of partial suppression of the low-quality view up to a certain resolution; (2) there is a bandwidth saving of 25% when 75% of the spatial resolution is used for one of the views, while ensuring a subjective visual quality with a DMOS of 4.7 points; and (3) in the case of bandwidth adaptation using asymmetric video, bit rate savings are in the range of 25–50%.
EXT="Curcio, Igor D.D."
EXT="Zare, Alireza"
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this paper, we describe some highlights of the new branch of Quantitative Graph Theory and explain its significantly different features compared to classical graph theory. The main goal of quantitative graph theory is the structural quantification of information contained in complex networks by employing a measurement approach based on numerical invariants and comparisons. Furthermore, neither the methods nor the networks need to be deterministic; they can be statistical. As such, this complements the field of classical graph theory, which is descriptive and deterministic in nature. We provide examples of how quantitative graph theory can be used for novel applications in the context of the overarching concept of network science.
Research output: Contribution to journal › Article › Scientific › peer-review
In this paper, we examine the zeros of permanental polynomials as highly unique network descriptors. We employ exhaustively generated networks and demonstrate that our graph measures, defined on the moduli of the zeros of permanental polynomials, are quite efficient at distinguishing graphs structurally. With this work, we continue a line of research relating to the search for almost complete graph invariants. These highly unique network measures may serve as a powerful tool for tackling graph isomorphism.
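A small self-contained sketch of such a descriptor: compute the permanental polynomial per(xI − A) of a graph by evaluating a Ryser-formula permanent at sample points, interpolate the coefficients, and take the moduli of its zeros. This is a generic illustration; the exact measures used in the paper may be defined differently.

```python
from itertools import combinations
import numpy as np

def permanent(M):
    # Ryser's inclusion-exclusion formula; fine for small matrices.
    n = len(M)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1.0
            for i in range(n):
                prod *= sum(M[i][j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

def permanental_zeros(A):
    # Zeros of per(xI - A), via evaluation at n+1 points + interpolation.
    n = len(A)
    xs = np.linspace(-n, n, n + 1)
    ys = [permanent(x * np.eye(n) - A) for x in xs]
    coeffs = np.polyfit(xs, ys, n)
    return np.roots(coeffs)

P3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path graph
print(sorted(abs(z) for z in permanental_zeros(P3)))  # moduli as descriptor
```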
Research output: Contribution to journal › Article › Scientific › peer-review
Research output: Contribution to journal › Editorial › Scientific
This paper presents an integrated self-aware computing model that mitigates the power dissipation of a heterogeneous reconfigurable multicore architecture by dynamically scaling the operating frequency of each core. The power mitigation is achieved by equalizing the performance of all the cores for an uninterrupted exchange of data. The multicore platform consists of heterogeneous Coarse-Grained Reconfigurable Arrays (CGRAs) of application-specific sizes and a Reduced Instruction-Set Computing (RISC) core. The CGRAs and the RISC core are integrated with each other over a Network-on-Chip (NoC) of six nodes arranged in a topology of two rows and three columns. The RISC core constantly monitors and controls the performance of each CGRA accelerator by adjusting the operating frequencies until the performance of all the CGRAs is optimally balanced over the platform. The CGRA cores on the platform process some of the most computationally intensive signal processing algorithms, while the RISC core establishes packet-based synchronization between the cores for computation and communication. All the cores can access each other's computational and memory resources while processing the kernels simultaneously and independently of each other. Besides general-purpose processing and overall platform supervision, the RISC processor manages the performance equalization among all the cores, which mitigates the overall dynamic power dissipation by 20.7% in a proof-of-concept test.
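One supervision step of such a performance-equalization loop might look as follows; the proportional-control rule, the frequency limits, and the throughput numbers are illustrative assumptions, not the platform's actual controller.

```python
def equalize_frequencies(throughputs, freqs, f_min=50e6, f_max=600e6,
                         gain=0.5):
    # Scale each accelerator's clock toward the slowest core's pace so
    # that no core burns dynamic power racing ahead of its consumers.
    target = min(throughputs)
    new_freqs = []
    for tput, f in zip(throughputs, freqs):
        error = (target - tput) / tput       # relative overshoot vs. target
        f_new = f * (1.0 + gain * error)     # proportional correction
        new_freqs.append(max(f_min, min(f_max, f_new)))
    return new_freqs

# three accelerators, the first two running ahead of the third
print(equalize_frequencies([120e6, 100e6, 80e6], [400e6, 350e6, 300e6]))
```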
Research output: Contribution to journal › Article › Scientific › peer-review
Passive stereoscopic displays create the illusion of three dimensions by employing orthogonal polarizing filters and projecting two images onto the same screen. In this article, a coding scheme targeting depth-enhanced stereoscopic video coding for polarized displays is introduced. We propose to use asymmetric row-interleaved sampling for texture and depth views prior to encoding. The performance of the proposed scheme is compared with several other schemes, and the objective results confirm the superior performance of the proposed method. Furthermore, subjective evaluation proves that no quality degradation is introduced by the proposed coding scheme compared to the reference method.
Research output: Contribution to journal › Article › Scientific › peer-review
Partial order reduction covers a range of techniques based on eliminating unnecessary transitions when generating a state space. Abstractions, on the other hand, replace sets of states of a system with abstract representatives in order to create a smaller state space. This article explores how stubborn sets and abstraction can be combined. We give examples to build intuition and expand on some recent results. We provide a classification of abstractions and give some novel results on what is needed to combine abstraction and partial order reduction in a sound way.
Research output: Chapter in Book/Report/Conference proceeding › Chapter › Scientific
Six stubborn set methods for computing reduced labelled transition systems are presented. Two of them preserve the traces, and one is tailored for on-the-fly verification of safety properties. The rest preserve the tree failures, fair testing equivalence, or the divergence traces. Two methods are entirely new, the ideas of three are recent and the adaptation to the process-algebraic setting with non-deterministic actions is new, and one is recent but slightly generalized. Most of the methods address problems in earlier solutions to the so-called ignoring problem. The correctness of each method is proven, and efficient implementation is discussed.
Research output: Chapter in Book/Report/Conference proceeding › Chapter › Scientific
To enhance the desirability of public transportation, it is important to design for a positive travel experience. The context of bus transportation has broad potential for the utilization of novel, supplementary digital services beyond travel information. The aim of our research was to study bus passengers' needs and expectations for future digital services and to develop initial service concept ideas through co-design. To this end, three idea-generation workshops with 24 participants were arranged. Our findings reveal six service themes that can be used as a basis for designing future digital traveling services: (1) Information at a glance while traveling, (2) Entertainment and entertaining activities, (3) Services that support social interaction, (4) Multiple channels to provide travel information, (5) Extra services for a better travel experience, and (6) Services that people already expect to have. The themes are discussed and further elaborated in this paper.
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Designing applications for scalability is key to improving their performance in hybrid and cluster computing. Scheduling code to utilize parallelism is difficult, particularly when dealing with data dependencies, memory management, data motion, and processor occupancy. The Hybrid Task Graph Scheduler (HTGS) is an abstract execution model, framework, and API that increases programmer productivity when implementing hybrid workflows for multi-core and multi-GPU systems. HTGS manages dependencies between tasks, represents CPU and GPU memories independently, overlaps computations with disk I/O and memory transfers, keeps multiple GPUs occupied, and uses all available compute resources. Through these abstractions, data motion and memory are explicit; this makes data locality decisions more accessible. To demonstrate the HTGS application program interface (API), we present implementations of two example algorithms: (1) a matrix multiplication that shows how easily task graphs can be used, and (2) a hybrid implementation of microscopy image stitching that reduces code size by ≈43% compared to a manually coded hybrid workflow implementation and showcases the minimal overhead of task graphs in HTGS. Both HTGS-based implementations show good performance. In image stitching, the HTGS implementation achieves performance similar to the hybrid workflow implementation. Matrix multiplication with HTGS achieves 1.3x and 1.8x speedup over the multi-threaded OpenBLAS library for 16k × 16k and 32k × 32k matrices, respectively.
Research output: Contribution to journal › Article › Scientific › peer-review
The field of Automatic Speech Recognition (ASR) has improved substantially in recent years. We are at a point never seen before, where such algorithms can be applied in non-ideal conditions such as real classrooms. In these scenarios it is still not possible to reach perfect recognition rates; however, we can already take advantage of these improvements. This paper shows preliminary results of using ASR in Chilean and Finnish middle and high schools to automatically provide teachers with a visualization of the structure of concepts present in their discourse in science classrooms. These visualizations are conceptual networks that relate the key concepts used by the teacher. This is an interesting tool that gives teachers feedback about their pedagogical practice in class. Initial comparisons show great similarity between conceptual networks generated manually and those generated automatically.
jufoid=62555
EXT="Mansikkaniemi, André "
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Current globalization already faces the challenge of meeting the continuously growing demand for new consumer goods while simultaneously ensuring a sustainable evolution of human existence. Industrial value creation must be geared towards sustainability. To overcome this challenge, tightly coupling production and its automation processes is required in the paradigm of Industry 4.0. This technology bridges together a vast number of new interconnected smart devices, which are mostly battery powered. Batteries are the heart of industrial motive power and of the electric energy storage solutions in today's infrastructures. The charges related to the batteries are among the biggest costs (2,000–5,000 EUR per unit). Unfortunately, the batteries are not always treated properly, and badly managed ones quickly lose their ability to store energy. In this work, we present a modular cloud solution, delivered as Software as a Service (SaaS), to monitor and manage industrial power unit systems. The modular approach is realized using simple miniature non-intrusive wireless sensors combined with a cloud platform that provides the battery intelligence.
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Aiming to build a uniform theory of mobility-dependent characterization of wireless communications systems, in this paper we address the time-dependent analysis of the signal-to-interference ratio (SIR) in a device-to-device (D2D) communications scenario. We first introduce a general kinetic-based mobility model capable of representing the movement of users with a wide range of mobility characteristics, including conventional, fractal and even non-stationary ones. We then derive the time-dependent evolution of the mean, variance and coefficient of variation of the SIR metric. We demonstrate that under non-stationary mobility behavior of the communicating entities the SIR may, surprisingly, exhibit stationary behavior.
INT=elt,"Samouylov, Andrey"
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Full use of the parallel computation capabilities of present and expected CPUs and GPUs requires the use of vector extensions. Yet many actors in dataflow systems for digital signal processing have internal state (or, equivalently, an edge that loops from the actor back to itself), which imposes serial dependencies between actor invocations that make vectorizing across actor invocations impossible. Ideally, issues of inter-thread coordination required by serial data dependencies should be handled by code written by parallel programming experts that is separate from the code specifying the signal processing operations. The purpose of this paper is to present one approach for doing so in the case of actors that maintain state. We propose a methodology for using the parallel scan (also known as prefix sum) pattern to create algorithms for multiple simultaneous invocations of such an actor that result in vectorizable code. Two examples of applying this methodology are given: (1) infinite impulse response (IIR) filters and (2) finite state machines (FSMs). The correctness and performance of the resulting IIR filters and of one class of FSMs are studied.
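The key observation for the IIR case is that the per-sample update y[n] = a·y[n−1] + x[n] is an affine map of the previous output, and composition of affine maps is associative, so the whole recurrence can be evaluated with a parallel scan. The sketch below shows the associative operator, with a sequential inclusive scan standing in for the vectorized scan (e.g. a Blelloch scan) that the paper targets.

```python
from math import isclose

def combine(g, f):
    # Composition f(g(y)) of affine maps y -> a*y + b; associative, so it
    # can be evaluated with any parallel scan instead of this loop.
    a2, b2 = g
    a1, b1 = f
    return (a1 * a2, a1 * b2 + b1)

def iir_first_order(a, x, y0=0.0):
    # y[n] = a*y[n-1] + x[n] via an inclusive scan over the maps (a, x[n]).
    maps, acc = [], (1.0, 0.0)
    for xn in x:
        acc = combine(acc, (a, xn))   # inclusive scan step
        maps.append(acc)
    return [am * y0 + bm for am, bm in maps]

# the direct recursion agrees with the scan formulation
a, x = 0.9, [1.0, 0.5, -0.25, 2.0]
y_ref, y_prev = [], 0.0
for xn in x:
    y_prev = a * y_prev + xn
    y_ref.append(y_prev)
assert all(isclose(u, v) for u, v in zip(iir_first_order(a, x), y_ref))
```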
Research output: Contribution to journal › Article › Scientific › peer-review
Dataflow programming has received increasing attention in the age of multicore and heterogeneous computing. Modular and concurrent dataflow program descriptions enable highly automated approaches for design space exploration, optimization and deployment of applications. A great advance in dataflow programming has been the recent introduction of the RVC-CAL language. Having been standardized by the ISO, the RVC-CAL dataflow language provides a solid basis for the development of tools, design methodologies and design flows. This paper proposes a novel design flow for mapping RVC-CAL dataflow programs to parallel and heterogeneous execution platforms. Through the proposed design flow the programmer can describe an application in the RVC-CAL language and map it to multi- and many-core platforms, as well as GPUs, for efficient execution. The functionality and efficiency of the proposed approach is demonstrated by a parallel implementation of a video processing application and a run-time reconfigurable filter for telecommunications. Experiments are performed on GPU and multicore platforms with up to 16 cores, and the results show that for high-performance applications the proposed design flow provides up to 4 × higher throughput than the state-of-the-art approach in multicore execution of RVC-CAL programs.
Research output: Contribution to journal › Article › Scientific › peer-review
Anganwadi workers [1] form the core of the healthcare system for a large section of the rural and semi-urban population in India. They provide care for newborn babies and play an important role in immunization programs, besides providing health-related information to pregnant women. Traditionally, these Anganwadi workers use paper-based information leaflets as a part of their job to spread awareness among the people. Although mobile phones have made inroads into the day-to-day life of these workers for basic communication (making a call), it is yet to be seen how a mobile device is used as a technological aid for their work. There are enormous challenges in addressing these issues, especially in developing regions, owing to numerous reasons such as illiteracy, cognitive difficulties, cultural norms, collaborations, experience and exposure, motivation, power relations, and social standing [2]. The purpose of this field visit is to enquire into the role of mobile devices in their day-to-day work and, if they are being used as a technological intervention, in what manner and form. The methodology used to conduct the study involves contextual enquiry, open-ended interviews, and observing the Anganwadi workers using ICT solutions and other informational artefacts.
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Efficient sample rate conversion is of widespread importance in modern communication and signal processing systems. Although many efficient kinds of polyphase filterbank structures exist for this purpose, they are mainly geared toward serial, custom, dedicated hardware implementation for a single task. There is, therefore, a need for more flexible sample rate conversion systems that are resource-efficient, and provide high performance. To address these challenges, we present in this paper an all-software-based, fully parallel, multirate resampling method based on graphics processing units (GPUs). The proposed approach is well-suited for wireless communication systems that have simultaneous requirements on high throughput and low latency. Utilizing the multidimensional architecture of GPUs, our design allows efficient parallel processing across multiple channels and frequency bands at baseband. The resulting architecture provides flexible sample rate conversion that is designed to address modern communication requirements, including real-time processing of multiple carriers simultaneously.
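The core polyphase idea, independent of the GPU mapping, can be sketched in a few lines of NumPy: split the filter into M phases, filter the correspondingly delayed input substreams, and sum, which reproduces filtering followed by decimation at 1/M of the work. This is a generic textbook sketch, not the paper's GPU implementation.

```python
import numpy as np

def polyphase_decimate(x, h, M):
    # Decimation by M via polyphase filtering: y[n] = sum_k h[k] x[nM - k].
    n_out = (len(x) + len(h) - 1 + M - 1) // M
    y = np.zeros(n_out)
    for p in range(M):
        hp = h[p::M]                               # phase-p subfilter
        if p == 0:
            xp = x[0::M]                           # x_p[i] = x[iM - p]
        else:
            xp = np.concatenate(([0.0], x[M - p::M]))
        yp = np.convolve(hp, xp)
        m = min(n_out, len(yp))
        y[:m] += yp[:m]
    return y

# sanity check against direct filtering followed by downsampling
x, h, M = np.random.randn(64), np.random.randn(9), 4
assert np.allclose(polyphase_decimate(x, h, M), np.convolve(x, h)[::M])
```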
Research output: Contribution to journal › Article › Scientific › peer-review
jufoid=62555
EXT="Balandin, Sergey"
Research output: Book/Report › Anthology › Scientific › peer-review
In cryo-electron microscopy (cryo-EM), the Wiener filter is the optimal operation, in the least-squares sense, for merging a set of aligned low signal-to-noise ratio (SNR) micrographs to obtain a class average image with higher SNR. However, the condition for the optimal behavior of the Wiener filter is that the signal of interest exhibits stationary characteristics throughout, which cannot always be satisfied. In this paper, we propose substituting the conventional Wiener filter, which encompasses the whole image for denoising, with a local adaptive implementation, which denoises the signal locally. We compare our proposed local adaptive Wiener filter (LA-Wiener filter) with the conventional class averaging method using a simulated dataset and an experimental cryo-EM dataset. The visual and numerical analyses of the results indicate that the LA-Wiener filter is superior to the conventional approach in single particle reconstruction (SPR) applications.
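A generic sketch of a locally adaptive Wiener filter (the classic local mean/variance shrinkage, as in e.g. scipy.signal.wiener) conveys the idea; the authors' cryo-EM pipeline and noise estimation may differ from this.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_adaptive_wiener(img, noise_var, win=7):
    # Estimate mean and variance in a sliding window and shrink the pixel
    # toward the local mean where the local variance is near the noise floor.
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    var = np.maximum(mean_sq - mean * mean, 1e-12)
    gain = np.maximum(var - noise_var, 0.0) / var
    return mean + gain * (img - mean)

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + rng.normal(0.0, 0.5, clean.shape)
denoised = local_adaptive_wiener(noisy, noise_var=0.25)
print(float(np.mean((denoised - clean) ** 2)))  # lower MSE than the noisy input
```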
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In recent years, deep learning has become one of the most representative and effective techniques in face recognition. Due to the high expense of labelling data, it is costly to collect a large-scale face dataset with accurate label information. For tasks without sufficient data, deep models cannot be trained well. Generally, the parameters of deep models are initialized with a pre-trained model and then fine-tuned on a small dataset for the specific task. However, with straightforward fine-tuning, the final model usually does not generalize well. In this paper, we propose a multi-task deep learning (MTDL) method for face recognition. The superiority of the proposed multi-task method is demonstrated by experiments on LFW and CCFD.
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
A total domatic k-partition of a graph is a partition of its vertex set into k subsets such that each subset intersects the open neighborhood of each vertex. The maximum k for which a total domatic k-partition exists is known as the total domatic number of a graph G, denoted by d_t(G). We extend considerably the known hardness results by showing it is NP-complete to decide whether d_t(G) ≥ 3, where G is a bipartite planar graph of bounded maximum degree. Similarly, for every k ≥ 3, it is NP-complete to decide whether d_t(G) ≥ k, where G is a split graph or k-regular. In particular, these results complement recent combinatorial results regarding d_t(G) on some of these graph classes by showing that the known results are, in a sense, best possible. Finally, for general n-vertex graphs, we show the problem is solvable in 2^n n^{O(1)} time, and derive even faster algorithms for special graph classes.
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Digital predistortion (DPD) is a widely adopted baseband processing technique in current radio transmitters. While DPD can effectively suppress unwanted spurious spectrum emissions stemming from imperfections of analog RF and baseband electronics, it also introduces extra processing complexity and poses challenges for efficient and flexible implementations, especially for mobile cellular transmitters, considering their limited computing power compared to base stations. In this paper, we present high-data-rate implementations of broadband DPD on modern embedded processors, such as mobile GPUs and multicore CPUs, taking advantage of emerging parallel computing techniques to exploit their computing resources. We further verify the suppression effect of DPD experimentally on real radio hardware platforms. Performance evaluation results of our DPD design demonstrate the high efficacy of modern general-purpose mobile processors in accelerating DPD processing for a mobile transmitter.
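A minimal sketch of memoryless polynomial DPD with indirect learning shows the computational core that such implementations parallelize: a least-squares fit over odd-order basis functions. The toy power amplifier model and the polynomial order below are assumptions for illustration.

```python
import numpy as np

def fit_polynomial_dpd(pa_in, pa_out, order=5):
    # Indirect learning: fit c so that sum_k c_k * y * |y|^(2k)
    # approximates the PA input x; the fitted map is then used as the
    # predistorter.
    cols = [pa_out * np.abs(pa_out) ** (2 * k)
            for k in range((order + 1) // 2)]
    basis = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(basis, pa_in, rcond=None)
    return coeffs

def predistort(x, coeffs):
    return sum(c * x * np.abs(x) ** (2 * k) for k, c in enumerate(coeffs))

def pa(s):                                   # mildly compressive toy PA model
    return s - 0.05 * s * np.abs(s) ** 2

rng = np.random.default_rng(1)
x = (rng.normal(size=4096) + 1j * rng.normal(size=4096)) / np.sqrt(2)
coeffs = fit_polynomial_dpd(x, pa(x))
err_before = np.mean(np.abs(pa(x) - x) ** 2)
err_after = np.mean(np.abs(pa(predistort(x, coeffs)) - x) ** 2)
print(err_before, ">", err_after)            # distortion should shrink
```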
Research output: Contribution to journal › Article › Scientific › peer-review
Government and NGO schools catering to children from low-income urban environments are increasingly introducing educational technology; we study its use with their children through semi-structured interviews. This is an extension of our ongoing work in designing sustainable educational technology models for low-literate urban populations.
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper presents a convolutional neural network based approach for estimating the relative pose between two cameras. The proposed network takes RGB images from both cameras as input and directly produces the relative rotation and translation as output. The system is trained in an end-to-end manner utilising transfer learning from a large scale classification dataset. The introduced approach is compared with widely used local feature based methods (SURF, ORB) and the results indicate a clear improvement over the baseline. In addition, a variant of the proposed architecture containing a spatial pyramid pooling (SPP) layer is evaluated and shown to further improve the performance.
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In the last few years, the rapid development of deep learning methods has boosted the performance of face recognition systems. However, face recognition still suffers from diverse variations of face images, especially in the face identification problem. The high expense of labelling data makes it hard to obtain massive face data with accurate identification information. In real-world applications, the collected data are mixed with severe label noise, which significantly degrades the generalization ability of deep learning models. In this paper, to alleviate the impact of the label noise, we propose a robust deep face recognition (RDFR) method based on automatic outlier removal. The noisy faces are automatically recognized and removed, which can boost the performance of the learned deep models. Experiments on the large-scale face datasets LFW, CCFD, and COX show that RDFR can effectively remove the label noise and improve face recognition performance.
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this paper, we validate and extend previous findings on using emotional design in online learning materials by means of a randomized controlled trial in the context of a partially online university-level programming course. For students who had not mastered the content beforehand, our results echo previous observations: emotional design material was not perceived more favourably, while the materials' perceived quality was correlated with learning outcomes. Emotionally designed material led to better learning outcomes per unit of time, but it did not affect students' navigation in the material.
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This study focuses on the differences between stubborn sets and other partial order methods. First, a major problem with step graphs is pointed out with an example. Then, the deadlock-preserving stubborn set method is compared to the deadlock-preserving ample set and persistent set methods. Next, conditions are discussed whose purpose is to ensure that the reduced state space preserves the ordering of visible transitions, that is, transitions that may change the truth values of the propositions from which the formula under verification has been built. Finally, solutions to the ignoring problem are analysed both when the purpose is to preserve only safety properties and when liveness properties are also of interest.
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Most ample, persistent, and stubborn set methods use some special condition for ensuring that the analysis is not terminated prematurely. In the case of stubborn set methods for safety properties, implementation of the condition is usually based on recognizing the terminal strong components of the reduced state space and, if necessary, expanding the stubborn sets used in their roots. An earlier study pointed out that if the system may execute a cycle consisting of only invisible actions, and that cycle is concurrent with the rest of the system in a non-obvious way, then the method may be fooled into constructing all states of the full parallel composition. This problem is solved in this study by a method based on “freezing” the actions in the cycle.
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The SSH client public key authentication method is one of the most widely used client authentication methods. Despite its popularity, the precise protocol is not very well known, and even advanced users may have misconceptions about its functionality. We describe the SSH public key authentication protocol and identify potential weak points for client privacy. We further review parts of the OpenSSH implementation of the protocol and identify possible timing attack information leaks. To evaluate the severity of these leaks, we built a modified SSH library that can be used to query the authentication method with arbitrary public key blobs and measure the response time. We then use the resulting query timing differences to enumerate valid users and their key types. Furthermore, to advance the knowledge on remote timing attacks, we study the timing signal exploitability over a Tor Hidden Service (HS) connection and present filtering methods that make the attack twice as effective in the HS setting.
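As an illustration of the measurement side (a hypothetical sketch; `probe` stands in for whatever sends an authentication request with a crafted key blob, which is not shown here), taking the median of repeated timings is a simple way to filter network jitter:

```python
import time
import statistics

def median_probe_time(probe, key_blob, n=200):
    # Repeat the probe and keep the median latency; medians suppress the
    # heavy-tailed jitter that masks small server-side timing differences.
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        probe(key_blob)  # hypothetical auth query against the server
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)
```

A valid user/key pair is then suggested by a consistently different median response time than random key blobs produce.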
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Shopping malls are characterized by a high density of users. The use of direct device-to-device (D2D) communications may significantly mitigate the load imposed on cellular systems in such environments. In addition to high user densities, the communicating entities are inherently mobile, with very specific attractor-based mobility patterns. In this paper, we propose a model for characterizing the time-dependent signal-to-interference ratio (SIR) in shopping malls. In particular, we use the fractional Fokker-Planck equation for modeling the non-linear functional of the average SIR value, defined on a stochastic fractal trajectory. The evolution equation of the average SIR is derived in terms of fractal motion of the tagged receiver and the interfering devices. We illustrate the use of our model by showing that the behavior of the SIR generally varies across different types of fractals.
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Tor offers a censorship-resistant and distributed platform that can provide easy-to-implement anonymity to web users, websites, and other web services. Tor enables web servers to hide their location, and Tor users can connect to these authenticated hidden services while the server and the user both stay anonymous. However, throughout the years of Tor’s existence, some users have lost their anonymity. This paper discusses the technical limitations of anonymity and the operational security challenges that Tor users will encounter. We present a hands-on demonstration of anonymity exposures that leverage traffic correlation attacks, electronic fingerprinting, operational security failures, and remote code execution. Based on published research and our experience with these methods, we will discuss what they are and how some of them can be exploited. Also, open problems, solutions, and future plans are discussed.
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
jufoid=62555
EXT="Ometov, Aleksandr"
Research output: Book/Report › Anthology › Scientific › peer-review
The idea of interface diversification is that internal interfaces in the system are transformed into unique secret instances. On one hand, the trusted programs in the system are modified accordingly so that they can use the diversified interfaces. On the other hand, malicious code injected into the system does not know the diversification secret, that is, the language of the diversified system, and is thus rendered useless. Based on our study of 500 exploits, this paper surveys the different interfaces that are targeted in malware attacks and can potentially be diversified in order to prevent the malware from reaching its goals. In this study, we also explore which of the identified interfaces have already been covered in existing diversification research and which interfaces should be considered in future research. Moreover, we discuss the benefits and drawbacks of diversifying these interfaces. We conclude that diversification of various internal interfaces could prevent or mitigate roughly 80% of the analyzed exploits. Most interfaces we found have already been diversified as proof-of-concept implementations, but diversification is not widely used in practical systems.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Over the last decade aural and visual monitoring of massive people gatherings has become a critical problem of national security. Whenever possible a fixed infrastructure is used for this purpose. However, in case of spontaneous gatherings the infrastructure may not be available. In this paper, we propose the system for spontaneous “flash crowd” monitoring in areas with no fixed infrastructure. The basic concept is to engage users with their mobile devices to participate in the monitoring process. The system takes on characteristics of “big data” generators. We analyze the proposed system for coverage metrics and estimate the rate imposed on the wireless network. Our results show that given a certain level of participation the LTE network can support aural monitoring with prescribed guarantees. However, the modern LTE system cannot fully support visual monitoring as much more capacity is required. This capacity may potentially be provided by forthcoming millimeter wave and terahertz communications systems.
INT=elt,"Nguyen, An"
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
A lot of research attention has recently been dedicated to multi-agent systems, such as autonomous robots that demonstrate proactive and dynamic problem-solving behavior. Over the recent decades, there has been enormous development in various agent technologies, which has enabled efficient provisioning of useful and convenient services across a multitude of fields. Many of these services require that information security is guaranteed reliably; without such guarantees, they may face significant deployment issues. In this paper, a novel trust management framework for multi-agent systems is developed that focuses on access control and node reputation management. It is further analyzed under a compromised device attack, which proves its suitability for practical utilization.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The need for efficient resource utilization at the air interfaces in heterogeneous wireless systems has recently led to the concept of downlink and uplink decoupling (DUDe). Several studies have already reported the gains of using DUDe in static traffic conditions. In this paper we investigate the performance of DUDe with stochastic session arrival patterns in an LTE environment with macro and micro base stations. In particular, we use a queuing system with random resource requirements to calculate the session blocking probability and the throughput of the system. Our results demonstrate that the DUDe association approach significantly improves the metrics of interest compared to the conventional downlink-based association mechanism.
jufoid=62555
EXT="Kovalchukov, Roman"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
General purpose networks-on-chip (GP-NoC) are expected to feature tens or even hundreds of computational elements with a complex communications infrastructure binding them into a connected network to achieve memory synchronization. The experience accumulated over the years in network design suggests that knowledge of the traffic nature is mandatory for successful design of a networking technology. In this paper, based on the Intel CPU family, we describe traffic estimation techniques for modern multi-core GP-CPUs, discuss the traffic modeling procedure, and highlight the implications of the traffic structure for GP-NoC research. The most important observation is that the traffic at internal interfaces appears random to an external observer and has a clearly identifiable batch structure.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Cellular network assistance over unlicensed spectrum technologies is a promising approach to improve the average system throughput and achieve a better trade-off between latency and energy efficiency in Wireless Local Area Networks (WLANs). However, the extent of ultimate user gains under network-assisted WLAN operation has not been explored sufficiently. In this paper, an analytical model for user-centric performance evaluation in such a system is presented. The model captures the throughput, energy efficiency, and access delay assuming aggressive WLAN channel utilization. In the second part of the paper, our formulations are validated with system-level simulations. Finally, cases of possible unfair spectrum use are also discussed.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Research output: Contribution to journal › Editorial › Scientific
Multirate filter banks can be implemented efficiently using fast-convolution (FC) processing. The main advantage of the FC filter banks (FC-FB) compared with the conventional polyphase implementations is their increased flexibility, that is, the number of channels, their bandwidths, and the center frequencies can be independently selected. In this paper, an approach to optimize the FC-FBs is proposed. First, a subband representation of the FC-FB is derived. Then, the optimization problems are formulated with the aid of the subband model. Finally, these problems are conveniently solved with the aid of a general nonlinear optimization algorithm. Several examples are included to demonstrate the proposed overall design scheme as well as to illustrate the efficiency and the flexibility of the resulting FC-FB.
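The core of FC processing is block-wise filtering in the FFT domain. A minimal single-channel sketch using the overlap-save scheme (illustrative only; the FC-FB of the paper additionally applies per-subband frequency-domain windows and short inverse transforms) is shown below:

```python
import numpy as np

def overlap_save(x, h, nfft=1024):
    # Fast-convolution filtering: process the signal in overlapping FFT
    # blocks and multiply by the filter's frequency response.
    M = len(h)                       # requires M - 1 < nfft
    step = nfft - (M - 1)            # new samples consumed per block
    H = np.fft.rfft(h, nfft)
    nout = len(x) + M - 1
    y = np.zeros(nout)
    xp = np.concatenate([np.zeros(M - 1), x, np.zeros(nfft)])
    for start in range(0, nout, step):
        block = np.fft.irfft(np.fft.rfft(xp[start:start + nfft]) * H, nfft)
        take = min(step, nout - start)
        # Only the last step samples of each block are free of circular
        # wrap-around and are kept.
        y[start:start + take] = block[M - 1:M - 1 + take]
    return y
```

The flexibility of FC-FBs stems from the fact that the per-channel frequency responses are applied directly in this FFT domain, so bandwidths and center frequencies can be chosen independently.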
Research output: Contribution to journal › Article › Scientific › peer-review
In this paper we survey methods for performing comparative graph analysis and explain the history, foundations, and differences of the techniques developed over the last 50 years. While surveying these methods, we introduce a novel classification scheme by distinguishing between methods for deterministic and random graphs. We believe that this scheme is useful for a better understanding of the methods and their challenges and, finally, for applying the methods efficiently in an interdisciplinary data science setting to solve a particular problem involving comparative network analysis.
Research output: Contribution to journal › Article › Scientific › peer-review
This publication addresses two bottlenecks in the construction of minimal coverability sets of Petri nets: the detection of situations where the marking of a place can be converted to ω, and the manipulation of the set A of maximal ω-markings that have been found so far. For the former, a technique is presented that consumes very little time in addition to what maintaining A consumes. It is based on Tarjan's algorithm for detecting maximal strongly connected components of a directed graph. For the latter, a data structure is introduced that resembles BDDs and Covering Sharing Trees, but has additional heuristics designed for the present use. Results from a few experiments are shown. They demonstrate significant savings in running time and varying savings in memory consumption compared to an earlier state-of-the-art technique.
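For background, the basic step that converts a growing place to ω can be sketched as follows (this shows only the classic acceleration idea; the paper's contribution is detecting such situations cheaply with Tarjan's algorithm and handling the set A with a BDD-like structure, neither of which is reproduced here):

```python
OMEGA = float('inf')

def accelerate(marking, ancestors):
    # If the new marking dominates an earlier marking on the current
    # path, pump the strictly grown places to ω.
    m = list(marking)
    for old in ancestors:
        if all(a <= b for a, b in zip(old, m)) and list(old) != m:
            m = [OMEGA if b > a else b for a, b in zip(old, m)]
    return tuple(m)

# Example: accelerate((1, 2), [(1, 1)]) == (1, OMEGA), because the
# second place can be pumped arbitrarily high by repeating the path.
```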
Research output: Contribution to journal › Article › Scientific › peer-review
Properties of porous materials, abundant both in nature and industry, have broad influences on societies via, e.g. oil recovery, erosion, and propagation of pollutants. The internal structure of many porous materials involves multiple scales which hinders research on the relation between structure and transport properties: typically laboratory experiments cannot distinguish contributions from individual scales while computer simulations cannot capture multiple scales due to limited capabilities. Thus the question arises how large domain sizes can in fact be simulated with modern computers. This question is here addressed using a realistic test case; it is demonstrated that current computing capabilities allow the direct pore-scale simulation of fluid flow in porous materials using system sizes far beyond what has been previously reported. The achieved system sizes allow the closing of some particular scale gaps in, e.g. soil and petroleum rock research. Specifically, a full steady-state fluid flow simulation in a porous material, represented with an unprecedented resolution for the given sample size, is reported: the simulation is executed on a CPU-based supercomputer and the 3D geometry involves 16,384³ lattice cells (around 590 billion of them are pore sites). Using half of this sample in a benchmark simulation on a GPU-based system, a sustained computational performance of 1.77 PFLOPS is observed. These advances expose new opportunities in porous materials research. The implementation techniques here utilized are standard except for the tailored high-performance data layouts as well as the indirect addressing scheme with a low memory overhead and the truly asynchronous data communication scheme in the case of CPU and GPU code versions, respectively.
INT=fys,"Mattila, Keijo"
Research output: Contribution to journal › Article › Scientific › peer-review
Software developers use software products to design and develop new software products for others to use. Research has introduced the concept of developer experience, inspired by the concept of user experience but appreciating the special characteristics of the software development context. It is unclear what the experiential components of developer experience are and how it can be measured. In this paper we address the developer experience of Vaadin Designer, a graphical user interface design tool, in terms of user experience, intrinsic motivation, and flow state experience. We surveyed 18 developers using AttrakDiff, the flow state scale, the intrinsic motivation inventory, and our own DEXI scale, and compared those responses to the developers' overall user experience assessment using the Mann-Whitney U test. We found significant differences in motivational and flow state factors between the groups who assessed the overall user experience as either bad or good. Based on our results we discuss the factors that constitute developer experience.
JUFOID=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Today, museums are looking for new ways to attract and engage audiences. These include virtual exhibitions, augmented reality and 3D-modelling-based applications, and interactive digital storytelling. The target of all these activities is to provide better experiences for audiences that are very familiar with the digital world. In augmented reality (AR) and interactive digital storytelling (IDS) systems, visual presentation has been dominant. In contrast to this trend, we have chosen to concentrate on auditory presentation. A key element for this is a backend service supporting different client applications. This paper discusses our experiences from designing a portable open source based audio digital asset management system (ADAM), which supports interaction with smartphones and tablets running audio augmented reality and audio story applications. We have successfully implemented the ADAM system and evaluated it in the Museum of Technology in Helsinki, Finland.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Enterprise Architecture (EA) has been employed in the public sector to improve efficiency and interoperability of information systems. Despite their daily use in the public sector, the concepts of Enterprise Architecture and efficiency are ambiguous and lack commonly accepted definitions. The benefits and outcomes of using EA in the public sector have been studied with mixed results. This study examined the use of EA in the Finnish basic education system using critical discourse analysis (CDA). The research revealed how the role and rationale of EA is constructed in the speech of public sector officials. Three orders of discourse, each having its own views on EA, were found. While there were commonly accepted functions for EA, there were also areas where the concepts were not mutually understood or accepted.
JUFOID=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
More and more data is becoming available and being combined, which results in a need for data governance: the exercise of authority, control, and shared decision making over the management of data assets. Data governance provides organizations with the ability to ensure that data and information are managed appropriately, providing the right people with the right information at the right time. Despite its importance for achieving data quality, data governance has received scant attention from the scientific community. Research has focused on data governance structures, and only limited attention has been given to the underlying principles. This paper fills this gap and advances the knowledge base of data governance through a systematic review of the literature, deriving four principles for data governance that can be used by researchers to focus on important data governance issues, and by practitioners to develop an effective data governance strategy and approach.
JUFOID=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
[Context and motivation] In order to build successful software products and services, customer involvement and an understanding of customers’ requirements and behaviours during the development process are essential. [Question/Problem] Although continuous deployment is gaining attention in the software industry as an approach for continuously learning from customers, there is no common overview of the topic yet. [Principal ideas/results] To provide a common overview, we conduct a secondary study that explores the state of reported evidence on customer input during continuous deployment in software engineering, including the potential benefits, challenges, methods and tools of the field. [Contribution] We report on a systematic literature review covering 25 primary studies. Our analysis of these studies reveals that although customer involvement in continuous deployment is highly relevant in the software industry today, it has been relatively unexplored in academic research. The field is seen as beneficial, but there are a number of challenges related to it, such as misperceptions among customers. In addition to providing a comprehensive overview of the research field, we clarify the gaps in knowledge that need to be studied further.
JUFOID=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Partial-order methods alleviate state explosion by considering only a subset of transitions in each constructed state. The choice of the subset depends on the properties that the method promises to preserve. Many methods have been developed, ranging from deadlock-preserving to CTL*- and divergence-sensitive branching-bisimilarity-preserving ones. The less the method preserves, the smaller the state spaces it constructs. Fair testing equivalence unifies deadlocks with livelocks that cannot be exited, and ignores the other livelocks. It is the weakest congruence that preserves whether the ability to make progress can be lost. We prove that a method that was designed for trace equivalence also preserves fair testing equivalence. We describe a fast algorithm for computing high-quality subsets of transitions for the method, and demonstrate its effectiveness on a protocol with a connection and data transfer phase. This is the first practical partial-order method that deals with a practical fairness assumption.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Integral volume is an important image representation technique, which is useful in many computer vision applications. Processing integral volumes for large-scale 3D datasets is challenging due to high memory requirements. The difficulties lie in efficiently computing, storing, querying and updating the integral volume values. In this work, we address the above problems and present a novel solution for processing integral volumes for large-scale 3D datasets efficiently. We propose an octree-based method where the worst-case complexity for querying the integral volume of an arbitrary region is O(log n), where n is the number of nodes in the octree. We evaluate our proposed method on multiresolution LiDAR point cloud data. Our work can serve as a tool for fast feature extraction from large-scale 3D datasets, which can be beneficial for computer vision applications.
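For dense arrays, the underlying integral-volume idea looks as follows (a sketch of the standard technique on a plain array; the paper's octree representation for sparse, large-scale data is not reproduced here):

```python
import numpy as np

def integral_volume(vol):
    # Cumulative sums along each axis give the 3-D integral volume.
    return vol.cumsum(0).cumsum(1).cumsum(2)

def box_sum(iv, lo, hi):
    # Sum of vol[lo:hi] (half-open box) by 3-D inclusion-exclusion,
    # in O(1) time once the integral volume is available.
    x0, y0, z0 = lo
    x1, y1, z1 = hi
    def at(x, y, z):
        return iv[x - 1, y - 1, z - 1] if x and y and z else 0
    return (at(x1, y1, z1)
            - at(x0, y1, z1) - at(x1, y0, z1) - at(x1, y1, z0)
            + at(x0, y0, z1) + at(x0, y1, z0) + at(x1, y0, z0)
            - at(x0, y0, z0))
```

The octree method in the paper trades this O(1) query for O(log n) in exchange for a memory footprint that scales with the occupied, rather than total, volume.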
EXT="Babahajiani, Pouria"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Systems and services utilizing the Internet of Things can benefit significantly from dynamically updated software. In this paper we show how the most advanced variant of moving code, mobile agents, can be used for operating and managing Internet-connected systems composed of gadgets, sensors and actuators. We believe that the use of mobile agents brings several benefits: for example, mobile agents help to reduce the network load, overcome network latency, and encapsulate protocols. In addition, they can perform autonomous tasks that would otherwise require extensive configuration. The need for moving agents is even more significant if the applications and other elements of the overall experience should follow the user to new contexts. When multiple agents are used to provide the user with services, mechanisms to manage the agents are needed. In the context of the Internet of Things, such management should reflect the physical spaces and other relevant contexts. In this paper we describe the technical solutions used in our implementation of mobile agents, describe two proof-of-concept applications, and compare our solution to related work. We also describe our visions for future work.
Research output: Contribution to journal › Article › Scientific › peer-review
Wireless standards are evolving rapidly due to the exponential growth in the number of portable devices along with applications with high data rate requirements. Adaptable software-based signal processing implementations for these devices can make the deployment of the constantly evolving standards faster and less expensive. The flagship technology of the IEEE WLAN family, IEEE 802.11ac, aims at achieving very high throughputs in local area connectivity scenarios. This article presents a software-based implementation of multiple-input multiple-output (MIMO) transmitter and receiver baseband processing conforming to the IEEE 802.11ac standard, which can achieve transmission bit rates beyond 1 Gbps. This work focuses on the physical layer frequency-domain processing. Various configurations, including 2×2 and 4×4 MIMO, are considered for the implementation. To utilize the available data and instruction level parallelism, a DSP core with vector extensions is selected as the implementation platform. Then, the feasibility of the presented software-based solution is assessed by studying the number of clock cycles and the power consumption of the different scenarios implemented on this core. Such Software Defined Radio based approaches can potentially offer more flexibility, high energy efficiency, reduced design effort and thus shorter time-to-market cycles in comparison with conventional fixed-function hardware methods.
ORG=elt,0.5
ORG=tie,0.5
Research output: Contribution to journal › Article › Scientific › peer-review
In this work, we emphasize the practical importance of mission-critical wireless sensor networks (WSNs) for structural health monitoring of industrial constructions. Due to its isolated and ad hoc nature, this type of WSN deployment is susceptible to a variety of malicious attacks that may disrupt the underlying crucial systems. Along these lines, we review and implement one such attack, known as a broadcast storm, where an attacker attempts to flood the network by sending numerous broadcast packets. Accordingly, we assemble a live prototype of said scenario with real-world WSN equipment, and measure the key operational parameters of the WSN under attack, including packet transmission delays and the corresponding loss ratios. We further develop a simple supportive mathematical model based on widely adopted methods of queuing theory. It allows for accurate performance assessment as well as prediction of the expected system performance, which has been verified with statistical methods.
JUFOID=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Dataflow modeling offers a myriad of tools for designing and optimizing signal processing systems. A designer is able to take advantage of dataflow properties to effectively tune the system in connection with functionality and different performance metrics. However, a disparity in the specification of dataflow properties and the final implementation can lead to incorrect behavior that is difficult to detect. This motivates the problem of ensuring consistency between dataflow properties that are declared or otherwise assumed as part of dataflow-based application models, and the dataflow behavior that is exhibited by implementations that are derived from the models. In this paper, we address this problem by introducing a novel dataflow validation framework (DVF) that is able to identify disparities between an application’s formal dataflow representation and its implementation. DVF works by instrumenting the implementation of an application and monitoring the instrumentation data as the application executes. This monitoring process is streamlined so that DVF achieves validation without major overhead. We demonstrate the utility of our DVF through design and implementation case studies involving an automatic speech recognition application, a JPEG encoder, and an acoustic tracking application.
Research output: Contribution to journal › Article › Scientific › peer-review
JUFOID=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Today, cultural organizations such as museums are seeking new ways to attract and engage audiences. Augmented reality based applications are seen as very promising. The target is to provide more interactive experiences for an audience highly familiar with digital interaction. So far, visual presentation has been dominant in augmented reality systems. In contrast to this trend, we have chosen to concentrate on audio augmentation in the form of user-generated soundscapes. This paper discusses our approach, focusing on how to design and develop an easy-to-use and smoothly working Android application, which increases user interaction by building soundscapes from building blocks stored in an audio digital asset management system. We have successfully implemented applications for the Android platform and evaluated their performance.
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Let A be a finite set and B an arbitrary set with at least two elements. The arity gap of a function f : A^n → B is the minimum decrease in the number of essential variables when essential variables of f are identified. A non-trivial fact is that the arity gap of such B-valued functions on A is at most |A|. Even less trivial to verify is the fact that the arity gap of B-valued functions on A with more than |A| essential variables is at most 2. These facts call for a classification of B-valued functions on A in terms of their arity gap. In this paper, we survey what is known about this problem. We present a general characterization of the arity gap of B-valued functions on A and provide explicit classifications of the arity gap of Boolean and pseudo-Boolean functions. Moreover, we reveal unsettled questions related to this topic, and discuss links and possible applications of some results to other subjects of research.
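For small Boolean functions the definitions can be checked directly by brute force; the following sketch (our own illustrative code, exponential in n) computes the essential variables and the arity gap:

```python
from itertools import product

def essential(f, n):
    # Variables whose flip changes the value of f for some input.
    ess = set()
    for i in range(n):
        for x in product((0, 1), repeat=n):
            y = list(x)
            y[i] ^= 1
            if f(*x) != f(*y):
                ess.add(i)
                break
    return ess

def arity_gap(f, n):
    # Minimum drop in essential arity over all identifications x_i := x_j.
    e = essential(f, n)
    if len(e) < 2:
        return 0
    drops = []
    for i in e:
        for j in e:
            if i != j:
                g = lambda *x, i=i, j=j: f(*(x[j] if k == i else x[k]
                                             for k in range(n)))
                drops.append(len(e) - len(essential(g, n)))
    return min(drops)

xor3 = lambda a, b, c: a ^ b ^ c
print(arity_gap(xor3, 3))  # prints 2: identifying any two variables
                           # of the parity function cancels them both
```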
Research output: Contribution to journal › Review Article › Scientific › peer-review
Web runtimes are an essential part of modern operating systems, and their role will grow further in the future. Many web runtime implementations need to support multiple platforms, and the design choices are driven by portability instead of optimized use of the underlying hardware. Thus, the implementations do not fully utilize the GPU and other graphics hardware. The consequence is reduced performance and increased power consumption. In this paper, we describe a way to improve the graphical performance of the Chromium web runtime dramatically. In addition, implementation aspects are discussed.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The focus of this paper is on a secure cloud service platform for a mobile robot ecosystem. The emphasis is especially on open-source software frameworks such as Apache Hadoop, which offer numerous possibilities to employ open-source design tools and deployment models for private cloud computing planning. This paper presents the implementation of the OpenCRP (Open Cloud Robotic Platform) locally operated private cloud infrastructure and configuration methods using the Hadoop distributed file system (HDFS) for easing the set-up of the ecosystem communications in its entirety. For robot teleoperation, ROS (Robot Operating System) is used. The presented ecosystem utilizes security features for an autonomous cloud robotic platform, software tools to manage user authentication, and methods for large-scale robot-based data management and analysis. In addition to a trial set-up for robot data storage and sharing, an ecosystem built with two low-cost mobile robots is presented.
JUFOID=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In Hybrid Open Source Software projects, independent and commercially oriented stakeholders collaborate using freely accessible tools and development processes. Here, contributors can enter and leave the community flexibly, which poses a challenge for community managers in ensuring the sustainability of the community. This short paper reports initial results from an industrial case study of the “Qt” Open Source Software project. We present a visual stakeholder analysis approach, building on data from the three systems that provide for the Qt project's complete software development workflow. This overview, augmented with information about the stakeholders' organizational affiliations, helped the project's community manager find ways to encourage contributors and identify issues that could be detrimental to the community.
jufoid=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
We will all soon have numerous computing devices that we use interchangeably every day. Liquid software, a concept where software is allowed to flow from one computer to another, is a programming framework that aims at simplifying the development and use of such multi-device software. Existing research has identified three major architectural challenges for liquid software: (1) adaptation of the user interface to different devices, (2) availability of the relevant data on all devices, and (3) transfer of the application state. This paper addresses the last challenge and differs from earlier work by concentrating on application state held in the DOM tree, a key element in today's Web applications.
jufoid=62555
EXT="Voutilainen, Jari-Pekka"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Inefficiency of wireless sensor networks (WSNs) in terms of network lifetime is one of the major reasons preventing their widespread use. To alleviate this problem, different data collection approaches have been proposed. One promising technique is to use an unmanned aerial vehicle (UAV). In spite of several papers advocating this approach, no system designs and associated performance evaluations have been proposed to date. In this paper, we address this issue by proposing a new WSN design where a UAV serves as a sink while Bluetooth low energy (BLE) is used as the communication technology. We analyze the proposed design in terms of network lifetime and area coverage, comparing it with routed WSNs. Our results reveal that the lifetime of the proposed design is approximately two orders of magnitude longer than that of routed WSNs. Using the tools of integral geometry, we show that the density of nodes needed to cover a certain area is approximately two times higher for routed WSNs than for our design.
JUFOID=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In E. coli, transcription repression is essential in cellular functioning. However, its failure rates are non-negligible. We measured the leakiness rate of the lacO3O1 promoter with single-RNA sensitivity, and its temperature dependence, in live cells. After finding strong temperature dependence, we dissected the causes. While RNA polymerase numbers and k_t, the rate of active transcription, vary weakly with temperature, the repression strength (dependent on the number of repressors and the binding and unbinding rates of repressors to the promoter) is heavily temperature dependent. We conclude that the lacO3O1 leakiness at low temperatures increases as the efficiency of the repression mechanism is hampered.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
To find a fast track to profitability, a startup needs to streamline and speed up two vital processes: developing novel products and finding new markets for those products. These two goals are typically opposed to each other, business development requiring quick iteration and product development requiring a focus on quality. This difference in mindsets, where quality must be balanced against business experimentation, creates a conflicting environment in which developers must build products. The problem is aggravated in a startup environment, where the reasons for product failure are not clear, increasing the frustration felt by the developers. Clear ways to communicate product goals, and even successes, between management and developers are needed to create an environment for success. This balancing act between quality and speed to achieve fast product iteration is the developers' dilemma.
JUFOID=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The adoption of model-based testing techniques is hindered by the difficulty of creating a test model. Various techniques to automate the modelling process have been proposed, based on software process artefacts or an existing product. This paper outlines a hybrid approach to model construction, based on two previously proposed methods. The presented approach combines information in pre-existing test cases with a model extracted from the graphical user interface of the product.
JUFOID=62555
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The paper presents the first empirical study to examine econometric time series volatility modeling in the software evolution context. The econometric volatility concept relates to the conditional variance of a time series rather than the conditional mean targeted in conventional regression analysis. The software evolution context is motivated by relating these variance characteristics to the proximity of operating system releases, the theoretical hypothesis being that volatile characteristics increase near new milestone releases. The empirical experiment is carried out with a case study of FreeBSD. The analysis covers 12 time series related to bug tracking, development activity, and communication, over a historical period from 1995 to 2011 at a daily sampling frequency. According to the results, the time series dataset contains visible volatility characteristics, but these cannot be explained by the time windows around the six observed major FreeBSD releases. The paper consequently contributes to the software evolution research field with new methodological ideas, as well as with both positive and negative empirical results.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper describes the design of traits, abstract superclasses, in the verification-aware programming language Dafny. Although there is no inheritance among classes in Dafny, the traits make it possible to describe behavior common to several classes and to write code that abstracts over the particular classes involved. The design incorporates behavioral specifications for a trait's methods and functions, just like for classes in Dafny. The design has been implemented in the Dafny tool.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this paper, a method is proposed for estimating the positions of a moving camera attached to a linear positioning system (LPS). By comparing the estimated camera positions with the expected positions, calculated from the LPS specifications, the manufacturer-specified accuracy of the system can be verified. With this data, one can model the light field sampling process more accurately. The overall approach is illustrated on an in-house assembled LPS.
AUX=sgn,"Durmush, Ahmed"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The challenge of delivering personalized learning experiences is amplified by the size of classrooms and of online learning communities. In turn, serious games are increasingly recognized for their potential to improve education, but a typical requirement from instructors is to gain insight into how the students are playing. When we bring games into the rapidly growing online learning communities, the challenges multiply and hinder the potential effectiveness of serious games. There is a need to deliver a comprehensive, flexible and intelligent learning framework that facilitates better understanding of learners’ knowledge, effective assessment of their progress and continuous evaluation and optimization of the environments in which they learn. This paper aims to explore the potential in the use of games and learning analytics towards scaffolding and supporting teaching and learning experience. The conceptual model discussed aims to highlight key considerations that may advance the current state of learning analytics, adaptive learning and serious games, by leveraging serious games as an ideal medium for gathering data and performing adaptations. This opportunity has the potential to affect the design and deployment of education and training in the future.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Various full-reference (FR) image quality metrics (indices) that take into account peculiarities of the human vision system (HVS) have been proposed during the last decade. Most of them have already been tested on several image databases, including TID2013, a recently proposed database of distorted color images. Metric performance is usually characterized by the rank-order correlation coefficients between the considered metric and the mean opinion score (MOS). In this paper, we characterize HVS metrics from another practically important viewpoint. We determine and analyze image statistics, such as the mean and standard deviation, for several state-of-the-art quality metrics on classes of images with multiple or particular types of distortions. This allows setting threshold value(s) for a given metric and application.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
JUFOID=62555
Research output: Book/Report › Anthology › Scientific › peer-review
Key2phone is a mobile access solution which turns a mobile phone into a key for electronic locks, doors and gates. In this paper, we elicit and analyse the essential safety and security requirements that need to be considered for the Key2phone interaction system. The paper elaborates on suggestions and solutions for addressing these safety and security concerns in an Internet of Things (IoT) infrastructure. The authors structure these requirements and illustrate particular computational solutions by deploying the Labelled Transition System Analyser (LTSA), a modelling tool that supports a process algebra notation called Finite State Process (FSP). While determining an integrated solution for this research study, the authors point to key quality factors for successful system functionality.
EXT="Chaudhary, Sunil"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper presents a method for statistical analysis of hybrid systems affected by stochastic disturbances, such as random computation and communication delays. The method is applied to the analysis of a computer controlled digital hydraulic power management system, where such effects are present. Bayesian inference is used to perform parameter estimation and we use hypothesis testing based on Bayes factors to compare properties of different variants of the system to assess the impact of different random disturbances. The key idea is to use sequential sampling to generate only as many samples from the models as needed to achieve desired confidence in the result.
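The sequential sampling idea can be sketched as follows (a simplified frequentist stopping rule on an estimated property probability; the paper itself works with Bayes factors, which are not reproduced here):

```python
def sequential_estimate(run_once, half_width=0.01, z=1.96, min_n=100):
    # Keep drawing simulation runs until the 95% confidence interval of
    # the estimated probability is narrow enough, so no more samples are
    # generated than the desired confidence requires.
    n = successes = 0
    while True:
        successes += run_once()  # one stochastic simulation, returns 0/1
        n += 1
        p = successes / n
        if n >= min_n and z * (p * (1 - p) / n) ** 0.5 < half_width:
            return p, n
```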
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Covert channels are a fundamental concept for cryptanalytic side-channel attacks. Covert timing channels use latency to carry data, and are the foundation for timing and cache-timing attacks. Covert storage channels instead utilize existing system bits to carry data, and are not historically used for cryptanalytic side-channel attacks. This paper introduces a new storage channel made available through cache debug facilities on some embedded microprocessors. This channel is then extended to a cryptanalytic side-channel attack on AES software.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this research paper, motivated by the concept of the complex hypercube, a novel class of complex Hadamard matrices is proposed. Based on this class of matrices, a novel transform, called the complex Hadamard transform, is discussed. In the same spirit, other complex transforms such as the complex Haar transform are proposed. It is expected that these novel complex transforms will find many applications. The associated complex-valued orthogonal functions are also of theoretical interest.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In the Internet of Things some nodes, especially sensors, can be constrained and sleepy, i.e., they spend extended periods of time in an inaccessible sleep state. Therefore, the services they offer may have to be accessed through gateways. Typically this requires that the gateway is trusted to store and transmit the data. However, if the gateway cannot be trusted, the data needs to be protected end-to-end. One way of achieving end-to-end security is to perform a key exchange, and secure the subsequent messages using the derived shared secrets. However, when the constrained nodes are sleepy this key exchange may have to be done in a delayed fashion. We present a novel way of utilizing the gateway in key exchange, without the possibility of it influencing or compromising the exchanged keys. The paper investigates the applicability of existing protocols for this purpose. Furthermore, due to a possible need for protocol translations, application layer use of the exchanged keys is examined.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
We present a method for automatically detecting the tips of fluorescently labeled mitochondria. The method is based on a Random Forest classifier, which is trained on small patches extracted from confocal microscope images of U2OS human osteosarcoma cells. We then adopt a particle tracking framework for tracking the detected tips, and quantify the tracking accuracy on simulated data. Finally, from images of U2OS cells, we quantify changes in mitochondrial mobility in response to the disassembly of microtubules via treatment with Nocodazole. The results show that our approach provides efficient tracking of the tips of mitochondria, and that it enables the detection of disease-associated changes in mitochondrial motility.
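In outline, the detection step is standard supervised patch classification; a minimal sketch with synthetic stand-in data (real inputs would be labelled patches extracted from the confocal images) could look like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical data: flattened 15x15 patches with 0/1 tip labels.
rng = np.random.default_rng(0)
X_train = rng.random((1000, 15 * 15))
y_train = rng.integers(0, 2, 1000)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Per-patch probability of containing a mitochondrion tip; thresholding
# these probabilities yields the detections fed to the tracker.
tip_prob = clf.predict_proba(X_train[:5])[:, 1]
```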
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper studies the idea of using large-scale diversification to protect operating systems and make malware ineffective. The idea is to first diversify the system call interface on a specific computer so that it becomes very challenging for a piece of malware to access resources, and to combine this with recursive diversification of the system library routines that indirectly invoke system calls. Because of this unique diversification (i.e. a unique mapping of system call numbers), a large group of computers would have the same functionality but differently diversified software layers and user applications. A malicious program then becomes incompatible with its environment. The basic flaw of operating system monoculture, the vulnerability of all software to the same attacks, would be fixed this way. Specifically, we analyze the presence of system calls in ELF binaries. We study the locations of system calls in the software layers of Linux and examine how many binaries in the whole system use system calls. Additionally, we discuss the different ways system calls are coded in ELF binaries and the challenges this causes for the diversification process. We also present a diversification tool and suggest several solutions to overcome the difficulties faced in system call diversification. The number of problematic system calls is small, and our diversification tool manages to diversify the clear majority of system calls present in standard-like Linux configurations. For diversifying the remaining system calls, we consider several possible approaches.
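Conceptually, the diversification secret is a per-machine permutation of system call numbers, as in this sketch (illustrative only; the actual tool rewrites syscall instructions inside ELF binaries, and 335 is merely a stand-in for the number of system calls):

```python
import random

def diversify_syscalls(numbers, secret):
    # Derive a secret, machine-unique permutation of syscall numbers.
    rng = random.Random(secret)
    shuffled = list(numbers)
    rng.shuffle(shuffled)
    return dict(zip(numbers, shuffled))

mapping = diversify_syscalls(range(335), secret="machine-unique-key")
# Trusted binaries are rewritten to issue mapping[n] instead of n;
# injected code using the original numbers hits the wrong kernel entries.
```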
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this research paper, a complex-valued generalization of the associative memory synthesized by Hopfield is considered, and it is proved that it is impossible to synthesize such a neural network with desired unitary stable states when the dimension of the network (the number of neurons) is odd. The linear algebraic structure of such a neural network is discussed. Using the Sylvester construction of a Hadamard matrix of suitable dimension, an algorithm to synthesize such a complex Hopfield neural network is presented. It is also discussed how to synthesize real- and complex-valued associative memories with a desired energy landscape (i.e. desired stable states and desired energy values of the associated quadratic energy function).
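The Sylvester construction mentioned above doubles the matrix dimension at each step; a minimal sketch:

```python
import numpy as np

def sylvester_hadamard(k):
    # H of order 2^k via the recursion H_{2n} = [[H, H], [H, -H]].
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

H8 = sylvester_hadamard(3)
assert np.allclose(H8 @ H8.T, 8 * np.eye(8))  # rows are orthogonal
```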
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The paper addresses the problem of gender classification from face images. For feature extraction, we propose discrete Overlapping Block Patterns (OBP), which capture characteristic structure from the image at various scales. Using integral images, these features can be computed in constant time. Feature extraction at multiple scales results in high dimensionality and feature redundancy. Therefore, we apply a boosting algorithm for feature selection and classification. Look-Up Tables (LUT) are utilized as weak classifiers, which suit the discrete nature of the OBP features. The experiments are performed on two publicly available data sets, Labeled Faces in the Wild (LFW) and MOBIO. The results demonstrate that Local Binary Pattern (LBP) features with LUT boosting outperform the commonly used block-histogram-based LBP approaches, and that OBP features improve over Multi-Block LBP (MB-LBP) features.
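A LUT weak learner for discrete feature codes can be sketched as follows (a simplified illustration; practical boosting implementations also use confidence-rated outputs and sample reweighting, which are omitted here):

```python
import numpy as np

def fit_lut_weak(bins, labels, weights, n_bins=256):
    # Each discrete feature value (bin) votes for the class holding the
    # larger total boosting weight in that bin.
    lut = np.zeros(n_bins)
    for b in range(n_bins):
        mask = bins == b
        w_pos = weights[mask & (labels == 1)].sum()
        w_neg = weights[mask & (labels == 0)].sum()
        lut[b] = 1.0 if w_pos >= w_neg else -1.0
    return lut

def predict_lut(lut, bins):
    # Classification is a single table lookup per sample.
    return lut[bins]
```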
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The Internet of Things is mainly about connected devices embedded in our everyday environment. Typically, ‘interaction’ in the context of IoT means interfaces that allow people to either monitor or configure IoT devices. Some examples include mobile applications and embedded touchscreens for the control of various functions (e.g., heating, lights, and energy efficiency) in environments such as homes and offices. In some cases, humans are an explicit part of the scenario, such as when people (e.g., children and the elderly) are monitored by IoT devices. Interaction in such applications is still quite straightforward, mainly consisting of traditional graphical interfaces, which often leads to a clumsy co-existence of humans and IoT devices. Thus, there is a need to investigate what kinds of interaction techniques could make IoT more human-oriented, what the respective roles of automation and interaction are, and how human-originated data can be used in IoT.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Numerous techniques have been developed for text entry by gaze, and similarly, a number of evaluations have been carried out to determine the efficiency of the solutions. However, the results of the published experiments are inconclusive, and it is unclear what causes the difference in their findings. Here we look particularly at the effect of the language used in the experiment. A study where participants entered text both in English and in Finnish does not show an effect of language structure: the entry rates were reasonably close to each other. The role of other explaining factors, such as calibration accuracy and experimental procedure, are discussed.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Mining contrast sequential patterns, which are sequential patterns that characterize a given sequence class and distinguish that class from another given sequence class, has a wide range of applications, including medical informatics, computational finance and consumer behavior analysis. In previous studies on contrast sequential pattern mining, each element in a sequence is a single item or symbol. This paper considers a more general case where each element in a sequence is a set of items. The associated contrast sequential patterns are called itemset-based distinguishing sequential patterns (itemset-DSP). After discussing the challenges of mining itemset-DSPs, we present iDSP-Miner, a mining method with various pruning techniques, for mining itemset-DSPs that satisfy a given support threshold and gap constraint. In this study, we also propose a concise border-like representation (with exclusive bounds) for sets of similar itemset-DSPs and use that representation to improve the efficiency of our proposed algorithm. Our empirical study using both real and synthetic data demonstrates that iDSP-Miner is effective and efficient.
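To fix terminology: a pattern occurs in a sequence of itemsets if each pattern element is a subset of some sequence element, with consecutive matched positions at most a given gap apart. A naive occurrence and support check (illustrative only; iDSP-Miner's pruning techniques are not reproduced) is sketched below:

```python
def occurs(seq, pattern, max_gap):
    # seq and pattern are lists of frozensets; consecutive matched
    # positions may be at most max_gap elements apart.
    def match(start, pi, last):
        if pi == len(pattern):
            return True
        end = len(seq) if last < 0 else min(len(seq), last + max_gap + 2)
        for k in range(start, end):
            if pattern[pi] <= seq[k] and match(k + 1, pi + 1, k):
                return True
        return False
    return match(0, 0, -1)

def support(db, pattern, max_gap):
    # Fraction of sequences in db containing the pattern; a contrast
    # score is then the support difference between two classes.
    return sum(occurs(s, pattern, max_gap) for s in db) / len(db)
```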
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper proposes a non-local modification of the well-known sigma filter, the Nonlocal Sigma Filter (NSF), intended to suppress additive white Gaussian noise in images. Similarly to the Nonlocal Means filter (NLM), every output pixel value is computed as a nonlocal weighted average of pixels coming from patches similar to the patch around the current pixel. The main difference between the proposed NSF and NLM is the following: NSF excludes pixels from the weighted averaging if the difference between their value and the central pixel value is above a predefined threshold, or if the distance between their patch neighborhood and the central patch neighborhood is greater than a second threshold. The weights used to estimate the output pixel depend on the patch size as well as on the distance between the considered and reference patches. The proposed filter is compared to its counterparts, namely the conventional sigma filter and the NLM filter. It is shown that NSF outperforms both of them in PSNR and in the visual quality metrics PSNR-HVS-M and MSSIM. In addition, a novel filtering quality criterion that takes into account the distortions introduced into processed images by denoising is proposed. It is demonstrated that, according to this criterion, NSF preserves edges and details as well as the conventional sigma filter but has better noise suppression ability.
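The exclusion rule that distinguishes NSF from NLM can be sketched as follows for a single output pixel. The threshold values are placeholders, boundary handling is omitted, and the weighting is one plausible reading of the description above rather than the paper's exact formula.

```python
import numpy as np

def nsf_pixel(img, y, x, patch=3, search=10, t_pix=20.0, t_patch=8.0):
    # Placeholder thresholds; assumes (y, x) lies far enough from the
    # image border for all windows below to be valid.
    r = patch // 2
    ref = img[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    num = den = 0.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cy, cx = y + dy, x + dx
            cand = img[cy - r:cy + r + 1, cx - r:cx + r + 1].astype(float)
            d_patch = np.sqrt(np.mean((cand - ref) ** 2))
            d_pix = abs(float(img[cy, cx]) - float(img[y, x]))
            # Sigma-filter-style exclusion: drop pixels that differ too
            # much from the central pixel or whose patch is dissimilar.
            if d_pix > t_pix or d_patch > t_patch:
                continue
            w = np.exp(-(d_patch / t_patch) ** 2)  # patch-distance weight
            num += w * float(img[cy, cx])
            den += w
    return num / den if den > 0 else float(img[y, x])
```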
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This article presents a new method for speaker verification, based on the non-negative matrix deconvolution (NMD) of the magnitude spectrogram of an observed utterance. In contrast to typical methods known from the literature, which are based on the assumption that the desired signal dominates (for example GMM-UBM, joint factor analysis, and i-vectors), compositional models such as NMD describe a recording as a non-negative combination of latent components. The proposed model represents the spectrogram of a signal as a sum of spectro-temporal patterns that span durations on the order of 150 ms, whereas many state-of-the-art automatic speaker recognition systems model a probability distribution of features extracted from much shorter excerpts of the speech signal (about 50 ms). Longer patterns carry information about dynamical aspects of the modeled signal, for example about accent and articulation. We use a parametric dictionary in the NMD, and the parameters of the dictionary carry information about the speaker's identity. The experiments performed on the CHiME corpus show that the proposed approach achieves an equal error rate comparable to that of an i-vector based system.
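The compositional model at the core of NMD can be illustrated by how a spectrogram is reconstructed from spectro-temporal patterns and their activations. This is the standard convolutive-NMF reconstruction; the paper's parametric dictionary and learning rules are not shown, and all names are illustrative.

```python
import numpy as np

def nmd_reconstruct(W, H):
    # W: (T, F, K) spectro-temporal patterns spanning T frames each,
    # H: (K, N) non-negative activations over N spectrogram frames.
    # V_hat[f, n] = sum_t sum_k W[t, f, k] * H[k, n - t]
    T, F, K = W.shape
    _, N = H.shape
    V_hat = np.zeros((F, N))
    for t in range(T):
        H_t = np.zeros_like(H)
        H_t[:, t:] = H[:, :N - t]   # activations delayed by t frames
        V_hat += W[t] @ H_t
    return V_hat
```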
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this work we discuss some difficulties that can be encountered when iterative methods are used to find a solution of a one-dimensional discrete phase retrieval problem. Iterative methods are widely used but, unfortunately, they often stagnate. We show that by using an extended form of the one-dimensional discrete phase retrieval problem, we can find a solution to the problem.
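For context, the kind of iterative scheme whose stagnation is at issue here is the classic error-reduction (alternating-projection) iteration. The sketch below assumes a non-negativity constraint in the signal domain purely for illustration; it is not the paper's extended formulation.

```python
import numpy as np

def error_reduction(mag, n_iter=200, rng=np.random.default_rng(0)):
    # Alternate between the measured Fourier magnitudes `mag` and a
    # signal-domain constraint (non-negativity, chosen for illustration).
    x = rng.random(len(mag))
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X = mag * np.exp(1j * np.angle(X))   # impose measured magnitudes
        x = np.real(np.fft.ifft(X))
        x[x < 0] = 0.0                       # project onto the constraint
    return x
```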
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Requirement specifications are central to the IS acquisition process, also in the public sector. In addition to regulatory factors, multiple stakeholders are often involved in the procurement process, yet their expertise varies and is often limited to a narrow sector or a specific field. For this paper, we conducted a single case study on an IS acquisition in a middle-sized city. The acquiring function nominated a project manager for the project with little if any prior experience of IS or of IS acquisition. The counterpart in the CIO's office had that knowledge but had little domain knowledge about the requirements. The third party involved was the Procurement and Tendering office, which, having specialized in serving a wide variety of functions, inevitably omitted the specific areas of this particular field. All three parties argued that their requirements specifications were good, if not great. We observed how such a trident, having reported successful completion of their duties, still missed the point: the tendering resulted in little short of a disaster, as two projects were contested and lost in the market court.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
As the complexity of rich Web applications grows together with the power and number of Web browsers, the next Web engineering challenge to be addressed is to design and deploy Web applications that make coherent use of all devices. As users nowadays operate multiple personal computers, smartphones, tablets, and computing devices embedded into home appliances or cars, the architecture of current Web applications needs to be redesigned to enable what we call Liquid Software. Liquid Web applications not only can take full advantage of the computing, storage, and communication resources available on all devices owned by the end user, but also can seamlessly and dynamically migrate from one device to another, continuously following the user's attention and usage context. In this paper we address the Liquid Software concept in the context of Web applications and survey to what extent and how current Web technologies can support its novel requirements.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this paper, we extend our previous work and introduce a novel vision-based trajectory planning method for four-wheel-steered mobile robots. Relying only on an overhead camera and utilizing artificial potential fields and visual servoing concepts, we simultaneously generate the synchronized trajectories for all wheels in world coordinates with a sufficient number of trajectory midpoints. The synchronized trajectories are used to provide the robot's kinematic variables and the robot's instantaneous center of rotation, which reduces the complexity of the robot kinematic model. We then plan the maximum allowable velocities for all wheels so that at least one of the actuators is always working at maximum velocity. Experimental results illustrate the efficiency of the proposed method on a four-wheel-steered mobile robot called iMoro.
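As an illustration of the artificial-potential-field ingredient mentioned above, the sketch below takes one gradient step combining a quadratic attractive potential toward the goal with the classic short-range repulsive potential around obstacles. Gains and distances are placeholder values, and the paper's visual-servoing and wheel-synchronization machinery is not represented.

```python
import numpy as np

def apf_step(q, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.05):
    # One gradient step on U = U_att + U_rep, where
    # U_att = 0.5*k_att*||q - goal||^2 and the classic repulsive
    # potential 0.5*k_rep*(1/d - 1/d0)^2 is active only for d < d0.
    force = k_att * (goal - q)                       # attractive term
    for obs in obstacles:
        d = np.linalg.norm(q - obs)
        if 0.0 < d < d0:                             # repulsion only nearby
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * (q - obs)
    return q + step * force
```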
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this paper, we present a computational approach for finding complete graph invariants. Specifically, we generate exhaustive sets of connected, non-isomorphic graphs with 9 and 10 vertices and demonstrate that a 97-dimensional multivariate graph invariant is capable of distinguishing each of the non-isomorphic graphs. Furthermore, in order to tame the computational complexity of the problem caused by the vast number of graphs, e.g., over 10 million networks with 10 vertices, we suggest a low-dimensional, iterative procedure based on highly discriminative individual graph invariants. We show that this computational approach also leads to perfect discrimination. Overall, our numerical results prove the existence of such graph invariants for networks with 9 and 10 vertices, and we show that our iterative approach has polynomial time complexity.
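The idea of discriminating non-isomorphic graphs by a vector of invariants can be sketched with networkx as follows; the handful of invariants here is purely illustrative, standing in for the paper's 97-dimensional collection.

```python
import networkx as nx
import numpy as np

def invariant_vector(G):
    # A tiny illustrative invariant set for a connected graph G; equal
    # vectors only mean the graphs *might* be isomorphic, while
    # different vectors are conclusive proof of non-isomorphism.
    adj = nx.to_numpy_array(G)
    return (tuple(sorted(d for _, d in G.degree())),   # degree sequence
            tuple(np.round(np.linalg.eigvalsh(adj), 8)),  # adjacency spectrum
            G.number_of_edges(),
            nx.diameter(G))

# Discrimination test: two graphs are distinguished whenever their
# invariant vectors differ.
```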
Research output: Contribution to journal › Article › Scientific › peer-review
While the car environment is often noisy and driving requires visual attention, navigation instructions are still given through audio and visual feedback. By using rhythmic tactons together with audio, the navigation task could be supported better in the driving context. In this paper we describe a haptic-audio interface with a simple two-actuator setup on the steering wheel, using rhythmic tactons to support navigation in the car environment. The users who tested the interface with a driving game would choose the audio-haptic interface over an audio-only interface for a real navigation task.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Haptic feedback can improve the usability of gaze gestures in mobile devices. However, the benefit is highly sensitive to the exact timing of the feedback. In practical systems the processing and transmission of signals take some time, and the feedback may be delayed. We conducted an experiment to determine limits on the feedback delay. The results show that when the delays increase to 200 ms or longer, the task completion times become significantly longer than with shorter delays.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The goal of this paper is to present our experience in utilizing the power of the information visualization (InfoVis) field to accelerate the safety analysis process of Component Fault Trees (CFT) in embedded systems. For this, we designed and implemented an interactive visual tool called ESSAVis, which takes the CFT model as input and then calculates the safety information (e.g., the minimal cut sets and their probabilities) needed to measure the safety criticality of the underlying system. ESSAVis uses this information to visualize the CFT model and allows users to interact with the produced visualization in order to extract the relevant information in a visual form. We compared ESSAVis with ESSaRel, a tool that models the CFT and represents the analysis results in textual form. We conducted a controlled user evaluation study in which we invited 25 participants from different backgrounds, including 6 safety experts, to perform a set of tasks analyzing the safety aspects of a given system in both tools. We compared the results in terms of accuracy, efficiency, and level of user acceptance. The results of our study show a high acceptance ratio, higher accuracy, and better performance for ESSAVis compared to the text-based tool ESSaRel. Based on the study results, we conclude that visual tools help in analyzing the CFT model more accurately and efficiently. Moreover, the study opens the door to considering how the power of visualization can be utilized in such domains to accelerate the safety assurance process in embedded systems.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper reviews the current body of empirical research on persuasive technologies (95 studies). In recent years, technology has been increasingly harnessed to persuade and motivate people to engage in various behaviors. This phenomenon has also attracted substantial scholarly interest over the last decade. This review examines the results, methods, measured behavioral and psychological outcomes, affordances in implemented persuasive systems, and domains of the studies in the current body of research on persuasive technologies. The reviewed studies have investigated diverse persuasive systems/designs, psychological factors, and behavioral outcomes. The results of the reviewed studies were categorized into fully positive, partially positive, and negative and/or no effects. This review provides an overview of the state of empirical research regarding persuasive technologies. The paper functions as a reference in positioning future research within the research stream of persuasive technologies in terms of the domain, the persuasive stimuli and the psychological and behavioral outcomes.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this paper, the results of a user experience (UX) goal evaluation study are reported. The study was carried out as part of a research and development project on a novel remote operator station (ROS) for container gantry crane operation in port yards. The objectives of the study were both to compare the UXs of two different user interface concepts and to give feedback on how well the UX goals (experience of safe operation, sense of control, and feeling of presence) are fulfilled with the developed ROS prototype. According to the results, the experience of safe operation and the feeling of presence were not supported by the current version of the system. However, the results showed much better support for the fulfilment of the sense-of-control UX goal. Methodologically, further work is needed to adapt the utilized Usability Case method to suit UX goal evaluation better.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
[Context] With the increasing industrial demands for the seamless exchange of data and services among information systems, architectural solutions are a promising research direction that supports high levels of interoperability at early development stages. [Objectives] This research aims at identifying the architectural problems and before-release solutions of interoperability on its different levels in information systems, and at exploring the interoperability metrics and research methods used to evaluate the identified solutions. [Methods] We performed a scoping study in five digital libraries and descriptively analyzed the results of the selected studies. [Results] From the 22 included studies, we extracted a number of architectural interoperability problems on the technical, syntactic, semantic, and pragmatic levels. Many problems are caused by systems' heterogeneity in data representation, meaning, or context. The identified solutions include standards, ontologies, wrappers, and mediators. The evaluation methods used to validate solutions mostly involved toy examples rather than empirical studies. [Conclusions] Progress has been made in the software architecture research area on solving interoperability problems. Nevertheless, more research needs to be devoted to solutions for the higher levels of interoperability, accompanied by proper empirical evaluation of their effectiveness and usefulness.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Wireless sensor networks (WSNs) are being deployed at an escalating rate in various application fields. The ever-growing number of application areas requires a diverse set of algorithms with disparate processing needs. WSNs also need to adapt to prevailing energy conditions and processing requirements. These reasons rule out the use of a single fixed design; instead, a general-purpose design that can rapidly be adapted to different conditions and requirements is desired. In lieu of the traditional inflexible wireless sensor node, consisting of a separate microcontroller, radio transceiver, sensor array, and energy storage, we propose a unified, rapidly reconfigurable miniature sensor node, implemented with a transport triggered architecture processor on a low-power Flash FPGA. To our knowledge, this is the first study of its kind. The proposed approach does not concentrate solely on energy efficiency; a high emphasis is also put on the ease-of-development perspective. We compare the power consumption and silicon area usage of solutions implemented using our novel rapid design approach for wireless sensor nodes, covering 16-bit fixed point, 16-bit floating point, and 32-bit floating point implementations. The implemented processors and algorithms are intended for rolling bearing condition monitoring but can readily be extended to other applications as well.
Research output: Contribution to journal › Article › Scientific › peer-review
Frequent closed sequential pattern mining plays an important role in sequence data mining and has a wide range of real-life applications, such as protein sequence analysis, financial data investigation, and user behavior prediction. In previous studies, a user-predefined gap constraint is considered a parameter of frequent closed sequential pattern mining. However, it is difficult for users who lack sufficient a priori knowledge to set suitable gap constraints. Furthermore, different gap constraints may lead to different results, and some useful patterns may be missed if the gap constraint is chosen inappropriately. To deal with this, we present the novel problem of mining frequent closed sequential patterns with non-user-defined gap constraints. In addition, we propose an efficient algorithm to find the frequent closed sequential patterns with the most suitable gap constraints. Our empirical study on protein data sets demonstrates that our algorithm is effective and efficient.
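To make the role of the gap constraint concrete, here is a minimal, illustrative occurrence check: a pattern occurs in a sequence only if consecutive matched positions are separated by a number of skipped elements within the given bounds. This naive recursive matcher sketches the constraint's semantics only; it is not the paper's mining algorithm.

```python
def occurs(seq, pattern, min_gap, max_gap):
    # True if `pattern` occurs in `seq` with the number of skipped
    # elements between consecutive matches in [min_gap, max_gap].
    if not pattern:
        return True

    def extend(pos, idx):
        if idx == len(pattern):
            return True
        lo = pos + min_gap + 1            # next match no earlier than this
        hi = min(len(seq), pos + max_gap + 2)  # and no later than max_gap
        return any(seq[q] == pattern[idx] and extend(q, idx + 1)
                   for q in range(lo, hi))

    return any(seq[p] == pattern[0] and extend(p, 1)
               for p in range(len(seq)))
```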
Research output: Contribution to journal › Article › Scientific › peer-review
In this paper, we propose an extension of the ELM algorithm that is able to exploit multiple action representations. This is achieved by incorporating proper regularization terms into the ELM optimization problem. In order to determine both the optimized network weights and the action representation combination weights, we propose an iterative optimization process. The proposed algorithm has been evaluated using state-of-the-art action video representations.
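For reference, the standard ELM core that such an extension builds on trains a random, fixed hidden layer and solves a ridge-regularized least-squares problem for the output weights. The sketch below shows only this single-representation baseline; the paper's extra regularization terms and combination weights are not included, and all names are illustrative.

```python
import numpy as np

def elm_train(X, Y, n_hidden=256, reg=1e-2, seed=0):
    # Standard ELM: random, fixed input weights; closed-form,
    # ridge-regularized output weights.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer outputs
    # Solve (H^T H + reg*I) beta = H^T Y for the output weights.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```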