We report on the development of a high-power vertical-external-cavity surface-emitting laser (VECSEL) emitting around 1180 nm. The laser emitted 50 W of output power when the mount of the gain chip was cooled to -15°C. The output power was measured using a 97% reflective cavity end-mirror. The VECSEL was arranged as an I-shaped cavity with a length of ∼100 mm, with the gain chip and a curved dielectric mirror (RoC=150) acting as the cavity end mirrors. The gain chip was grown by molecular beam epitaxy (MBE) and incorporated 10 GaInAs/GaAs quantum wells. For efficient heat extraction, the chip was capillary-bonded to a diamond heat spreader, which was attached to a TEC-cooled copper mount. A maximum optical-to-optical conversion efficiency of 28% was achieved at 42 W of output power and a mount temperature of -15°C.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
We present a U-bend design for traveling-wave III-V gain devices, such as semiconductor optical amplifiers and laser diodes. The design greatly simplifies butt-coupling between the III-V chip and a silicon-on-insulator photonic circuit by bringing the I/O ports to one facet. This removes the need for the precise dimension control otherwise required for two-side coupling, thereby increasing the yield of mounted devices towards 100%. The design, fabrication, and characterization of a U-bend device based on Euler bend geometry are presented. The losses for a bend with a minimum bending radius of 83 μm are 1.1 dB. In addition, we present an analysis comparing the yield and coupling losses of traditionally cleaved devices with the results that the Euler bend approach enables, concluding that the yield is improved several-fold while the losses are decreased by several dB.
jufoid=71479
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Optically pumped vertical-external-cavity surface-emitting lasers (VECSELs) based on flip-chip gain mirrors emitting in the 1.55-μm wavelength range are reported. The gain mirrors employ wafer-fused InAlGaAs/InP quantum-well heterostructures and GaAs/AlAs distributed Bragg reflectors, which were incorporated in linear and V-shaped cavity configurations. A maximum output power of 3.65 W was achieved for a heatsink temperature of 11°C using a 2.2% output coupler. The laser exhibited circular beam profiles over the full emission power range. The demonstration represents a more than 10-fold increase in output power compared to state-of-the-art flip-chip VECSELs previously demonstrated in the 1.55-μm wavelength range, and opens a new perspective for developing practical VECSEL-based laser systems for applications such as LIDAR, spectroscopy, communications, and distributed sensing.
jufoid=71479
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
An experimental investigation of an inkjet-printed power harvester for 2.4 GHz and a review of the RF characterization of the substrate and printed conductors are presented in this paper. A one-stage discrete rectifier based on a voltage-doubler structure and a planar monopole antenna are fabricated on cardboard using inkjet printing. The performance of the whole system is examined by measuring the output voltage of the RF power harvester. Utilizing the proposed approach, the fabrication of low-cost, environmentally friendly, battery-less wireless modules is conceivable.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
A pure Ce-doped silica fiber is fabricated using the modified chemical vapor deposition (MCVD) technique. The fluorescence characteristics of the Ce-doped silica fiber are experimentally investigated with continuous-wave pumping from 440 nm down to 405 nm. The best pump absorption and the broadest fluorescence spectrum are observed for the ∼405 nm pump laser. Next, a detailed analysis of the spectral response as a function of pump power and fiber length is performed. It is observed that a -10 dB spectral width of ∼280 nm can be easily achieved with different combinations of fiber length and pump power. Lastly, we present, for the first time to the best of our knowledge, a broadband fluorescence spectrum with a -10 dB spectral width of 301 nm, spanning from ∼517.36 nm to ∼818 nm, obtained from such fibers with non-UV pump lasers.
jufoid=71479
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
IoT applications constitute one of the fastest developing areas in today's technology and at the same time pose the most demanding challenges for the respective radio access network design. While the initial studies in IoT were focused primarily on scaling the existing radio solutions for higher numbers of small-data and low-cost sensors, the current developments aim at supporting wearable augmented/virtual reality platforms, moving industrial robots, driving (semi-)autonomous vehicles, and flying drones, which produce large amounts of data. To satisfy these rapidly growing performance demands, the 5G-grade IoT is envisioned to increasingly employ millimeter-wave (mmWave) spectrum, where wider bandwidths promise to enable higher data rates and low-latency communication. While the mainstream trend in mmWave-based IoT is to rely on licensed bands around 28 GHz or leverage unlicensed bands at 60 GHz, in this work we introduce a conceptual vision for the integrated use of these frequencies within a single radio access system named 5G over unlicensed spectrum or 5G-U. We study the performance of 5G-U in supporting stringent IoT use cases, discuss and compare the alternative strategies for spectrum management in 5G-U, and demonstrate that a harmonized utilization of licensed and unlicensed bands provides notable performance improvements in both device-centric and network-centric metrics. Finally, we offer useful guidelines for future implementations of 5G-U and detail its potential applications in the area of advanced IoT services.
Research output: Contribution to journal › Article › Scientific › peer-review
Tm,Ho co-doped disordered calcium niobium gallium garnet (CNGG) crystals are investigated as a novel gain medium for mode-locked lasers near 2 μm. With a GaSb-based semiconductor saturable absorber mirror (SESAM) and chirped mirrors for dispersion compensation such a laser is mode-locked at a repetition rate of 89.3 MHz. For a 5% output coupler, a maximum average output power of 157 mW is obtained with a pulse duration of 170 fs (28-nm broad spectrum centered at 2.075 μm, leading to a time-bandwidth product of 0.331). With a 0.5% output coupler, 73-fs pulses are generated at 2.061 μm with a spectral width of 62 nm (time-bandwidth product of 0.320) and an average output power of 36 mW.
jufoid=71479
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Tracking the location of people and their mobile devices creates opportunities for new and exciting ways of interacting with public technology. For instance, users can transfer content from public displays to their mobile device without touching it, because location tracking allows automatic recognition of the target device. However, many uncertainties remain regarding how users feel about interactive displays that track them and their mobile devices, and whether their experiences vary based on the setting. To close this research gap, we conducted a 24-participant user study. Our results suggest that users are largely willing - even excited - to adopt novel location-tracking systems. However, users expect control over when and where they are tracked, and want the system to be transparent about its ownership and data collection. Moreover, the deployment setting plays a much bigger role in people's willingness to use interactive displays when location tracking is involved.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this paper, we present a computational approach for finding complete graph invariants. Specifically, we generate exhaustive sets of connected, non-isomorphic graphs with 9 and 10 vertices and demonstrate that a 97-dimensional multivariate graph invariant is capable of distinguishing each of the non-isomorphic graphs. Furthermore, in order to tame the computational complexity of the problem caused by the vast number of graphs, e.g., over 10 million networks with 10 vertices, we suggest a low-dimensional, iterative procedure based on highly discriminative individual graph invariants. We show that this computational approach also leads to perfect discrimination. Overall, our numerical results prove the existence of such graph invariants for networks with 9 and 10 vertices. Furthermore, we show that our iterative approach has polynomial time complexity.
Research output: Contribution to journal › Article › Scientific › peer-review
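The 97 invariants themselves are not spelled out in the abstract, so the following minimal sketch only illustrates the idea: stack several individual graph invariants into one multivariate vector and count how many non-isomorphic graphs it separates. The invariant choices are illustrative, and the networkx atlas (connected graphs on 6 vertices) stands in for the exhaustive 9- and 10-vertex sets.

```python
# Illustrative sketch only, not the paper's 97-dimensional invariant.
import networkx as nx
import numpy as np

def invariant_vector(G):
    """Stack a few standard invariants of a connected graph into one vector."""
    A = nx.to_numpy_array(G)
    return np.round(np.concatenate([
        [G.number_of_edges(), nx.diameter(G), nx.radius(G)],
        np.sort([d for _, d in G.degree()]),       # degree sequence
        np.sort(np.linalg.eigvalsh(A)),            # adjacency spectrum
        [sum(nx.triangles(G).values()) / 3],       # triangle count
    ]), 6)

graphs = [G for G in nx.graph_atlas_g()
          if G.number_of_nodes() == 6 and nx.is_connected(G)]
vectors = {tuple(invariant_vector(G)) for G in graphs}
print(f"{len(graphs)} graphs -> {len(vectors)} distinct invariant vectors")
```

If the two printed counts match, this small invariant vector already discriminates all pairs; the paper's point is that enough such components yield perfect discrimination on much larger graph sets.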
With an increasing number of service providers in the cloud market, the competition between them is also increasing. Each provider attempts to attract customers by providing a high-quality service at the lowest possible cost while still making a profit. Often, cloud resources are advertised and brokered in a spot-market style, i.e., traded for immediate delivery. This paper proposes an architecture for a brokerage model specifically for multi-cloud resource spot markets that integrates the resource brokerage function across several cloud providers. We use a tuple-space architecture to facilitate coordination. This architecture specifically supports multiple cloud providers selling unused resources in the spot market. To support the matching process of finding the best match between customer requirements and provider offers, offers are matched with regard to the lowest possible cost available to the customer in the market at the time of the request. The key role of this architecture is to provide coordination techniques built on a tuple space, adapted to the cloud spot market.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Complex systems of different natures self-organize using common mechanisms, one of which is an increase in their efficiency. The level of organization of such systems can be measured as the increased efficiency of the product of time and energy per event, which is the amount of physical action the event consumes. Here we apply a method developed in physics to study the efficiency of biological systems. The identification of cellular objectives is one of the central topics in the research of microbial metabolic networks. In particular, information about the cellular objective is needed in flux balance analysis, a commonly used constraint-based metabolic network analysis method for predicting cellular phenotypes. The cellular objective may vary depending on the organism and its growth conditions. Nutritionally scarce conditions are probably very common in nature, and, in order to survive in them, cells exhibit various highly efficient nutrient-processing systems such as enzymes. In this study, we explore the efficiency of a metabolic network in transforming substrates into new biomass, and we introduce a new objective function simulating growth efficiency. We are searching for general principles of self-organization across systems of different natures. The objective of increasing the efficiency of physical action has previously been identified as driving systems toward higher levels of self-organization. The flow agents in those networks are driven toward their natural state of motion, which is governed by the principle of least action in physics; we connect this to a power-efficiency principle: systems structure themselves so as to decrease the average amount of action or power per event in the system. In this particular example, action efficiency is examined in the case of the growth efficiency of E. coli. We derive the expression for growth efficiency as a special case of action (power) efficiency, justifying it through first principles in physics. That growth efficiency as a cellular objective of E. coli coincides with previous research on complex systems and is justified by first principles in physics is an expected and confirmed outcome of this work. We examined the properties of growth efficiency using a metabolic model of Escherichia coli. We found that the maximal growth efficiency is obtained at a finite nutrient uptake rate. The rate is substrate dependent and typically does not exceed 20 mmol/h/gDW. We further examined whether maximal growth efficiency could serve as a cellular objective function in metabolic network analysis and found that cellular growth in batch cultivation can be predicted reasonably well under this assumption. The fit to experimental data was slightly better than with the commonly used objective of maximal growth rate. Based on our results, we suggest that maximal growth efficiency can be considered a plausible optimization criterion in metabolic modeling of E. coli. In the future, it would be interesting to study growth efficiency as an objective in other cellular systems and under different cultivation conditions as well.
jufoid=84878
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
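A minimal sketch of the growth-efficiency idea, not the genome-scale E. coli model used in the study: a toy two-pathway FBA-style LP (the yields, the efficient-pathway capacity, and the maintenance term are all invented for illustration) shows how maximizing mu/u peaks at a finite uptake u, while maximizing mu alone saturates the uptake bound.

```python
# Toy illustration: growth efficiency mu/u vs. growth rate mu as objectives.
import numpy as np
from scipy.optimize import linprog

def max_growth(u, cap_eff=10.0, y_eff=0.5, y_ovf=0.2, maintenance=1.0):
    # Split uptake u between an efficient, capacity-limited pathway and an
    # inefficient overflow pathway; maximize mu = y_eff*v1 + y_ovf*v2 - m.
    res = linprog(c=[-y_eff, -y_ovf],
                  A_eq=[[1.0, 1.0]], b_eq=[u],
                  bounds=[(0.0, cap_eff), (0.0, None)])
    return -res.fun - maintenance

uptakes = np.linspace(1.0, 25.0, 97)          # mmol/h/gDW, illustrative range
mu = np.array([max_growth(u) for u in uptakes])
print("uptake maximizing growth rate :", uptakes[np.argmax(mu)])
print("uptake maximizing efficiency  :", uptakes[np.argmax(mu / uptakes)])
```

The first objective drives uptake to its upper bound (25), whereas efficiency peaks at the finite capacity of the efficient pathway (10), mirroring the paper's finding of a finite optimal uptake rate.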
This paper describes a framework for action recognition that aims to recognize the goals and activities of one or more humans from a series of observations. We propose an approach to human action recognition based on the 3D dense micro-block difference. The proposed algorithm is a two-stage procedure: (a) image preprocessing using a 3D Gabor filter and (b) descriptor calculation using the 3D dense micro-block difference with an SVM classifier. In the first step, an efficient spatial computational scheme designed for convolution with a bank of 3D Gabor filters is presented. This filter intensifies motion using a convolution of a set of 3D patches with arbitrarily oriented anisotropic Gaussians. For the preprocessed frames, we calculate local features, namely the 3D dense micro-block difference (3D DMD), which captures the local structure of image patches at multiple scales. The approach processes small 3D blocks of different scales from the frames, capturing their microstructure. The proposed image representation is combined with the Fisher vector method and a linear SVM classifier. We evaluate the proposed approach on the UCF50, HMDB51, and UCF101 databases. Experimental results demonstrate the effectiveness of the proposed approach on videos with stochastic texture backgrounds, with comparisons to state-of-the-art methods.
jufoid=71479
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
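The exact 3D DMD descriptor is not specified in the abstract, so the sketch below is only a loose illustration of the micro-block-difference idea: sample pairs of small 3D blocks inside a video patch and use their mean-intensity differences as the local feature vector. Block size, pair count, and the random patch are all illustrative assumptions.

```python
# Loose sketch of a 3D micro-block difference style feature on a video patch.
import numpy as np

rng = np.random.default_rng(0)

def dmd3d_feature(patch, block=(2, 2, 2), n_pairs=32):
    """patch: (T, H, W) grayscale video patch -> (n_pairs,) feature vector."""
    t, h, w = np.array(patch.shape) - block         # valid block origins
    feats = []
    for _ in range(n_pairs):
        (t1, y1, x1), (t2, y2, x2) = rng.integers(0, [t, h, w], size=(2, 3))
        b1 = patch[t1:t1+block[0], y1:y1+block[1], x1:x1+block[2]]
        b2 = patch[t2:t2+block[0], y2:y2+block[1], x2:x2+block[2]]
        feats.append(b1.mean() - b2.mean())         # micro-block difference
    return np.array(feats)

video_patch = rng.random((8, 16, 16))    # stand-in for a Gabor-preprocessed patch
print(dmd3d_feature(video_patch).shape)  # (32,)
```

In the full pipeline such local features would be aggregated with the Fisher vector method and classified with a linear SVM.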
The purpose of this study is to introduce a new type of activation game and to evaluate the attitudes and user experiences of elderly Chinese people. The game is controlled with a specific 3D-printed handle based on an acceleration sensor. The developed activation game, which requires both cognitive and motor skills, was tested with groups in three Chinese eldercare homes. The game was played by the residents, and user feedback was collected through researchers' observations and players' comments during the gaming events. The most significant finding was the positive user experience of the elderly and their experience of the game as both cognitively stimulating and supportive of player activation. The game controller handle was found to be convenient for elderly people, as it supports active use of the hands, which the players considered important. Based on the observations, the developed game also showed great potential for social interaction. However, some challenges related to the game controller handle and the game implementation were also noticed. These positive findings, as well as the discovered challenges, are reported in this study. In conclusion, the results strongly encourage continued development of activation games for older adults.
EXT="Merilampi, Sari"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The cross-directional (CD) basis weight control on paper machines is improved by optimizing the path of the scanning measurement. The optimal path results from an LQG problem and depends on how the uncertainty of the present basis weight estimate and the intensity of the process noise vary in the cross direction. These factors are assessed by how accurately the CD basis weight estimate predicts the measured optical transmittance with a linear adaptive model on synchronized basis weight and transmittance data. Simulations of the optimized scanner path in disturbance scenarios are presented, and the practical implementation of scanner control is discussed.
Research output: Contribution to journal › Article › Scientific › peer-review
This paper proposes, for the first time without using any linearization or order reduction, an adaptive and model-based discharge pressure control design for variable displacement axial piston pumps (VDAPPs), whose dynamical behavior is highly nonlinear and can be described by a fourth-order differential equation. A rigorous stability proof, with asymptotic convergence, is given for the entire system. In the proposed novel controller design method, specifically designed stabilizing terms constitute an essential core that cancels out all the stability-preventing terms. The experimental results reveal that rapid parameter adaptation significantly improves the feedback signal tracking precision compared to a known-parameter controller design. In the comparative experiments, the adaptive controller demonstrates state-of-the-art discharge pressure control performance, opening a possibility for energy consumption reductions in hydraulic systems driven with VDAPPs.
Research output: Contribution to journal › Article › Scientific › peer-review
The compressed sensing (CS) theory shows that a sparse signal can be recovered at a sampling rate (much) lower than the required Nyquist rate. In practice, many image signals are sparse in a certain domain, and because of this, CS theory has been successfully applied to image compression in the past few years. The most popular CS-based image compression scheme is block-based CS (BCS). In this paper, we focus on the design of an adaptive sampling mechanism for BCS through a deep analysis of the statistical information of each image block. Specifically, this analysis is carried out at the encoder side (which needs a few overhead bits) and at the decoder side (which requires a feedback channel to the encoder), respectively. The two corresponding solutions are compared carefully in our work. We also present experimental results showing that our proposed adaptive method offers a remarkable quality improvement compared with traditional BCS schemes.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
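A small sketch of one plausible encoder-side variant of such adaptive sampling; the paper's actual block statistics and rate allocation are not reproduced. Here the per-block measurement count is simply made proportional to the block's standard deviation before random Gaussian sampling.

```python
# Sketch: variance-adaptive measurement allocation for block-based CS.
import numpy as np

rng = np.random.default_rng(0)

def adaptive_bcs_sample(image, block=16, total_rate=0.25):
    h, w = image.shape
    blocks = [image[y:y+block, x:x+block].ravel()
              for y in range(0, h, block) for x in range(0, w, block)]
    stds = np.array([b.std() + 1e-8 for b in blocks])
    n_total = int(total_rate * image.size)
    # blocks with more variance get proportionally more measurements
    m_per_block = np.maximum(1, (n_total * stds / stds.sum()).astype(int))
    measurements = [rng.standard_normal((m, block * block)) @ b
                    for m, b in zip(m_per_block, blocks)]
    return measurements, m_per_block

img = rng.random((64, 64))
meas, alloc = adaptive_bcs_sample(img)
print("measurements per block:", alloc)
```

A decoder-side variant would instead infer the allocation from already-received measurements and feed it back to the encoder, at the cost of the feedback channel mentioned in the abstract.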
The task of additional lossless compression of JPEG images is considered. We propose to decode a JPEG image and recompress it using the lossy BPG (Better Portable Graphics) codec, which is based on a subset of the open HEVC video compression standard. The decompressed and smoothed BPG image is then used for the calculation and quantization of DCT coefficients in 8x8 image blocks using the quantization tables of the source JPEG image. The difference between the obtained quantized DCT coefficients and the quantized DCT coefficients of the source JPEG image (the prediction error) is calculated. This difference is losslessly compressed by the proposed context modeling and arithmetic coding. In this way, the source JPEG image is replaced by two files: the compressed BPG image and the compressed difference needed for lossless restoration of the source JPEG image. It is shown that the proposed approach provides compression ratios comparable with those of the state-of-the-art PAQ8, WinZip, and STUFFIT file archivers. At the same time, the BPG images may be used for fast preview of the compressed JPEG images.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
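The sketch below illustrates only the residual-computation step, under simplifying assumptions: a flat stand-in quantization table, JPEG coefficients recomputed from decoded pixels (a real implementation reads them directly from the bitstream), and additive noise standing in for the BPG round trip. The BPG codec and the context-based arithmetic coder are omitted.

```python
# Sketch: quantized-DCT prediction error between a BPG-style approximation
# and the source JPEG coefficients.
import numpy as np
from scipy.fft import dctn

def quantized_dct_blocks(img, qtable):
    h, w = img.shape
    out = np.empty((h // 8, w // 8, 8, 8), dtype=np.int32)
    for by in range(h // 8):
        for bx in range(w // 8):
            block = img[by*8:(by+1)*8, bx*8:(bx+1)*8] - 128.0
            coeffs = dctn(block, norm='ortho')
            out[by, bx] = np.round(coeffs / qtable).astype(np.int32)
    return out

rng = np.random.default_rng(0)
qtable = np.full((8, 8), 16.0)                        # stand-in quantization table
jpeg_decoded = rng.integers(0, 256, (64, 64)).astype(float)
bpg_decoded = jpeg_decoded + rng.normal(0, 2, (64, 64))  # stand-in for BPG output
residual = quantized_dct_blocks(jpeg_decoded, qtable) - \
           quantized_dct_blocks(bpg_decoded, qtable)
print("nonzero residual coefficients:", np.count_nonzero(residual))
```

Because the BPG approximation is close to the source, most residual coefficients are zero, which is what makes the lossless coding of the difference cheap.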
Development of multimedia systems on heterogeneous platforms is a challenging task with existing design tools due to a lack of rigorous integration between high-level abstract modeling and low-level synthesis and analysis. In this paper, we present a new dataflow-based design tool, called the targeted dataflow interchange format (TDIF), for the design, analysis, and implementation of embedded software for multimedia systems. Our approach provides novel capabilities, based on the principles of task-level dataflow analysis, for exploring and optimizing interactions across application behavior; operational context; heterogeneous platforms, including high-performance embedded processing architectures; and implementation constraints.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
A gradient-based method has been developed and programmed to optimize the NH3 injections of an existing biomass-fired bubbling fluidized bed boiler, the targets being to minimize both the NO and the NH3 emissions. In this context, the reactive flow inside the boiler is modelled using a custom-built OpenFOAM solver, and the NO and NH3 species are then calculated using a post-processing technique. The multiobjective optimization problem is solved by optimizing several weight combinations of the objectives using the gradient-projection method. The required sensitivities are calculated by differentiating the post-processing solver according to the discrete adjoint method. The adjoint-based sensitivities are validated against finite-difference calculations. Moreover, in order to evaluate the optimization results, the optimization problem is also solved using evolutionary algorithm software. Finally, the optimization results are physically interpreted, and the strengths and weaknesses of the proposed method are discussed.
Research output: Contribution to journal › Article › Scientific › peer-review
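A toy sketch of the optimization driver only: the CFD solver and its discrete adjoint are replaced by invented analytic stand-ins for the NO and NH3 objectives, and a sweep of weight combinations with gradient projection onto box constraints traces compromise solutions, as in the weighted-sum approach described above.

```python
# Sketch: weighted-sum multiobjective optimization by gradient projection.
import numpy as np

def objectives_and_grads(x):
    # toy stand-ins for NO and NH3 emissions as functions of injection rates x
    f_no, g_no = np.sum((x - 2.0) ** 2), 2.0 * (x - 2.0)
    f_nh3, g_nh3 = np.sum(x ** 2), 2.0 * x
    return (f_no, f_nh3), (g_no, g_nh3)

def gradient_projection(w, x0, lo=0.0, hi=3.0, step=0.05, iters=200):
    x = x0.copy()
    for _ in range(iters):
        _, (g1, g2) = objectives_and_grads(x)
        x = np.clip(x - step * (w * g1 + (1 - w) * g2), lo, hi)  # project on box
    return x

x0 = np.full(4, 1.5)                  # four injection rates, illustrative
for w in (0.0, 0.5, 1.0):             # weight sweep traces the trade-off curve
    x = gradient_projection(w, x0)
    (f1, f2), _ = objectives_and_grads(x)
    print(f"w={w:.1f}: NO~{f1:.2f}, NH3~{f2:.2f}")
```

In the paper the gradients come from the discrete adjoint of the post-processing solver rather than from closed-form expressions.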
This paper explores advanced electrode modeling in the context of separate and parallel transcranial electrical stimulation (tES) and electroencephalography (EEG) measurements. We focus on boundary-condition-based approaches that do not necessitate adding auxiliary elements, e.g., sponges, to the computational domain. In particular, we investigate the complete electrode model (CEM), which incorporates a detailed description of the skin-electrode interface, including its contact surface, impedance, and normal current distribution. The CEM can be applied to both tES and EEG electrodes, which is advantageous when a parallel system is used. In comparison to the CEM, we test two important reduced approaches: the gap model (GAP) and the point electrode model (PEM). We aim to find out the differences between these approaches for a realistic numerical setting based on stimulation of the auditory cortex. The results obtained suggest, among other things, that GAP and GAP/PEM are sufficiently accurate for the practical application of tES and parallel tES/EEG, respectively. Differences between CEM and GAP were observed mainly in the skin compartment, where only the CEM explains the heating effects characteristic of tES.
Research output: Contribution to journal › Article › Scientific › peer-review
Context: several companies, particularly small and medium-sized enterprises (SMEs), often face software maintenance issues due to a lack of software quality assurance (SQA). SQA is a complex task that requires substantial effort and expertise, often not available in SMEs. Several SQA models, including maintenance prediction models, have been defined in research papers. However, these models are commonly defined as "one-size-fits-all" and are mainly targeted at big industry, which can afford software quality experts who undertake the data interpretation tasks. Objective: in this work, we propose an approach to continuously monitor the software operated by end users, automatically collecting issues and recommending possible fixes to developers. The continuous exception monitoring system will also serve as a knowledge base to suggest a set of quality practices for avoiding the (re)introduction of bugs into the code. Method: first, we identify a set of SQA practices applicable to SMEs, based on their main constraints. Then, we identify a set of prediction techniques, including regression and machine learning, keeping track of bugs and exceptions raised by the released software. Finally, we provide each company with a tailored SQA model, automatically obtained from the company's bug/issue history. Developers are then provided with the quality models through a set of plug-ins for integrated development environments. These suggest a set of SQA actions that should be undertaken in order to maintain a certain quality level and allow the most severe issues to be removed with the lowest possible effort. Conclusion: the collected measures will be made available as a public dataset, so that researchers can also benefit from the project's results. This work is developed in collaboration with local SMEs and existing open-source projects and communities.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this paper, a design-stage failure identification framework is proposed using a modeling and simulation approach based on dimensional analysis and qualitative physics. The proposed framework is intended to provide a new approach to modeling behavior in the Functional-Failure Identification and Propagation (FFIP) framework, which estimates potential faults and their propagation paths under critical event scenarios. The initial FFIP framework is based on combining hierarchical system models of functionality and configuration with behavioral simulation and qualitative reasoning. This paper proposes to develop a behavioral model derived from information available at the configuration level. Specifically, the new behavioral model uses design variables, which are associated with units and quantities (e.g., mass, length, time). The proposed framework continues the work of allowing the analysis of functional failures and fault propagation at a highly abstract system concept level, before any potentially high-cost design commitments are made. The main contribution of this paper consists of developing component behavioral models based on the combination of the fundamental design variables used to describe components and their units or quantities, more precisely describing component behavior.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
We have created an ultralight, movable, "immaterial" fogscreen. It is based on the fogscreen mid-air imaging technology. The hand-held unit is roughly the size and weight of an ordinary toaster. If the screen is tracked, it can be swept in the air to create mid-air slices of volumetric objects, or to show augmented reality (AR) content on top of real objects. Interfacing devices and methodologies, such as hand and gesture trackers, camera-based trackers and object recognition, can make the screen interactive. The user can easily interact with any physical object or virtual information, as the screen is permeable. Any real objects can be seen through the screen, instead of e.g., through a video-based augmented reality screen. It creates a mixed reality setup where both the real world object and the augmented reality content can be viewed and interacted with simultaneously. The hand-held mid-air screen can be used e.g., as a novel collaborating or classroom tool for individual students or small groups.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this article we introduce a novel stochastic Hebb-like learning rule for neural networks that is neurobiologically motivated. This learning rule combines features of unsupervised (Hebbian) and supervised (reinforcement) learning and is stochastic with respect to the selection of the time points at which a synapse is modified. Moreover, the learning rule does not only affect the synapse between the pre- and postsynaptic neuron, which is called homosynaptic plasticity, but also affects further remote synapses of the pre- and postsynaptic neurons. This more complex form of synaptic plasticity has recently come under investigation in neurobiology and is called heterosynaptic plasticity. We demonstrate that this learning rule is useful in training neural networks by learning parity functions, including the exclusive-or (XOR) mapping, in a multilayer feed-forward network. We find that our stochastic learning rule works well, even in the presence of noise. Importantly, the mean learning time increases polynomially with the number of patterns to be learned, indicating efficient learning.
Research output: Contribution to journal › Article › Scientific › peer-review
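The article's exact rule is not reproduced below; this crude, hypothetical sketch keeps only two of its ingredients, reward feedback and stochastic selection of which synapses change, in a perturbation-based learner for the XOR mapping on a 2-2-1 feed-forward network.

```python
# Hedged sketch: reward-gated, stochastically selected weight perturbations
# learning XOR (not the paper's Hebb-like rule).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w1, b1, w2, b2):
    h = np.tanh(X @ w1 + b1)
    return np.tanh(h @ w2 + b2).ravel()

def error(params):
    return np.mean((forward(*params) - y) ** 2)

params = [rng.normal(0, 1, s) for s in [(2, 2), (2,), (2, 1), (1,)]]
for _ in range(20000):
    # stochastically pick a subset of synapses and perturb them
    trial = [p + rng.normal(0, 0.1, p.shape) * (rng.random(p.shape) < 0.3)
             for p in params]
    if error(trial) < error(params):      # keep the change only if "rewarded"
        params = trial

print(np.round(forward(*params), 2))      # typically approximates [0, 1, 1, 0]
```

Unlike this accept-if-better scheme, the article's rule modulates Hebbian co-activity terms and spreads updates heterosynaptically, but the stochastic timing of synaptic changes is the shared idea.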
A MECSEL emitting around 825 nm is reported. With a tuning range from 807 nm to 840 nm, the MECSEL extends the coverage of high-beam-quality semiconductor-based lasers in the short 8XX nm region and opens new perspectives for scanning ground-based water-vapor differential absorption lidar. A maximum output power of 1.4 W has been achieved in room-temperature operation at 12.5 W of absorbed power, using a 532 nm pump laser. The beam quality has been investigated by M2 measurements at different pump powers. The effect of a growing pump mode and thermal lensing has been observed, as the beam divergence angle decreases and the beam waist radius increases with increasing pump power.
INT=phys,"Rajala, Patrik"
jufoid=71479
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Results of denoising based on the discrete cosine transform are obtained for a wide class of images corrupted by additive noise. Three types of noise are analyzed: additive white Gaussian noise and additive spatially correlated Gaussian noise with middle and high correlation levels. The TID2013 image database and some additional images are used as test images. A conventional DCT filter and BM3D are used as denoising techniques. Denoising efficiency is described by the PSNR and PSNR-HVS-M metrics. Within the hard-thresholding denoising mechanism, DCT-spectrum coefficient statistics are used to characterize images and, subsequently, the denoising efficiency for them. Denoising efficiency results are fitted to these statistics, and efficient approximations are obtained. It is shown that the obtained approximations provide high accuracy in predicting denoising efficiency.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
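For reference, a minimal version of the hard-thresholding DCT filter that such efficiency prediction targets; the threshold factor and the synthetic test image are illustrative choices, and BM3D and the PSNR-HVS-M metric are omitted.

```python
# Minimal sketch of block-DCT denoising with hard thresholding.
import numpy as np
from scipy.fft import dctn, idctn

def dct_denoise(noisy, sigma, block=8, beta=2.7):
    h, w = noisy.shape
    out = np.zeros_like(noisy)
    thr = beta * sigma                    # common hard-threshold choice
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            c = dctn(noisy[y:y+block, x:x+block], norm='ortho')
            dc = c[0, 0]
            c[np.abs(c) < thr] = 0.0      # zero small spectrum coefficients
            c[0, 0] = dc                  # always keep the DC term
            out[y:y+block, x:x+block] = idctn(c, norm='ortho')
    return out

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))
noisy = clean + rng.normal(0, 10, clean.shape)     # AWGN, sigma = 10
psnr = lambda a, b: 10 * np.log10(255**2 / np.mean((a - b)**2))
print(f"PSNR noisy {psnr(clean, noisy):.1f} dB -> "
      f"denoised {psnr(clean, dct_denoise(noisy, 10)):.1f} dB")
```

The paper's contribution is to predict the achievable PSNR gain of such filters from the DCT-spectrum statistics of the noisy image, before actually denoising it.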
Finite element methods have been shown to achieve high accuracies in numerically solving the EEG forward problem and they enable the realistic modeling of complex geometries and important conductive features such as anisotropic conductivities. To date, most of the presented approaches rely on the same underlying formulation, the continuous Galerkin (CG)-FEM. In this article, a novel approach to solve the EEG forward problem based on a mixed finite element method (Mixed-FEM) is introduced. To obtain the Mixed-FEM formulation, the electric current is introduced as an additional unknown besides the electric potential. As a consequence of this derivation, the Mixed-FEM is, by construction, current preserving, in contrast to the CG-FEM. Consequently, a higher simulation accuracy can be achieved in certain scenarios, e.g., when the diameter of thin insulating structures, such as the skull, is in the range of the mesh resolution. A theoretical derivation of the Mixed-FEM approach for EEG forward simulations is presented, and the algorithms implemented for solving the resulting equation systems are described. Subsequently, first evaluations in both sphere and realistic head models are presented, and the results are compared to previously introduced CG-FEM approaches. Additional visualizations are shown to illustrate the current preserving property of the Mixed-FEM. Based on these results, it is concluded that the newly presented Mixed-FEM can at least complement and in some scenarios even outperform the established CG-FEM approaches, which motivates a further evaluation of the Mixed-FEM for applications in bioelectromagnetism.
Research output: Contribution to journal › Article › Scientific › peer-review
The C2NET project aims to provide a cloud-based platform for supply chain interactions. The architecture of this platform includes a Data Collection Framework (DCF) for managing the collection of a company's data. The DCF collects, transforms, and stores data from both Internet of Things (IoT) devices on the factory shopfloor and company enterprise systems via two types of hubs: the legacy system hub (LSH) and the IoT hub. Since C2NET targets small and medium-sized enterprises (SMEs), the enterprise data, called legacy data in the C2NET project, can be provided via Excel files. Thus, this research work presents a technique for processing Excel files in the LSHs. The technique adopts the concept of multi-agent systems for processing the tabular data of the Excel files in the LSH. The multi-agent approach allows the LSH to process any Excel file regardless of the complexity of the data structure or of the file's tables. Furthermore, the presented approach enhances the processing of Excel files in different respects, such as the size of the Excel file and the required processing power.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Internet access has become commonplace in the modern world. As the number of users and the amount of data traffic in the Internet keep rising exponentially, while the requirements of novel applications are becoming more stringent, there is a clear need for new networking solutions. Therefore, one of the key concepts in resolving the challenges of the upcoming 5G era of communications will be represented by multi-radio heterogeneous networks, where the users can gain benefits by either being connected to multiple different radio technologies simultaneously or seamlessly changing from one network to another based on their needs. In this work, we propose a multi-purpose automated vehicular platform prototype equipped with multiple radio access technologies, which was constructed to demonstrate the potential performance gains provided by the use of multi-radio heterogeneous networks in terms of network throughput, latency, and reliability. We discuss the potential drawbacks of using multiple radio interfaces at the same time. The constructed vehicular platform prototype constitutes a flexible research framework for communications technology within heterogeneous networks and becomes helpful for supporting future use cases of industrial IoT applications.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Purpose - The paper aims to evaluate the knowledge-based urban development (KBUD) dynamics of a rapidly emerging knowledge city-region, Tampere region, Finland. Design/methodology/approach - The paper empirically investigates Tampere region's development achievements and progress from the knowledge perspective. Findings - The research, through qualitative and quantitative analyses, reveals the regional development strengths, weaknesses, opportunities and threats of Tampere region. Originality/value - The paper provides useful suggestions based on the lessons learned from the Tampere case investigation that could shed light on the KBUD journey of city-regions.
EXT="Lönnqvist, Antti"
Research output: Contribution to journal › Article › Scientific › peer-review
Purpose - Recent investigations of the magnetic properties of non-oriented (NO) steel sheets enhance the comprehension of the magnetic anisotropy behaviour of widely employed electrical sheets. The concept of energy/coenergy density can be employed to model these magnetic properties. However, it usually takes an implicit form, which requires an iterative process. The purpose of this paper is to develop an analytical model of these magnetic properties with an explicit formulation in order to ease the computations. Design/methodology/approach - From rotational measurements, the anhysteretic curves are interpolated in order to extract the magnetic energy density for different directions and amplitudes of the magnetic flux density. Furthermore, an analytical representation of this energy is suggested, based on a statistical distribution, which aims to minimize the intrinsic energy of the material. The model is finally validated by comparing measured and computed values of the magnetic field strength. Findings - The proposed model is based on an analytical formulation of the energy depending on the components of the magnetic flux density. This formulation is composed of three Gumbel distributions. Each functional parameter of the energy density is formulated with only four parameters, which are calculated by fitting the energy extracted from the measurements. Finally, the proposed model is validated by comparing computations and measurements of nine H loci for NO steel sheets at 10 Hz. The proposed analytical model shows good agreement, with an average relative error of 27 per cent. Originality/value - The paper presents an original analytical method to model the magnetic anisotropy of NO electrical sheets. With this analytical formulation, the determination of H does not require any iterative process, as is usually the case when the energy method is coupled with an implicit function. The method can easily be incorporated into the finite element method since it does not require any extra iterative process.
Research output: Contribution to journal › Article › Scientific › peer-review
The directional deafness problem is one of the most important challenges in beamforming-based channel access at mmWave frequencies and is believed to have detrimental effects on system performance in the form of excessive delays and significant packet drops. In this paper, we contribute a quantitative analysis of deafness in directional random access systems operating in unlicensed bands by relying on stochastic geometry formulations. We derive a general numerical approach that captures the behavior of the deafness probability, as well as provide a closed-form solution for a typical sector-shaped antenna model, which may then be extended to a more realistic two-sector pattern. Finally, employing contemporary IEEE 802.11ad modeling numerology, we illustrate our analysis, revealing the importance of deafness-related considerations and their system-level impact.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Cloud-enabled tools developed in the Cloud Collaborative Manufacturing Networks (C2NET) project address the needs of small and medium enterprises with respect to information exchange and visibility across the collaboration partners in the supply network, coupled with automated and collaborative production planning and supply management. This paper analyses a case of an oil lubrication and hydraulic systems manufacturer and describes a pilot application of C2NET where the production schedule is optimized according to the priorities of the pilot company. In this case the goal is a highly adaptive just-in-time manufacturing schedule with guaranteed on time delivery.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper presents an improved version of a recent state-of-the-art texture descriptor called Gaussians of Local Descriptors (GOLD), which is based on a multivariate Gaussian that models the distribution of the local features describing the original image. The full-rank covariance matrix, which lies on a Riemannian manifold, is projected onto the tangent Euclidean space and concatenated to the mean vector to represent a given image. In this paper, we test the following features for describing the original image: the scale-invariant feature transform (SIFT), the histogram of gradients (HOG), and Weber's law descriptor (WLD). To improve the baseline version of GOLD, we describe the covariance matrix using a set of visual features that are fed into a set of support vector machines (SVMs). The SVMs are combined by the sum rule. The scores obtained by an SVM trained using the original GOLD approach and the SVMs trained with visual features are then combined by the sum rule. Experiments show that our proposed variant outperforms the original GOLD approach. The superior performance of the proposed system is validated across a large set of datasets. Particularly interesting is the performance obtained on two widely used person re-identification datasets, CAVIAR4REID and IAS, where the proposed GOLD variant is coupled with a state-of-the-art ensemble to obtain an improvement in performance on these two datasets. Moreover, we performed further tests combining GOLD with non-binary features (local ternary/quinary patterns) and deep transfer learning. The fusion of SVMs trained with deep features and SVMs trained using the ternary/quinary coding ensemble is demonstrated to obtain a very high performance across datasets. The MATLAB code for the ensemble of classifiers and for the extraction of the features will be made publicly available to other researchers for future comparisons.
Research output: Contribution to journal › Article › Scientific › peer-review
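A compact sketch of the baseline GOLD signature described above; the SIFT/HOG/WLD extraction and the SVM fusion stages are omitted, and random stand-in descriptors are used. The covariance is regularized to full rank, mapped to the tangent space via the matrix logarithm, and its upper triangle is concatenated with the mean.

```python
# Sketch: GOLD-style image signature from a set of local descriptors.
import numpy as np
from scipy.linalg import logm

def gold_signature(local_descriptors):
    """local_descriptors: (n, d) array of local features from one image."""
    mu = local_descriptors.mean(axis=0)
    cov = np.cov(local_descriptors, rowvar=False)
    cov += 1e-4 * np.eye(cov.shape[0])        # regularize to full rank
    log_cov = logm(cov).real                  # projection to the tangent space
    iu = np.triu_indices(cov.shape[0])
    return np.concatenate([mu, log_cov[iu]])  # mean + vectorized covariance

rng = np.random.default_rng(0)
descriptors = rng.random((500, 8))            # stand-in for SIFT/HOG/WLD features
sig = gold_signature(descriptors)
print(sig.shape)                              # (8 + 8*9/2,) = (44,)
```

In the paper's improved variant, such signatures and additional visual features each feed their own SVM, and the per-classifier scores are fused by the sum rule.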
In this paper, a model predictive control strategy is adapted to the cascaded H-bridge (CHB) multilevel rectifier. The proposed control scheme aims to keep the sinusoidal input current in phase with the supply voltage and to achieve independent voltage regulation of the H-bridge cells. To do so, the switches are directly manipulated without the need for a modulator. Furthermore, since all possible switching combinations are taken into account, the controller exhibits favorable performance not only under nominal conditions but also under asymmetrical voltage potentials and unbalanced loads. Finally, a short horizon is employed in order to ensure robustness; this way, the required computational effort remains reasonable, making it possible to implement the algorithm in a real-time system. Experimental results obtained from a two-cell CHB rectifier are presented in order to demonstrate the performance of the proposed approach.
Research output: Contribution to journal › Article › Scientific › peer-review
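A conceptual sketch of such a finite-control-set predictive controller for a two-cell CHB with a one-step horizon; all circuit parameters and the cost weighting are illustrative assumptions, not the paper's values. Every switching combination is evaluated against a cost mixing current tracking and cell-voltage regulation, and the minimizer is applied directly, with no modulator.

```python
# Sketch: one-step finite-control-set MPC for a two-cell CHB rectifier.
import itertools

L, R, Ts = 5e-3, 0.5, 50e-6          # grid inductance, resistance, sample time
C, R_LOAD = 2e-3, 50.0               # cell capacitance and dc-side loads
V_REF, LAMBDA = 60.0, 0.1            # cell-voltage reference and cost weight

def mpc_step(i, v_cells, v_grid, i_ref):
    best_cost, best_u = float("inf"), None
    for u in itertools.product((-1, 0, 1), repeat=2):      # 9 combinations
        v_conv = sum(s * v for s, v in zip(u, v_cells))
        i_next = i + Ts / L * (v_grid - R * i - v_conv)    # current prediction
        v_next = [v + Ts / C * (s * i - v / R_LOAD)        # voltage predictions
                  for s, v in zip(u, v_cells)]
        cost = (i_ref - i_next) ** 2 + \
               LAMBDA * sum((V_REF - v) ** 2 for v in v_next)
        if cost < best_cost:
            best_cost, best_u = cost, u
    return best_u                     # switching state to apply this period

print(mpc_step(i=2.0, v_cells=[58.0, 61.0], v_grid=80.0, i_ref=3.0))
```

Enumerating all combinations is what lets the controller handle asymmetrical cell voltages and unbalanced loads, while the short horizon keeps the enumeration cheap enough for real-time execution.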
Several major advances in cell and molecular biology have been made possible by recent advances in live-cell microscopy imaging. To support these efforts, automated image analysis methods such as cell segmentation and tracking during time-series analysis are needed. To this aim, one important step is the validation of such image processing methods. Ideally, the "ground truth" should be known, which is possible only by manually labelling images or by using artificially produced images. To simulate artificial images, we have developed a platform for simulating biologically inspired objects, which generates bodies with various morphologies and kinetics that can aggregate to form clusters. Using this platform, we tested and compared four tracking algorithms: simple nearest-neighbour (NN), NN with morphology, and two DBSCAN-based methods. We show that simple NN works well for small object velocities, while the others perform better at higher velocities and when clustering occurs. Our new platform for generating benchmark images to test image analysis algorithms is openly available at http://griduni.uninova.pt/Clustergen/ClusterGen-v1.0.zip.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
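A minimal version of the simple NN baseline from the comparison: objects in frame t are greedily linked one-to-one to the closest detections in frame t+1 by ascending centroid distance. The morphology terms and the DBSCAN-based variants are omitted, and the distance gate is an illustrative parameter.

```python
# Sketch: simple nearest-neighbour frame-to-frame object linking.
import numpy as np
from scipy.spatial.distance import cdist

def nn_track(centroids_t, centroids_t1, max_dist=20.0):
    """Greedy one-to-one assignment by ascending distance; returns index pairs."""
    d = cdist(centroids_t, centroids_t1)
    links, used_i, used_j = [], set(), set()
    for i, j in zip(*np.unravel_index(np.argsort(d, axis=None), d.shape)):
        if i not in used_i and j not in used_j and d[i, j] <= max_dist:
            links.append((int(i), int(j)))
            used_i.add(i); used_j.add(j)
    return links

a = np.array([[0.0, 0.0], [10.0, 10.0]])
b = np.array([[11.0, 9.0], [1.0, 1.0]])
print(nn_track(a, b))   # [(0, 1), (1, 0)]: each object linked to its neighbour
```

This baseline breaks down exactly where the paper observes it does: at high velocities and when objects cluster, motivating the morphology- and density-aware alternatives.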
Modern-day systems often require reconfigurability in the operating parameters of transmit and receive antennas, such as the resonant frequency, radiation pattern, impedance, or polarization. In this work, a novel approach to antenna reconfigurability is presented by integrating antennas with the ancient art of origami. The proposed antenna consists of an inkjet-printed center-fed spiral antenna, which is designed to resonate at 1.0 GHz and to have a reconfigurable radiation pattern while maintaining the 1.0 GHz resonance with little variation in input impedance. When flat, the antenna is a planar spiral exhibiting a bidirectional radiation pattern. By a telescoping action, the antenna can be reconfigured into a conical spiral with a directional pattern and higher gain, which gives the antenna a large front-to-back ratio. Constructing the antenna in this manner allows for a simple, lightweight, transportable antenna that can be expanded to specification in the field.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
With the increasing need for accurate mining and classification from multimedia data content, and the growth of such multimedia applications in mobile and distributed architectures, stream mining systems require increasing amounts of flexibility, extensibility, and adaptivity for effective deployment. To address this challenge, we propose a novel approach that rigorously integrates foundations of dataflow modeling for high level signal processing system design, and adaptive stream mining based on dynamic topologies of classifiers. In particular, we introduce a new design environment, called the lightweight dataflow for dynamic data driven application systems (LiD4E) environment. LiD4E provides formal semantics, rooted in dataflow principles, for design and implementation of a broad class of multimedia stream mining topologies. We demonstrate the capabilities of LiD4E using a face detection application that systematically adapts the type of classifier used based on dynamically changing application constraints.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Stochastic channel models typically abstract away the details of the paths that carry energy in the radio channel. While these models have been universally accepted for decades due to their ease of use and reasonable accuracy in most practical cases, the appearance of steerable, narrow-beam antennas in mmWave bands makes exact path information very valuable, primarily for beam tracking algorithms. Currently, only deterministic channel modeling (e.g., ray tracing) provides the required level of detail, but at prohibitive computing cost. This limits the study and design environments for such algorithms to the confines of existing ray tracing data, which is bulky and rarely available for free. In this paper, we consider an approach to stochastic channel modeling that achieves a level of detail equivalent to ray tracing, but at a fraction of the computing cost. The proposed approach may be immediately applied to any system operating at 20-100 GHz. It allows researchers and engineers to perform quick testing of elaborate mmWave MAC and PHY algorithms with a system-level simulation, without having to obtain exhaustive measurement or ray tracing data.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
We propose a full processing pipeline for acquiring anthropometric measurements from 3D scans. The first stage of our pipeline is a commercial point cloud scanner. In the second stage, a pre-defined body model is fitted to the captured point cloud; we have generated one male and one female model from the SMPL library. The fitting process is based on a non-rigid iterative closest point algorithm that minimizes the overall energy of point-distance and local-stiffness terms. In the third stage, we measure multiple circumference paths on the fitted model surface and use a nonlinear regressor to provide the final estimates of the anthropometric measurements. We scanned 194 male and 181 female subjects, and the proposed pipeline provides mean absolute errors from 2.5 to 16.0 mm, depending on the anthropometric measurement.
Research output: Contribution to journal › Article › Scientific › peer-review
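A sketch of the third stage only, under simplifying assumptions: scanning and the non-rigid ICP fit are omitted, the circumference path is approximated by the convex-hull perimeter of a horizontal slice through the fitted model's vertices, and the final nonlinear regressor is left out.

```python
# Sketch: approximate a circumference path on a fitted body model surface.
import numpy as np
from scipy.spatial import ConvexHull

def circumference(vertices, height, tol=0.01):
    """vertices: (n, 3) model points; slice near the given z height (meters)."""
    slice_pts = vertices[np.abs(vertices[:, 2] - height) < tol][:, :2]
    hull = ConvexHull(slice_pts)                 # ordered boundary of the slice
    ring = slice_pts[hull.vertices]
    return np.sum(np.linalg.norm(np.roll(ring, -1, axis=0) - ring, axis=1))

# stand-in "torso": a cylinder of radius 0.15 m -> circumference ~ 0.94 m
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
z = np.repeat(np.linspace(0.8, 1.2, 50), 200)
verts = np.column_stack([0.15 * np.cos(np.tile(theta, 50)),
                         0.15 * np.sin(np.tile(theta, 50)), z])
print(f"waist path length: {circumference(verts, 1.0):.3f} m")
```

In the full pipeline, several such raw path lengths are mapped by a trained nonlinear regressor to the final anthropometric estimates, which absorbs systematic biases of the path approximation.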
In this paper, we describe Antroposeeni, a mixed reality game designed and developed for mobile devices. Antroposeeni utilizes location-based services, GPS for tracking users and augmented reality techniques for displaying captivating audiovisual content and creating rich experiences. Our demonstration will introduce a pilot version of the game, which encompasses narrative elements of the game mediated through developed media technologies. The goal for the demonstration is to give the conference visitors a chance to test the game in a specifically tailored route close to the conference site. After conducting the pilot we plan to organize a short review regarding the user experience.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The use of 3D video is growing in several fields, such as entertainment, military simulations, and medical applications. However, the process of recording, transmitting, and processing 3D video is prone to errors, thus producing artifacts that may affect the perceived quality. Nowadays, a challenging task is the definition of a new metric able to predict the perceived quality with low computational complexity so that it can be used in real-time applications. Research in this field is very active due to the complexity of analyzing the influence of stereoscopic cues. In this paper we present a novel stereoscopic metric, based on the combination of relevant features, that is able to predict the subjective quality rating more accurately.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
A physiologically relevant environment is essential for successful long-term cell culturing in vitro. Precise control of temperature, one of the most crucial environmental parameters in cell cultures, increases the fidelity and repeatability of the experiments. Unfortunately, direct temperature measurement can interfere with the cultures or prevent imaging of the cells. Furthermore, assessing dynamic temperature variations in the cell culture area is challenging with the methods traditionally used for measuring temperature in cell culture systems. To overcome these challenges, we integrated a microscale cell culture environment together with live-cell imaging and precise local temperature control based on an indirect measurement. The control method uses a remote temperature measurement and a mathematical model to estimate the temperature in the desired area. The system maintained the temperature at 37±0.3 °C for more than 4 days. We also showed that the system precisely controls the culture temperature during temperature transients and compensates for the disturbance caused by changing the cell cultivation medium, and we demonstrated the portability of the heating system. Finally, we demonstrated successful long-term culturing of human induced stem cell–derived beating cardiomyocytes and analyzed their beating rates at different temperatures.
Research output: Contribution to journal › Article › Scientific › peer-review
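A conceptual sketch of control from an indirect measurement; the calibration coefficients, toy plant, and controller gains below are all invented stand-ins, not the paper's calibrated thermal model. A remote sensor reading is mapped to an estimated culture-area temperature, which a PI loop then drives to the 37 °C setpoint.

```python
# Sketch: indirect temperature estimation plus a PI heater loop (toy model).
import numpy as np

K_MODEL, OFFSET = 0.95, 2.1        # illustrative calibration coefficients
SETPOINT, KP, KI, DT = 37.0, 2.0, 0.1, 1.0

def estimate_culture_temp(t_remote):
    # mathematical model mapping the remote reading to the culture area
    return K_MODEL * t_remote + OFFSET

integral, t_remote = 0.0, 33.0
for _ in range(600):                                   # 10 minutes at 1 s steps
    err = SETPOINT - estimate_culture_temp(t_remote)
    integral += err * DT
    power = np.clip(KP * err + KI * integral, 0.0, 10.0)       # heater power (W)
    t_remote += DT * (0.05 * power - 0.02 * (t_remote - 21.0))  # toy plant
print(f"estimated culture temperature: {estimate_culture_temp(t_remote):.2f} degC")
```

The essential point is that the controlled variable is the model-based estimate at the culture area, not the sensor reading itself, so the sensor can stay out of the imaging path.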
We extend the internal model principle for systems with boundary control and boundary observation, and construct a robust controller for this class of systems. However, as a consequence of the internal model principle, any robust controller for a plant with infinite-dimensional output space necessarily has infinite-dimensional state space. We proceed to formulate the approximate robust output regulation problem and present a finite-dimensional controller structure to solve it. Our main motivating example is a wave equation on a bounded multidimensional spatial domain with force control and velocity observation at the boundary. In order to illustrate the theoretical results, we construct an approximate robust controller for the wave equation on an annular domain and demonstrate its performance with numerical simulations.
Research output: Contribution to journal › Article › Scientific › peer-review
This paper aims at solving the online equality-constrained quadratic programming problem, which is widely encountered in science and engineering, e.g., in computer vision and pattern recognition, digital signal processing, and robotics. Recurrent neural networks such as the conventional GradientNet and ZhangNet are considered powerful solvers for such problems in light of their high computational efficiency and capability of circuit realisation. In this paper, an improved primal recurrent neural network and its electronic implementation are proposed and analysed. Compared to the existing recurrent networks, i.e., GradientNet and ZhangNet, our network can theoretically guarantee superior global exponential convergence. The robustness of the proposed neural model is also analysed under a large model implementation error, with the upper bound of the steady-state solution error estimated. Simulation results substantiate the theoretical analysis of the proposed model and verify its effectiveness for online equality-constrained quadratic programming.
Research output: Contribution to journal › Article › Scientific › peer-review
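For orientation, a generic primal-dual gradient-flow sketch for min 0.5 x'Qx + c'x subject to Ax = b; this is not the paper's improved primal model, just the family of neural dynamics such solvers belong to. The network state is Euler-integrated and checked against the direct KKT solution.

```python
# Sketch: recurrent-network-style gradient flow for an equality-constrained QP.
import numpy as np

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x, lam, gamma, dt = np.zeros(2), np.zeros(1), 5.0, 1e-3
for _ in range(20000):                       # neural dynamics as an ODE
    x_dot = -gamma * (Q @ x + c + A.T @ lam) # primal descent on the Lagrangian
    lam_dot = gamma * (A @ x - b)            # dual ascent on the constraint
    x, lam = x + dt * x_dot, lam + dt * lam_dot

kkt = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
x_star = np.linalg.solve(kkt, np.concatenate([-c, b]))[:2]
print("network:", np.round(x, 4), " direct KKT:", np.round(x_star, 4))
```

The appeal of this formulation for circuit realisation is that the right-hand sides are simple matrix-vector operations that map directly onto analog integrator hardware; the paper's contribution is a primal variant with provably faster, globally exponential convergence.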
The unauthorized propagation of information is an important problem on the Internet, especially given the increasing popularity of online social networks. To address this issue, many access control mechanisms have been proposed so far, but there is still a lack of techniques for evaluating the risk of unauthorized information flow within social networks. This paper introduces a probability-based approach to modeling the likelihood that information propagates from one social network user to users who are not authorized to access it. The approach is demonstrated via an example to show how it can be applied in practical cases.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
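An illustrative sketch of the probability-based idea; the paper's exact model is not given in the abstract. Here each edge carries an assumed re-share probability, and Monte Carlo sampling of live-edge graphs estimates the chance that content posted by the source eventually reaches an unauthorized user.

```python
# Sketch: Monte Carlo estimate of unauthorized information-propagation risk.
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
G = nx.DiGraph()
G.add_weighted_edges_from([("alice", "bob", 0.6), ("bob", "carol", 0.4),
                           ("alice", "dave", 0.3), ("dave", "carol", 0.5)])

def leak_probability(G, source, unauthorized, n_trials=20000):
    hits = 0
    for _ in range(n_trials):
        H = nx.DiGraph()
        H.add_nodes_from(G)
        H.add_edges_from((u, v) for u, v, w in G.edges(data="weight")
                         if rng.random() < w)     # sample one propagation outcome
        hits += unauthorized in nx.descendants(H, source)
    return hits / n_trials

print(f"P(content reaches carol) ~ {leak_probability(G, 'alice', 'carol'):.3f}")
```

For this small graph the estimate should be near the exact value 1 - (1 - 0.6*0.4)(1 - 0.3*0.5) = 0.354, since the two paths to the unauthorized user are independent.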
Asynchronous telemedicine systems face many challenges related to information security, as the patient's sensitive information and data on medicine dosage are transmitted over a network when monitoring patients and controlling asynchronous telemedical IoT devices. This information may be modified or spied on by a malicious adversary. To make asynchronous telemedicine systems more secure, the authors present a proxy-based solution against data modification and spying attacks on web-based telemedical applications. By obfuscating the executable code of a web application and by continuously and dynamically changing the obfuscation, the authors' solution makes it more difficult for a piece of malware to attack its target. They use a constructive research approach: they characterize the threat, present an outline of a proposed solution, and discuss the benefits and limitations of the proposed solution. Cyber-attacks targeting information related to patient care are a serious threat in today's telemedicine; if disregarded, these attacks have negative implications for patient safety and quality of care.
Research output: Contribution to journal › Article › Scientific › peer-review
Microservices is an architectural style that is increasing in popularity. However, there is still a lack of understanding of how to adopt a microservice-based architectural style. We aim at characterizing different microservice architectural style patterns and the principles that guide their definition. We conducted a systematic mapping study in order to identify reported usages of microservices, and based on these use cases we extracted common patterns and principles. We present two key contributions. Firstly, we identified several agreed microservice architecture patterns that seem widely adopted and are reported in the identified case studies. Secondly, we present these as a catalogue in a common template format, including a summary of the advantages, disadvantages, and lessons learned for each pattern from the case studies. We conclude that different architecture patterns emerge for different migration, orchestration, storage, and deployment settings, under a set of agreed principles.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Light field 3D displays represent a major step forward in visual realism, providing glasses-free spatial vision of real or virtual scenes. Applications that capture and process live imagery must handle data captured by potentially tens to hundreds of cameras and control tens to hundreds of projection engines that make up the human-perceivable 3D light field, using a distributed processing system. The associated massive data processing is difficult to scale beyond a specific number and resolution of images, limited by the capabilities of the individual computing nodes. The authors therefore analyze the bottlenecks and data flow of the light field conversion process and identify possibilities for better scalability. Based on this analysis, they propose two different architectures for distributed light field processing. To avoid using uncompressed video data along the entire processing chain, the authors also analyze how the operation of the proposed architectures can be supported by existing image/video codecs.
Research output: Contribution to journal › Article › Scientific › peer-review
Motivation: One of the most widely used models for analysing genotype-by-environment data is the additive main effects and multiplicative interaction (AMMI) model. Genotype-by-environment data resulting from multi-location trials are usually organized in two-way tables with genotypes in the rows and environments (location-year combinations) in the columns. The AMMI model applies singular value decomposition (SVD) to the residuals of a specific linear model to decompose the genotype-by-environment interaction (GEI) into a sum of multiplicative terms. However, SVD, being a least-squares method, is highly sensitive to contamination, and the presence of even a single outlier, if extreme, may draw the leading principal component towards itself, resulting in possible misinterpretations and, in turn, bad practical decisions. Since the distribution of these data, as in many other real-life studies, is usually not normal, owing to outlying observations resulting from measurement errors or from individual intrinsic characteristics, robust SVD methods have been suggested to overcome this handicap. Results: We propose a robust generalization of the AMMI model (the R-AMMI model) that overcomes the fragility of its classical version when the data are contaminated. Here, robust statistical methods replace the classical ones to model, structure and analyse GEI. The performance of the robust extensions of the AMMI model is assessed through a Monte Carlo simulation study in which several contamination schemes are considered. Applications to two real plant datasets are also presented to illustrate the benefits of the proposed methodology, which can be broadened to both animal and human genetics studies. Availability and implementation: Source code implemented in R is available in the supplementary material under the function r-AMMI.
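For readers unfamiliar with the classical AMMI decomposition that R-AMMI robustifies, the following Python sketch shows the non-robust baseline on a toy two-way table: additive main effects are removed and the residual GEI matrix is approximated by a truncated SVD. In the robust version, the means and the SVD would be replaced by robust counterparts; all values below are invented.

```python
import numpy as np

# Toy genotype-by-environment table (rows: genotypes, cols: environments).
Y = np.array([[5.1, 6.0, 4.8],
              [4.2, 5.5, 4.0],
              [6.3, 7.1, 5.9],
              [3.9, 5.0, 3.7]])

mu = Y.mean()
g = Y.mean(axis=1) - mu               # genotype main effects
e = Y.mean(axis=0) - mu               # environment main effects
R = Y - mu - g[:, None] - e[None, :]  # GEI residual matrix

U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 1                                 # number of multiplicative terms kept
GEI_hat = U[:, :k] * s[:k] @ Vt[:k]   # rank-1 (AMMI1) approximation of GEI
print("singular values:", s)
print("AMMI1 fitted GEI:\n", GEI_hat)
```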
Research output: Contribution to journal › Article › Scientific › peer-review
Most applications and services rely on central authorities. This introduces a single point of failure to the system. The central authority must be trusted to keep the data stored by the application available at any given time. More importantly, the privacy of the user depends on the service provider's ability to keep the data safe. A decentralized system could be a solution for removing the dependency on a central authority. Moreover, due to the rapid growth of mobile device usage, decentralization must not be limited to desktop computers. In this work, we study the possibility of using mobile devices as a decentralized file-sharing platform without any central authorities. We implemented Asterism, a peer-to-peer file-sharing mobile application based on the Inter-Planetary File System, and validated the results by deploying the application and measuring its network usage and power consumption on multiple different devices. Results show that mobile devices can be used to implement a worldwide distributed file-sharing network. However, the file-sharing application generated large amounts of network traffic even when no files were shared, caused by the chattiness of the underlying peer-to-peer network protocol. The constant network traffic prevented the mobile devices from entering deep sleep mode, and consequently the battery life of the devices was greatly degraded.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper presents recent advancements in the control of multiple-degree-of-freedom hydraulic robotic manipulators. A literature review is performed on their control, covering both free-space and constrained motions of serial and parallel manipulators. Stability-guaranteed control system design is the primary requirement for all control systems, so this paper pays special attention to such systems. An objective evaluation of the effectiveness of different methods and the state of the art in a given field is one of the cornerstones of scientific research and progress. For this purpose, the maximum position tracking error |e|max and a performance indicator ρ (the ratio of |e|max to the maximum velocity) are used to evaluate and benchmark different free-space control methods in the literature. These indicators show that stability-guaranteed nonlinear model-based control designs have resulted in the most advanced control performance. In addition to stable closed-loop control, the lack of energy efficiency is another significant challenge in hydraulic robotic systems. This paper pays special attention to both challenges and discusses their reciprocal contradiction. Potential solutions to improve system energy efficiency without deteriorating control performance are discussed. Finally, open problems for hydraulic robotic systems are defined and future trends projected.
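A minimal worked example of the benchmarking indicator, with illustrative (not published) values:

```python
# rho = maximum position tracking error / maximum velocity of the motion.
e_max = 0.012   # maximum position tracking error [m] (illustrative value)
v_max = 0.30    # maximum velocity [m/s] (illustrative value)
rho = e_max / v_max
print(f"rho = {rho * 1000:.1f} mm/(m/s)")  # lower is better
```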
Research output: Contribution to journal › Review Article › Scientific › peer-review
In this paper, a contiguous carrier aggregation scheme for downlink transmissions in an inband full-duplex cellular network is analyzed. In particular, we consider a scenario where the base station transmits over a wider bandwidth than the mobiles, while both parties still use the same center frequency. As a result, the mobiles must cancel their own self-interference (SI) over a wider bandwidth, compared to a situation where the uplink and downlink frequency bands are symmetric. Furthermore, due to the inherent RF impairments in the mobile devices, nonlinear modeling of the SI is required in the digital domain to fully cancel it over the whole reception bandwidth. The feasibility of the proposed scheme is demonstrated with real-life RF measurements using two different bandwidths. In both cases, it is shown that the SI can be attenuated below the receiver noise floor over the whole reception bandwidth.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
A novel system for the detection and tracking of vehicles from a single car-mounted camera is presented. The core of the system is a pair of high-performance vision algorithms, the WaldBoost detector [1] and the TLD tracker [2], scheduled so that real-time performance is achieved. The vehicle monitoring system is evaluated on a new dataset collected on Italian motorways, which is provided with approximate ground truth (GT0) obtained from laser scans. For a wide range of distances, the recall and precision of detection for cars are excellent. Statistics for trucks are also reported. The dataset with the ground truth is made public.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Today, the rapid adoption of mobile social networking is changing how and where humans communicate. As a result, in recent years we have been increasingly moving from physical (e.g., face-to-face) to virtual interaction. However, there is also a new emerging category of social applications that takes advantage of both worlds, using virtual interaction to enhance physical interaction. This novel form of networking is enabled by D2D communication between the laptops, smartphones, and wearables of persons in proximity to each other. Unfortunately, it has remained limited by the fact that most people are simply not aware of the many potential virtual opportunities in their proximity at any given time, a result of the very real digital privacy and security concerns surrounding direct communication between stranger devices. Fortunately, these concerns can be mitigated with the help of a centralized trusted entity, such as a cellular service provider, which can not only authenticate and protect the privacy of devices involved in D2D communication, but also facilitate the discovery of device capabilities and their available content. This article offers an extensive research summary of this type of cellular-assisted D2D communication, detailing the enabling technology and its implementation, relevant usage scenarios, security challenges, and user experience observations from large-scale deployments.
Research output: Contribution to journal › Article › Scientific › peer-review
Automatic word count estimation (WCE) from audio recordings can be used to quantify the amount of verbal communication in a recording environment. One key application of WCE is to measure language input heard by infants and toddlers in their natural environments, as captured by daylong recordings from microphones worn by the infants. Although WCE is nearly trivial for high-quality signals in high-resource languages, daylong recordings are substantially more challenging due to the unconstrained acoustic environments and the presence of near- and far-field speech. Moreover, many use cases of interest involve languages for which reliable ASR systems or even well-defined lexicons are not available. A good WCE system should also perform similarly for low- and high-resource languages in order to enable unbiased comparisons across different cultures and environments. Unfortunately, the current state-of-the-art solution, the LENA system, is based on proprietary software and has only been optimized for American English, limiting its applicability. In this paper, we build on existing work on WCE and present the steps we have taken towards a freely available system for WCE that can be adapted to different languages or dialects with a limited amount of orthographically transcribed speech data. Our system is based on language-independent syllabification of speech, followed by a language-dependent mapping from syllable counts (and a number of other acoustic features) to the corresponding word count estimates. We evaluate our system on samples from daylong infant recordings from six different corpora consisting of several languages and socioeconomic environments, all manually annotated with the same protocol to allow direct comparison. We compare a number of alternative techniques for the two key components in our system: speech activity detection and automatic syllabification of speech. As a result, we show that our system can reach relatively consistent WCE accuracy across multiple corpora and languages (with some limitations). In addition, the system outperforms LENA on three of the four corpora consisting of different varieties of English. We also demonstrate how an automatic neural network-based syllabifier, when trained on multiple languages, generalizes well to novel languages beyond the training data, outperforming two previously proposed unsupervised syllabifiers as a feature extractor for WCE.
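The language-dependent mapping step can be illustrated with a minimal Python sketch: given detected syllable counts and reference word counts from a small transcribed sample, fit a linear mapping and apply it to new segments. The numbers are toy data, and the actual system uses additional acoustic features alongside the syllable counts.

```python
import numpy as np

# Toy calibration data: detected syllables vs. transcribed word counts
# for a handful of short speech segments.
syll = np.array([12, 30, 7, 22, 15, 40])
words = np.array([8, 21, 5, 15, 10, 28])

a, b = np.polyfit(syll, words, 1)      # least-squares linear mapping
print(f"words ~ {a:.2f} * syllables + {b:.2f}")

# Apply the mapping to new, untranscribed segments.
new_syll = np.array([18, 33])
print("estimated word counts:", a * new_syll + b)
```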
Research output: Contribution to journal › Article › Scientific › peer-review
In the AvanTomography project, a compact, high-performance module was developed for axial positron emission mammography, which can be integrated with X-ray mammography. With its axial crystal orientation, AvanTomography can achieve uniform spatial resolution and eliminate parallax error by unambiguously detecting the location of the positron annihilation. The compact design of the module enables a cost- and space-efficient system for breast screening. Various configurations, plate or full ring, can be obtained by using multiple modules, allowing the screening of axillary and mammary regions with a single scanner position. In this project, a 6-module system was constructed and tested with a ²²Na point source. Energy calibration was performed and initial measurements of energy resolution were conducted.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Multidimensional database cubes are easier to design and use when the dimension attributes and fact table measures are in one-to-many relationships in the data warehouse. The anomalies that can arise when users browse a cube that incorporates dimensions with many-to-many relationships are widely documented by practitioners. We categorise many-to-many relationships in terms of their associated design problems and we present two techniques for modelling restricted forms of many-to-many relationships. We demonstrate that the techniques can avoid anomalies and we discuss performance implications.
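The classic double-counting anomaly, and the weighting idea behind one common remedy, can be shown with a toy bridge table in Python. This is a generic illustration of the problem the paper documents, not necessarily either of the paper's two specific techniques; all tables and values are invented.

```python
import pandas as pd

# A bank account shared by two customers: joining the fact table through
# the customer dimension without weights double-counts the balance.
facts = pd.DataFrame({'account': ['A1'], 'balance': [1000]})
bridge = pd.DataFrame({'account': ['A1', 'A1'],
                       'customer': ['Ann', 'Bob'],
                       'weight': [0.5, 0.5]})   # allocation factors

joined = facts.merge(bridge, on='account')
print("naive total:", joined['balance'].sum())  # 2000 -> anomaly
print("weighted total:",
      (joined['balance'] * joined['weight']).sum())  # 1000 -> correct
```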
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Purpose – The purpose of this paper is to investigate the extent, drivers, and conditions underlying backshoring in the Finnish manufacturing industry, comparing the results to the wider ongoing relocation of production in the international context. Design/methodology/approach – A survey of 229 Finnish manufacturing firms reveals the background, drivers, and patterns of offshoring and backshoring. Findings – Companies that had transferred their production back to Finland were more commonly in industries with relatively higher technology intensity, were typically larger than the no-movement companies, and had a higher number of plants. They also more commonly reported having a corporate-wide strategy for guiding production location decisions. Research limitations/implications – Backshoring activity in the small and open economy of Finland seems to be higher than reported in earlier studies of larger countries. The findings suggest a transformation in the manufacturing industries, with gradual replacement of labor-intensive and lower technology-intensive industries by higher technology-intensive industries. Practical implications – Moving production across national borders is one option in firms' strategies to stay competitive. Companies must carefully consider the relevance of various decision-making drivers when determining strategies for their production networks. Social implications – Manufacturing industries have traditionally been important for employment in the relatively small and open economies of the Nordic countries. From the social perspective, it is important to understand the ongoing transformation and its implications. Originality/value – Few empirical studies are available on the ongoing backshoring movement that utilize data from company decision makers instead of macroeconomic factors.
Research output: Contribution to journal › Article › Scientific › peer-review
Motivation: Identification of somatic DNA copy number alterations (CNAs) and significant consensus events (SCEs) in cancer genomes is a main task in discovering potential cancer-driving genes such as oncogenes and tumor suppressors. The recent development of SNP array technology has facilitated studies on copy number changes at a genome-wide scale with high resolution. However, existing copy number analysis methods are oblivious to normal cell contamination and cannot distinguish between contributions of cancerous and normal cells to the measured copy number signals. This contamination could significantly confound downstream analysis of CNAs and affect the power to detect SCEs in clinical samples. Results: We report here a statistically principled in silico approach, Bayesian Analysis of COpy number Mixtures (BACOM), to accurately estimate genomic deletion type and normal tissue contamination, and accordingly recover the true copy number profile in cancer cells. We tested the proposed method on two simulated datasets, two prostate cancer datasets and The Cancer Genome Atlas high-grade ovarian dataset, and obtained very promising results supported by the ground truth and biological plausibility. Moreover, based on a large number of comparative simulation studies, the proposed method gives significantly improved power to detect SCEs after in silico correction of normal tissue contamination. We develop a cross-platform open-source Java application that implements the whole pipeline of copy number analysis of heterogeneous cancer tissues including relevant processing steps. We also provide an R interface, bacomR, for running BACOM within the R environment, making it straightforward to include in existing data pipelines.
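The contamination-correction step at the heart of BACOM can be illustrated with a minimal sketch: if the measured signal is a mixture of normal cells (copy number 2) and tumour cells, the tumour copy number follows by simple algebra once the contamination fraction is known. BACOM itself estimates that fraction with a Bayesian model of the allele signals; here it is simply assumed, and all numbers are toy values.

```python
import numpy as np

alpha = 0.4                           # fraction of normal cells (assumed known here)
observed = np.array([1.6, 2.0, 2.8])  # measured average copy numbers

# observed = alpha * 2 + (1 - alpha) * tumour  =>  solve for tumour
tumour = (observed - 2 * alpha) / (1 - alpha)
print("corrected tumour copy numbers:", tumour)  # [1.33, 2.0, 3.33]
```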
Research output: Contribution to journal › Article › Scientific › peer-review
Detailed and realistic tree form generators have numerous applications in ecology and forestry. For example, the varying morphology of trees contributes differently to the formation of landscapes, natural habitats of species, and eco-physiological characteristics of the biosphere. Here, we present an algorithm for generating morphological tree "clones" based on a detailed reconstruction of laser scanning data, a statistical measure of similarity, and a plant growth model with simple stochastic rules. The algorithm is designed to produce tree forms, i.e., morphological clones, that are similar (but not identical) with respect to tree-level structure, while varying in fine-scale structural detail. Although we opted for certain choices in our algorithm, individual parts may vary depending on the application, making it a general adaptable pipeline. Namely, we showed that a specific multipurpose procedural stochastic growth model can be algorithmically adjusted to produce morphological clones replicated from a target experimentally measured tree. For this, we developed a statistical measure of similarity (structural distance) between any given pair of trees, which allows for comprehensive comparison of tree morphologies by means of empirical distributions describing the geometrical and topological features of a tree. Finally, we developed a programmable interface to manipulate the data required by the algorithm. Our algorithm can be used in a variety of applications for exploring the morphological potential of growth models (both theoretical and experimental) arising in all sectors of plant science research.
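One component of such a distribution-based structural distance can be sketched as follows. The branch-length values are toy data, and the full measure in the paper aggregates several geometrical and topological feature distributions rather than the single one shown here.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Empirical branch-length distributions for two trees (toy data).
branch_len_tree_a = np.array([1.2, 0.8, 0.5, 0.4, 0.3, 0.2])
branch_len_tree_b = np.array([1.1, 0.9, 0.6, 0.5, 0.25])

# Distance between the two empirical distributions; a full structural
# distance would combine such terms over several features.
d = wasserstein_distance(branch_len_tree_a, branch_len_tree_b)
print(f"structural distance (branch-length component): {d:.3f}")
```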
EXT="Järvenpää, Marko"
Research output: Contribution to journal › Article › Scientific › peer-review
In recent years, halogen bonding has become an important design tool in crystal engineering, supramolecular chemistry and biosciences. The fundamentals of halogen bonding have been studied extensively with high-accuracy computational methods. Due to its non-covalency, the use of triple-zeta (or larger) basis sets is often recommended when studying halogen bonding. However, in the large systems often encountered in supramolecular chemistry and biosciences, large basis sets can make the calculations far too slow. Therefore, small basis sets, which would combine high computational speed and high accuracy, are in great demand. This study focuses on comparing how well density functional theory (DFT) methods employing small, double-zeta basis sets can estimate halogen-bond strengths. Several methods with triple-zeta basis sets are included for comparison. Altogether, 46 DFT methods were tested using two data sets of 18 and 33 halogen-bonded complexes for which the complexation energies have been previously calculated with the high-accuracy CCSD(T)/CBS method. The DGDZVP basis set performed far better than other double-zeta basis sets, and it even outperformed the triple-zeta basis sets. Due to its small size, it is well-suited to studying halogen bonding in large systems.
Research output: Contribution to journal › Article › Scientific › peer-review
A number of high-quality depth image-based rendering (DIBR) pipelines have been developed to reconstruct a 3D scene from several images taken from known camera viewpoints. Due to the specific limitations of each technique, their output is prone to artifacts, so quality cannot be ensured. To improve the quality of the most critical and challenging image areas, an exhaustive comparison is required. In this paper, we consider three questions in benchmarking the quality performance of eight DIBR techniques on light fields. First, how does the density of original input views affect the quality of the rendered novel views? Second, how does the disparity range between adjacent input views impact the quality? Third, how does each technique behave for different object properties? We compared and evaluated the results visually as well as quantitatively (PSNR, SSIM, AD, and VDP2). The results show that some techniques outperform others in different disparity ranges. The results also indicate that using more views does not necessarily result in visually higher quality for all critical image areas. Finally, we show a comparison for scenes of different complexity, including non-Lambertian objects.
jufoid=71479
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The future of industrial applications is shaped by intelligent moving IoT devices, such as flying drones, advanced factory robots, and connected vehicles, which may operate (semi-)autonomously. In these challenging scenarios, dynamic radio connectivity at high frequencies, augmented with timely positioning-related information, becomes instrumental for improving communication performance and facilitating efficient computation offloading. Our work reviews the main research challenges and reveals open implementation gaps in IIoT applications that rely on location awareness and multi-connectivity in super-high and extremely-high frequency bands. It further conducts a rigorous numerical investigation to confirm the potential of precise device localization in the emerging IIoT systems. We focus on the positioning-aided benefits made available to multi-connectivity IIoT device operation at 28 GHz, which notably improve data transfer rates, communication latency, and the extent of control overhead.
Research output: Contribution to journal › Article › Scientific › peer-review
The internet is evolving into a full-scale distributed service platform, offering a plethora of services from communications to business, entertainment, social connectivity and much more. The range of services and applications offered is diversifying, with new applications constantly emerging, for example utility-based computing (e.g. HPC and cloud computing), which relies heavily on data-centre resources. These services will be more dynamic and sophisticated, providing a range of complex capabilities, which puts a further burden on data-centres in terms of supporting and managing these services. At the same time, society is becoming acutely aware of the significant energy burden that the communications industry, and in particular data-centres, represents. With these trends in mind, we propose a biologically inspired service framework that supports services which can autonomously carry out management functions. We then apply this framework to address the emerging problem of a sustainable future internet by autonomously migrating services to greener locations.
Research output: Contribution to journal › Article › Scientific › peer-review
Following its huge growth in usage over the last 10 years, the Internet has become a critical business and social tool. This popularity will continue to rise, with the Internet evolving into a full-scale distributed service platform offering a plethora of services from communications to business, entertainment and much more. These services will be more dynamic and sophisticated, providing a range of complex capabilities. However, this dynamic service environment will lead to overwhelming management problems if not dealt with adequately. At the same time, society is now acutely aware of the significant energy burden that the communications industry represents. With these two trends in mind, we propose a biologically-inspired service framework which supports services in intelligently solving a number of management problems. As a case study application, we then use this framework to address the new, emerging problem of a sustainable future internet by migrating services to new, greener locations.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Currently, a large number of Internet redesign activities are being discussed in the research community. While today's Internet was initially planned as a datagram-oriented communication network among research facilities, it has grown and evolved to accommodate unexpected diversity in services and applications. For the future Internet, this trend is anticipated to intensify. Such developments demand that the architecture of the new-generation Internet be designed in a dynamic, modular, and adaptive way. Features like these can often be observed in biological processes that serve as inspiration for designing new cooperative architectural concepts. Our contribution in this article is twofold. First, unlike previous discussions on biologically inspired network control mechanisms, we do not limit ourselves to a single method, but consider ecosystems and coexisting environments of entities that can cooperate based on biological principles. Second, we illustrate our grand view by not only taking inspiration from biology in the design process, but also sketching a possible way to implement biologically driven control in a future Internet architecture.
Research output: Contribution to journal › Article › Scientific › peer-review
This study investigated the effect of implant thickness and material on the deformation and stress distribution within different components of cranial implant assemblies. Using the finite element method, two cranial implants, differing in size and shape, were simulated at four thicknesses (1, 2, 3 and 4 mm) under three loading scenarios. The implant assembly model included the detailed geometries of the mini-plates and micro-screws and was simulated using a sub-modeling approach. Statistical assessments based on the Design of Experiments methodology and on multiple regression analysis revealed that peak stresses in the components are influenced primarily by implant thickness, while the effect of implant material is secondary. In contrast, implant deflection is influenced predominantly by implant material, followed by implant thickness. The highest deformation under a 50 N load was observed in the thinnest (1 mm) Polymethyl Methacrylate implant (small defect: 0.296 mm; large defect: 0.390 mm). The thinnest Polymethyl Methacrylate and Polyether Ether Ketone implants also generated stresses in the implants that can potentially breach the materials' yield limit. In terms of stress distribution, a change in implant thickness had a more significant impact on implant performance than a change in the Young's modulus of the implant material. The results indicated that the stresses are concentrated at the fixation locations; therefore, the detailed models of mini-plates and micro-screws implemented in the finite element simulation provided better insight into the mechanical performance of the implant-skull system.
Research output: Contribution to journal › Article › Scientific › peer-review
This paper is dedicated to a novel bispectrum-based demodulation technique using triple-channel heterodyning of triplet-signals. The test statistics used for triplet-signal detection and discrimination are evaluated in the form of bimagnitude peak values. An experimental study of noise immunity in a bispectrum-based digital communication system is performed for the suggested triple-channel heterodyning technique. Bit error rate (BER) values are computed under additive Gaussian noise in the radio communication link for wide variations of the input signal-to-noise ratio (SNR).
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The task of blind estimation of multiplicative noise (speckle) variance in multi-look images acquired by synthetic aperture radars is considered. It is shown that several factors affect the accuracy of such estimation, the main ones being the spatial correlation of the speckle, the complexity of the analyzed image, and the peculiarities of the method used. Spatial-domain and spectral-domain approaches are analyzed. It is shown that for both approaches the spatial correlation of the speckle has to be estimated and taken into account. Results for real-life TerraSAR-X data are presented as illustrations and for analyzing the methods' accuracy.
EXT="Lukin, V. V."
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this paper, a new method for blind estimation of noise variance in a single highly textured image is proposed. An input image is divided into 8×8 blocks and the discrete cosine transform (DCT) is performed for each block. A part of the 64 DCT coefficients with the lowest energy, calculated over all blocks, is selected for further analysis. For these DCT coefficients, a robust estimate of the noise variance is calculated. Based on the obtained estimate, blocks having very large values of local variance, calculated only for the selected DCT coefficients, are excluded from further analysis. These two steps (estimation of noise variance and exclusion of blocks) are iteratively repeated three times. For the verification of the proposed method, a new noise-free test image database, TAMPERE17, consisting of many highly textured images, was designed. It is shown for this database and noise variance values from the set {25, 49, 100, 225} that the proposed method provides an estimation root mean square error approximately two times lower than other methods.
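A simplified sketch of the core idea (blockwise DCT plus a robust scale estimate) is given below. The published method's low-energy coefficient selection and iterative block exclusion are omitted, and the flat test image is synthetic.

```python
import numpy as np
from scipy.fftpack import dct

rng = np.random.default_rng(0)
true_sigma = 10.0
img = 128 + rng.normal(0, true_sigma, (64, 64))   # flat image + noise

coeffs = []
for i in range(0, img.shape[0], 8):
    for j in range(0, img.shape[1], 8):
        block = img[i:i + 8, j:j + 8]
        # 2-D orthonormal DCT of the 8x8 block
        d = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
        coeffs.append(d[4:, 4:].ravel())          # high-frequency quadrant
coeffs = np.concatenate(coeffs)

# Robust sigma estimate via the median absolute deviation (MAD).
sigma_hat = 1.4826 * np.median(np.abs(coeffs - np.median(coeffs)))
print(f"estimated sigma: {sigma_hat:.2f} (true: {true_sigma})")
```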
jufoid=84313
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
We introduce a content-adaptive approach to image denoising where the filter design is based on mean opinion scores (MOSs) from preliminary experiments with volunteers who evaluated the quality of denoised image fragments. This allows tuning the filter parameters so as to improve the perceptual quality of the output image, implicitly accounting for the peculiarities of the human visual system (HVS). A modification of the BM3D image denoising filter (Dabov et al., IEEE TIP, 2007), namely BM3DHVS, is proposed based on this framework. We show that it yields higher visual quality than the conventional BM3D. Further, we have also analyzed the MOSs against popular full-reference visual quality metrics such as SSIM (Wang et al., IEEE TIP, 2004), its extension FSIM (Zhang et al., IEEE TIP, 2011), and the no-reference IL-NIQE (Zhang et al., IEEE TIP, 2015) over each image fragment. Both the Spearman and the Kendall rank-order correlations show that these metrics do not correspond well to human perception. This calls for new visual quality metrics tailored to the benchmarking and optimization of image denoising methods.
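The rank-correlation analysis can be reproduced in a few lines; the MOS and metric values below are toy stand-ins, not the experimental data.

```python
import numpy as np
from scipy.stats import spearmanr, kendalltau

mos    = np.array([4.1, 3.2, 2.5, 4.6, 1.8, 3.9])        # subjective scores
metric = np.array([0.92, 0.88, 0.81, 0.95, 0.70, 0.90])  # e.g. SSIM values

rho, _ = spearmanr(mos, metric)   # Spearman rank-order correlation
tau, _ = kendalltau(mos, metric)  # Kendall rank-order correlation
print(f"Spearman rho = {rho:.3f}, Kendall tau = {tau:.3f}")
```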
EXT="Danielyan, Aram"
EXT="Lukin, Vladimir"
jufoid=84313
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Recent developments in nano- and biotechnology enable promising therapeutic nanomachines (NMs) that operate in inter- or intracellular areas of the human body. Networks of such therapeutic NMs, body area nanonetworks (BAN²s), also empower sophisticated nanomedicine applications. In these applications, therapeutic NMs share information to perform computation and logic operations, and make decisions to treat complex diseases. Hence, one of the most challenging subjects for these sophisticated applications is the realization of BAN²s through a nanoscale communication paradigm. In this article, we introduce the concept of a BAN² with molecular communication, where messenger molecules are used as communication carriers from a sender to a receiver NM. The current state of the art of molecular communication and BAN²s in nanomedicine applications is first presented. Then communication-theoretical efforts are reviewed, and open research issues are given. The objective of this work is to introduce this novel and interdisciplinary research field and highlight major barriers toward its realization from the viewpoint of communication theory.
Research output: Contribution to journal › Article › Scientific › peer-review
We present a study of using an embodied health information system in developing regions, focusing on users not familiar with technology. We designed and developed a health information system with two gesture-based selection techniques: pointing to a screen and touching one's own body part. We evaluated the prototype in a user study with 37 semi-literate and literate participants. Our results indicate a clear preference (76%) for touching in the healthcare domain. Based on our observations and user feedback, we present four design guidelines for developing embodied systems for the developing world: designing body-centric interaction to overcome literacy and technological proficiency barriers, addressing misconceptions of system behaviors among users not familiar with technology, understanding the effects of cultural constraints on interaction, and utilizing interactive virtual avatars to connect with the users.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Contemporary urban environments are in prompt need of means for intelligent decision-making, where a crucial role belongs to smart video surveillance systems. While existing deployments of stationary monitoring cameras already deliver notable societal benefits, the concept of massive video surveillance over connected vehicles that we contribute in this article may further augment these important capabilities. We therefore introduce the envisioned system concept, discuss its implementation, outline the high-level architecture, and identify major data flows, while also offering insights into the corresponding design and deployment aspects. Our case study confirms the potential of the described crowdsourced vehicular system to effectively complement and eventually surpass even the best of today's static video surveillance setups. We expect that our proposal will become of value and integrate seamlessly into the future Internet-of-Things landscape, thus enabling a plethora of advanced urban applications.
Research output: Contribution to journal › Article › Scientific › peer-review
Modularisation, product platforms, product families and product configuration are efficient product structuring tactics in mass customisation. Industry needs descriptions of how the engineering should be done in this context. We suggest that key engineering concepts in this field are partitioning logic, set of modules, interfaces, architecture and configuration knowledge. A literature review reveals that methods consider these concepts partly or with different combinations, but considering all of them is rare. Therefore, a design method known as the Brownfield Process is presented. The method is applied to an industrial case in which the aim was rationalisation of existing product variety towards a modular product family that enables product configuration. We suggest that the method is valuable in cases with similar goals.
Research output: Contribution to journal › Article › Scientific › peer-review
The INSULAtE project aims to develop a common protocol for assessing the effects of improving the energy efficiency (EE) of dwellings on indoor environmental quality (IEQ) and public health in Europe. So far, measurement data on IEQ parameters (PM, CO, CO2, VOCs, formaldehyde, NO2, radon, T and RH) and questionnaire data from occupants have been collected from 16 multifamily buildings (94 apartments) in Finland and 20 (96 apartments) in Lithuania before renovation. Most parameters were within recommended limits; however, the data revealed different baselines (before renovation) for each country, both in terms of the IEQ parameters and the respondents' satisfaction with their residence and indoor air quality. Post-renovation data (from one building in each country) showed potential changes in the measured parameters, while further analyses are needed once all the data have been collected. The results of this project will be used to develop guidance and support the implementation of the related policies.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Business and academic research frequently highlights the power of electronic word of mouth, relying on the knowledge that online customer ratings and reviews influence consumer decision making. Numerous studies in different disciplines have examined the effectiveness of electronic word-of-mouth communication. Previous, typically small-sample studies suggest that positive electronic word of mouth increases sales and that the effects depend on the volume and valence of reviews and ratings. This study's contribution lies in testing the relationship between electronic word of mouth and the sales of applications in a mobile application ecosystem (Google Play) with an extensive dataset (over 260 million customer ratings; 18 months). The results show that higher valence of customer ratings correlates statistically significantly with higher sales. The volume of ratings correlates positively with sales in the long term but negatively in the short term. Furthermore, the relationship between electronic word of mouth and sales seems to become more important as the price of the application increases. The findings also underline the importance of the choice of measurement period in such studies.
EXT="Aarikka-Stenroos, Leena"
EXT="Hyrynsalmi, Sami"
Versio ja lupa ok 12.1.2016, lupa annettu lomakkeella KK
Research output: Contribution to journal › Article › Scientific › peer-review
Industrial automation deployments constitute challenging environments where moving IoT machines may produce high-definition video and other heavy sensor data during surveying and inspection operations. Transporting massive content to the edge network infrastructure and eventually to the remote human operator requires reliable, high-rate radio links supported by intelligent data caching and delivery mechanisms. In this work, we address the challenges of content dissemination in characteristic factory automation scenarios by proposing to engage moving industrial machines as D2D caching helpers. With the goal of improving the reliability of high-rate mmWave data connections, we introduce alternative content dissemination modes and construct a novel mobility-aware methodology that helps develop predictive mode selection strategies based on the anticipated radio link conditions. We also conduct a thorough system-level evaluation of representative data dissemination strategies to confirm the benefits of predictive solutions that employ D2D-enabled collaborative caching at the wireless edge to lower content delivery latency and improve data acquisition reliability.
Research output: Contribution to journal › Article › Scientific › peer-review
In this paper, we explore how to better integrate virtual reality viewing to a smartphone. We present novel designs for casual (short-term) immersive viewing of spatial and 3D content, such as augmented and virtual reality, with smartphones. Our goal is to create a simple and low-cost casual-viewing design which could be retrofitted and eventually be embedded into smartphones, instead of using larger spatial viewing accessories. We explore different designs and implemented several prototypes. One prototype uses thin and light near-to-eye optics with a smartphone display, thus providing the user with the functionality of a large, high-resolution virtual display. Our designs also enable 3D user interfaces. Easy interaction through various gestures and other modalities is possible by using the inertial and other sensors and camera of the smartphone. Our preliminary concepts are a starting point for exploring useful constructions and designs for such usage.
EXT="Rakkolainen, Ismo"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Cell cultivation devices that mimic the complex microenvironment of cells in the human body are of high importance for the future of stem cell research. This paper introduces a prototype of an electromechanical stimulation platform as a modular expansion of an earlier-developed mechanical stimulation device for stem cell research. A solution-processable ink of PEDOT:PSS and graphene is studied as a suitable material for the fabrication of transparent stretchable electrodes. Challenges of electrode integration on a flexible membrane using this material are critically discussed.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Context: Global software development (GSD), although now a norm in the software industry, carries with it enormous challenges mostly regarding communication and coordination. Aforementioned challenges are highlighted when there is a need to transfer knowledge between sites, particularly when software artifacts assigned to different sites depend on each other. The design of the software architecture and associated task dependencies play a major role in reducing some of these challenges. Objective: The current literature does not provide a cohesive picture of how the distributed nature of software development is taken into account during the design phase: what to avoid, and what works in practice. The objective of this paper is to gain an understanding of software architecting in the context of GSD, in order to develop a framework of challenges and solutions that can be applied in both research and practice. Method: We conducted a systematic literature review (SLR) that synthesises (i) challenges which GSD imposes on software architecture design, and (ii) recommended practices to alleviate these challenges. Results: We produced a comprehensive set of guidelines for performing software architecture design in GSD based on 55 selected studies. Our framework comprises nine key challenges with 28 related concerns, and nine recommended practices, with 22 related concerns for software architecture design in GSD. These challenges and practices were mapped to a thematic conceptual model with the following concepts: Organization (Structure and Resources), Ways of Working (Architecture Knowledge Management, Change Management and Quality Management), Design Practices, Modularity and Task Allocation. Conclusion: The synthesis of findings resulted in a thematic conceptual model of the problem area, a mapping of the key challenges to practices, and a concern framework providing concrete questions to aid the design process in a distributed setting. This is a first step in creating more concrete architecture design practices and guidelines.
Research output: Contribution to journal › Article › Scientific › peer-review
Digital Image Correlation (DIC) was used to study the anisotropic behavior of the thin-walled right ventricle of the human heart. Strains measured with Speckle Tracking Echocardiography (STE) were compared with the DIC data. Both DIC and STE were used to measure the longitudinal strains of the right ventricle at the beginning of an open-heart surgery as well as after the cardiopulmonary bypass. Based on the results, the maximum end-systolic strains obtained with DIC and STE change similarly during the surgery, with less than 10% difference. The difference is largely due to errors in matching the longitudinal direction between the two methods, the sensitivity of the measurement to the positioning of the virtual extensometer in both STE and DIC, and the physiological difference of the measurements, as DIC measures the top surface of the heart whereas STE obtains the data from below. The anisotropy of the RV was measured using full-field principal strains acquired from the DIC displacement fields. The full-field principal strains cover the entire region of interest instead of just the two points used by the virtual extensometer approach of STE. The principal strains are not direction-dependent measures and are therefore more independent of the anatomy of the patient and the exact positioning of the virtual strain gage or the STE probe. The results show that the longitudinal strains alone are not enough to fully characterize the behavior of the heart, as the deformation of the heart can be very anisotropic, and the anisotropy changes during the surgery and from patient to patient.
Research output: Contribution to journal › Article › Scientific › peer-review
Motivation: Single-molecule measurements of live Escherichia coli transcription dynamics suggest that this process ranges from sub- to super-Poissonian, depending on the conditions and on the promoter. For its accurate quantification, we propose a model that accommodates all these settings, and statistical methods to estimate the model parameters and to select the relevant components. Results: The new methodology has improved accuracy and avoids overestimating the transcription rate due to finite measurement time, by exploiting unobserved data and by accounting for the effects of discrete sampling. First, we use Monte Carlo simulations of models based on measurements to show that the methods are reliable and offer substantial improvements over previous methods. Next, we apply the methods on measurements of transcription intervals of different promoters in live E. coli, and show that they produce significantly different results, both in low- and high-noise settings, and that, in the latter case, they even lead to qualitatively different results. Finally, we demonstrate that the methods can be generalized for other similar purposes, such as for estimating gene activation kinetics. In this case, the new methods allow quantifying the inducer uptake dynamics as opposed to just comparing them between cases, which was not previously possible. We expect this new methodology to be a valuable tool for functional analysis of cellular processes using single-molecule or single-event microscopy measurements in live cells.
Research output: Contribution to journal › Article › Scientific › peer-review
The use of highly directional antenna radiation patterns for both the access point (AP) and the user equipment (UE) in the emerging millimeter-wave (mmWave)-based New Radio (NR) systems is inherently beneficial for unicast transmissions by providing an extension of the coverage range and eventually resulting in lower required NR AP densities. On the other hand, efficient resource utilization for serving multicast sessions demands narrower antenna directivities, which yields a trade-off between these two types of traffic that eventually affects the system deployment choices. In this work, with tools from queuing theory and stochastic geometry, we develop an analytical framework capturing both the distance- and traffic-related aspects of the NR AP serving a mixture of multicast and unicast traffic. Our numerical results indicate that the service process of unicast sessions is severely compromised when (i) the fraction of unicast sessions is significant, (ii) the spatial session arrival intensity is high, or (iii) the service time of the multicast sessions is longer than that of the unicast sessions. To balance the multicast and unicast session drop probabilities, an explicit prioritization is required. Furthermore, for a given fraction of multicast sessions, lower antenna directivity at the NR AP, characterized by a smaller NR AP inter-site distance (ISD), leads to a better performance in terms of multicast and unicast session drop probabilities. Aiming to increase the ISD while maintaining the drop probability at the target level, the serving of multicast sessions is possible over the unicast mechanisms, but it results in worse performance for the practical NR AP antenna configurations. However, this approach may become feasible as arrays with higher numbers of antenna elements become available. Our developed mathematical framework can be employed to estimate the parameters of the NR AP when handling a mixture of multicast and unicast sessions as well as to derive a lower bound on the density of the NR APs needed to serve a certain mixture of multicast and unicast traffic types with their target performance requirements.
Research output: Contribution to journal › Article › Scientific › peer-review
A visual data flow language (VDFL) allows the graphical presentation of a computer program in the form of a directed graph, where data tokens travel through the arcs of the graph, and the vertices represent, e.g., the input token streams, calculations, comparisons, and conditionals. Amongst their benefits, VDFLs allow parallel computing, and they are presumed to improve the quality of programming due to their intuitive readability. Thus, they are also suitable for computing education. However, the token-based computational model allowing parallel processing may make the programs more complicated than they look. We propose a method for checking properties of VDFL programs using finite state processes (FSPs) with a commonly available labelled transition system analyser (LTSA) tool. The method can also be used to study different VDFL programming constructs for the development or re-design of VDFLs. For our method, we have implemented a compiler that compiles a textual representation of a VDFL into FSPs.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
A chip-to-package wireless power transfer concept applied to an MMIC and antennas on an LCP substrate is presented. Electromagnetic simulations show the feasibility of the proposed approach. As a benchmarking topology at the working frequency of 35.4 GHz, an Archimedean spiral antenna matched to a heterogeneous transformer, which couples the power received by the antenna to the chip, has been simulated. Transistor-level circuit simulations are also presented for the LNA and the detector, which together will constitute the system-on-chip (SoC) radiometer to be integrated in the LCP-SoP.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
n-Propyl propionate (ProPro) is a compound with several possible industrial applications. However, the current production route for this compound presents several problems, such as downstream purification. Chromatographic separation could be an alternative solution to the downstream purification. In this work, experimental studies of the separation of the ProPro reaction system in a chromatographic fixed-bed unit packed with Amberlyst 46 were performed. The adsorption equilibrium isotherms and the corresponding Langmuir model parameters were determined. A phenomenological model to represent the process was developed and validated against the experimental data. Furthermore, the characterization of the uncertainties of all steps, and their propagation to the model prediction, is proposed; this allowed estimating the model parameters with a reduced number of experiments compared with other reports in the literature, while leading to a statistically more reliable model.
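A minimal sketch of fitting the Langmuir isotherm to equilibrium data, including parameter uncertainties from the fit covariance, is shown below; the concentrations and loadings are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy equilibrium data: liquid concentration [mol/L] vs. loading [mol/kg].
C = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
q = np.array([0.9, 1.5, 2.2, 2.9, 3.4])

def langmuir(C, q_max, K):
    """Langmuir isotherm: q = q_max * K * C / (1 + K * C)."""
    return q_max * K * C / (1 + K * C)

(q_max, K), cov = curve_fit(langmuir, C, q, p0=[4.0, 0.5])
perr = np.sqrt(np.diag(cov))   # standard errors of the fitted parameters
print(f"q_max = {q_max:.2f} ± {perr[0]:.2f}, K = {K:.2f} ± {perr[1]:.2f}")
```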
Research output: Contribution to journal › Article › Scientific › peer-review
We present a binary graph classifier (BGC) for classifying large, unweighted, undirected graphs. The classifier is based on a local decomposition of the graph into generalized trees, one for each node. The obtained trees, forming the tree set of the graph, are then pairwise compared by a generalized tree-similarity algorithm (GTSA), and the resulting similarity scores determine a characteristic similarity distribution of the graph. Classification in this context is defined as mutual consistency of all pure and mixed tree sets and their resulting similarity distributions within a graph class. We demonstrate the application of this method to an artificially generated data set and to data from microarray experiments on cervical cancer.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this paper, a novel nonlinear subspace learning technique for class-specific data representation is proposed. A novel data representation is obtained by applying a nonlinear class-specific data projection to a discriminant feature space, where the data belonging to the class under consideration are enforced to be close to their class representation, while the data belonging to the remaining classes are enforced to be as far as possible from it. A class is represented by an optimized class vector, enhancing class discrimination in the resulting feature space. An iterative optimization scheme is proposed to this end, where both the optimal nonlinear data projection and the optimal class representation are determined in each optimization step. The proposed approach is tested on three problems relating to human behavior analysis: face recognition, facial expression recognition, and human action recognition. Experimental results demonstrate the effectiveness of the proposed approach, since the proposed class-specific reference discriminant analysis outperforms kernel discriminant analysis, kernel spectral regression, and class-specific kernel discriminant analysis, as well as support vector machine-based classification, in most cases.
Research output: Contribution to journal › Article › Scientific › peer-review
Recent advances in image-based object recognition have exploited object proposals to speed up the detection process by reducing the search space. In this paper, we present a novel idea that utilizes true objectness and semantic image filtering (retrieved within the convolutional layers of a Convolutional Neural Network) to propose effective region proposals. Information learned in fully convolutional layers is used to reduce the number of proposals and enhance their localization by producing highly accurate bounding boxes. The greatest benefit of our method is that it can be integrated into any existing approach exploiting edge-based objectness to achieve consistently high recall across various intersection-over-union thresholds. Experiments on the PASCAL VOC 2007 and ImageNet datasets demonstrate that our approach improves two existing state-of-the-art models by significant margins and pushes the boundaries of object proposal generation.
Research output: Contribution to journal › Article › Scientific › peer-review
The motto ‘hands-on exercises are the most efficient means to learn coding’ prevails in the design of the Code ABC hackathons. Hackathons are emergent and challenge-based ways to engage participants. The participants of this study are Finnish comprehensive school teachers who are willing to develop their coding skills. Perceiving hackathon participants as players allows employing the same motivation and engagement theories that game researchers and developers exploit in developing serious games. This paper presents two subsequent Code ABC hackathon iterations, in the autumn of 2017 and the spring of 2018. The development of the hackathon challenges was based on the exercises of the previous semester-long Code ABC MOOC, field-tested since autumn 2015. As data, we use the work returned by the participants (multiple-choice questions, open-ended responses, programming exercises; N = 10 in the first iteration, N = 30 in the second) and the instructors' reflections (N = 5). In particular, we address the topics considered challenging and engaging, and the lessons learned; the analysis utilizes mixed methods. Results show that the hackathons were almost too demanding yet engaging; however, their full potential was left unexploited.
jufoid=85162
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Future home networks are expected to become extremely sophisticated, yet only the most technically adept persons are equipped with the skills to manage them. In this paper, we provide a novel solution for how complex smart home networks can be collaboratively managed with the assistance of operators and third-party experts. Our solution rests on separating the management and control functionalities of the home access points and routers from the actual connectivity, traffic forwarding, and routing operations within the home network. By so doing, we present a novel REST-based architecture in which the management of the home network can be hosted in an entirely separate, external cloud-based infrastructure, which models the network within the home as a resource graph.
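To make the resource-graph idea concrete, here is a minimal sketch (all paths, field names, and the handler are hypothetical illustrations, not taken from the paper) of how a cloud-hosted management plane could expose home network elements as addressable REST resources:

```python
import json

# Hypothetical resource graph modelling the devices inside one home.
resource_graph = {
    "/home/gateway": {"type": "router", "links": ["/home/ap1", "/home/ap2"]},
    "/home/ap1": {"type": "access-point", "ssid": "living-room"},
    "/home/ap2": {"type": "access-point", "ssid": "upstairs"},
}

def handle_get(path):
    # An operator or third-party expert reads one node of the graph;
    # management actions would similarly be PUT/POST requests on nodes.
    return json.dumps(resource_graph.get(path, {}), indent=2)

print(handle_get("/home/gateway"))
```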
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this paper, we study the importance of pre-training for the generalization capability in the color constancy problem. We propose two novel approaches based on convolutional autoencoders: an unsupervised pre-training algorithm using a fine-tuned encoder and a semi-supervised pre-training algorithm using a novel composite-loss function. This enables us to alleviate the data scarcity problem and achieve results competitive with the state of the art, while requiring far fewer parameters, on the ColorChecker RECommended dataset. We further study the over-fitting phenomenon on the recently introduced version of the INTEL-TUT Dataset for Camera Invariant Color Constancy Research, which has both field and non-field scenes acquired by three different camera models.
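As a minimal sketch of what such a composite loss could look like (the paper's exact formulation may differ; the angular-error term and the weighting below are illustrative assumptions):

```python
import numpy as np

def composite_loss(x, x_recon, illum_true, illum_pred, alpha=0.5):
    # Weighted sum of an unsupervised reconstruction term (autoencoder)
    # and a supervised illuminant-estimation term (angular error, the
    # usual color constancy error measure).
    recon = np.mean((x - x_recon) ** 2)
    cos = np.dot(illum_true, illum_pred) / (
        np.linalg.norm(illum_true) * np.linalg.norm(illum_pred))
    angular = np.arccos(np.clip(cos, -1.0, 1.0))
    return alpha * recon + (1.0 - alpha) * angular

rng = np.random.default_rng(0)
x = rng.uniform(size=64)
print(composite_loss(x, x + 0.01 * rng.normal(size=64),
                     np.array([1.0, 1.0, 1.0]), np.array([0.9, 1.0, 1.1])))
```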
EXT="Iosifidis, Alexandros"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
A divergence similarity between two color images is presented, based on the Jensen-Shannon divergence, to measure color-distribution similarity. Subjective assessment experiments were conducted to obtain mean opinion scores (MOS) for the test images. It was found that the divergence similarity and MOS values show statistically significant correlations.
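For reference, the Jensen-Shannon divergence between two normalized histograms p and q is JS(p, q) = ½KL(p‖m) + ½KL(q‖m) with m = (p + q)/2. A small sketch, applied here to illustrative per-channel histograms (the paper's exact feature construction may differ):

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    # Jensen-Shannon divergence between two histograms (normalized inside).
    p = np.asarray(p, float) / (np.sum(p) + eps)
    q = np.asarray(q, float) / (np.sum(q) + eps)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Stand-ins for histograms of one color channel of two images:
rng = np.random.default_rng(0)
h1, _ = np.histogram(rng.uniform(size=1000), bins=32, range=(0, 1))
h2, _ = np.histogram(rng.normal(0.5, 0.2, 1000), bins=32, range=(0, 1))
print(js_divergence(h1, h2))
```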
JUFOID=72850
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
We present a new image enhancement algorithm based on combined local and global image processing. The basic idea is to apply the α-rooting image enhancement approach to different image blocks. For this purpose, we split the image, using moving windows, into disjoint blocks of different sizes (8 by 8, 16 by 16, 32 by 32, etc.). The parameter α for every block is driven by optimizing a measure of enhancement (EME). The resulting image is a weighted mean of all processed blocks. This strategy allows obtaining higher-contrast images even in the presence of irregular lighting and brightness gradients. Experimental results are presented to illustrate the performance of the proposed algorithm.
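A minimal sketch of α-rooting for a single block (the EME-driven selection of α and the weighted recombination of blocks are omitted; α slightly below 1 is a typical choice):

```python
import numpy as np

def alpha_rooting_block(block, alpha=0.95):
    # Scale each Fourier magnitude |F(u,v)| to |F(u,v)|**alpha while
    # keeping the phase, which boosts relative high-frequency content.
    F = np.fft.fft2(block)
    F_enh = F * (np.abs(F) + 1e-12) ** (alpha - 1.0)
    return np.real(np.fft.ifft2(F_enh))

block = np.random.default_rng(0).uniform(size=(16, 16))
enhanced = alpha_rooting_block(block)
```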
jufoid=84313
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The problem of increasing the efficiency of blind image quality assessment is considered. No-reference image quality metrics are employed, both independently and as components of complex image processing systems, in various application areas where images are the main carriers of information. Meanwhile, existing no-reference metrics have a significant drawback: a low adequacy to image perception by the human visual system (HVS). Many well-known no-reference metrics are analyzed in our paper for several image databases. A method of combining several no-reference metrics using artificial neural networks is proposed, together with a multi-database verification approach. The effectiveness of the proposed approach is confirmed by extensive experiments.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
A task of assessing the full-reference visual quality of images is considered. The correlation between an array of mean opinion scores (MOS) and the corresponding array of metric values characterizes how well the considered metric corresponds to the HVS. For TID2013, the largest openly available database intended for metric verification, the Spearman correlation is about 0.85 for the best existing HVS-metrics. One simple way to improve the efficiency of assessing visual quality is to combine several metrics. Our work addresses the possibility of using neural networks for this purpose. As learning data, we have used metric values for the images of TID2013, employed as the network inputs. A randomly selected half of the 3000 images of TID2013 has been used at the learning stage, whilst the other half has been exploited for assessing the quality of the neural-network-based HVS-metric. Six metrics that together "cover" all types of distortions well (FSIMc, PSNR-HMA, PSNR-HVS, SFF, SR-SIM, and VIF) have been selected. As the result of NN learning, the Spearman correlation between the NN output and the MOS for the verification set of TID2013 reaches 0.93 for the best NN configuration. This is considerably better than for any particular metric employed as an input (FSIMc being the best among them). An analysis of the designed metric's efficiency is carried out, and its advantages and drawbacks are demonstrated.
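A rough sketch of this kind of metric fusion (with synthetic placeholder data standing in for the six metric values and the MOS; the network architecture here is an assumption, not the paper's best configuration):

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.neural_network import MLPRegressor

# Placeholders: in the paper, X holds the six metric values (FSIMc,
# PSNR-HMA, PSNR-HVS, SFF, SR-SIM, VIF) per image and y holds the MOS.
rng = np.random.default_rng(0)
X = rng.uniform(size=(3000, 6))
y = X @ rng.uniform(size=6) + 0.1 * rng.normal(size=3000)

idx = rng.permutation(len(X))
train, test = idx[:1500], idx[1500:]   # random half/half split

nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
nn.fit(X[train], y[train])
rho, _ = spearmanr(nn.predict(X[test]), y[test])
print(f"Spearman correlation on the held-out half: {rho:.3f}")
```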
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Background: Molecular descriptors have been extensively used in the field of structure-oriented drug design and structural chemistry. They have been applied in QSPR and QSAR models to predict ADME-Tox properties, which specify essential features for drugs. Molecular descriptors capture chemical and structural information, but investigating their interpretation and meaning remains very challenging. Results: This paper introduces a large-scale database of molecular descriptors called COMMODE, containing more than 25 million compounds originating from PubChem. About 2500 DRAGON descriptors have been calculated for all compounds and integrated into this database, which is accessible through a web interface at http://commode.i-med.ac.at.
Research output: Contribution to journal › Article › Scientific › peer-review
Wearable wireless devices are very likely to soon move into the mainstream of our society, led by the rapidly expanding multibillion-dollar health and fitness markets. Should wearable technology sales follow the same pattern as those of smartphones and tablets, these new devices (a.k.a. wearables) will see explosive growth and high adoption rates over the next five years. It also means that wearables will need to become more sophisticated, capturing what the user sees, hears, or even feels. However, with an avalanche of new wearables, we will need to find ways to supply them with low-latency, high-speed data connections to enable truly demanding use cases such as augmented reality. This is particularly true for high-density wearable computing scenarios, such as public transportation, where existing wireless technology may have difficulty supporting stringent application requirements. In this article, we summarize our recent progress in this area with a comprehensive review of current and emerging connectivity solutions for high-density wearable deployments, their relative performance, and open communication challenges.
Research output: Contribution to journal › Article › Scientific › peer-review
Motivation: Digital pathology enables new approaches that expand beyond storage, visualization or analysis of histological samples in digital format. One novel opportunity is 3D histology, where a three-dimensional reconstruction of the sample is formed computationally based on serial tissue sections. This allows examining tissue architecture in 3D, for example, for diagnostic purposes. Importantly, 3D histology enables joint mapping of cellular morphology with spatially resolved omics data in the true 3D context of the tissue at microscopic resolution. Several algorithms have been proposed for the reconstruction task, but a quantitative comparison of their accuracy is lacking. Results: We developed a benchmarking framework to evaluate the accuracy of several free and commercial 3D reconstruction methods using two whole slide image datasets. The results provide a solid basis for further development and application of 3D histology algorithms and indicate that methods capable of compensating for local tissue deformation are superior to simpler approaches.
Research output: Contribution to journal › Article › Scientific › peer-review
Background: Over the last few years, transcriptome sequencing (RNA-Seq) has almost completely taken over microarrays for high-throughput studies of gene expression. Currently, the most popular use of RNA-Seq is to identify genes which are differentially expressed between two or more conditions. Despite the importance of Gene Set Analysis (GSA) in the interpretation of the results from RNA-Seq experiments, the limitations of GSA methods developed for microarrays in the context of RNA-Seq data are not well understood. Results: We provide a thorough evaluation of popular multivariate and gene-level self-contained GSA approaches on simulated and real RNA-Seq data. The multivariate approach employs multivariate non-parametric tests combined with popular normalizations for RNA-Seq data. The gene-level approach utilizes univariate tests designed for the analysis of RNA-Seq data to find gene-specific p-values and combines them into a pathway p-value using classical statistical techniques. Our results demonstrate that the Type I error rate and the power of multivariate tests depend only on the test statistics and are insensitive to the different normalizations. In general, standard multivariate GSA tests detect pathways that do not have any bias in terms of pathway size, percentage of differentially expressed genes, or average gene length in a pathway. In contrast, the Type I error rate and the power of gene-level GSA tests are heavily affected by the methods for combining p-values, and all of the aforementioned biases are present in the detected pathways. Conclusions: Our results emphasize the importance of using self-contained non-parametric multivariate tests for detecting differentially expressed pathways for RNA-Seq data, and warn against applying gene-level GSA tests, especially because of their high Type I error rates for both simulated and real data.
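One classical technique for the p-value combination step is Fisher's method; a minimal sketch with illustrative numbers:

```python
import numpy as np
from scipy.stats import combine_pvalues

# Gene-level p-values for the genes of one pathway (illustrative values).
gene_pvalues = np.array([0.04, 0.20, 0.003, 0.51, 0.08])

# Fisher's method: -2 * sum(log p_i) is chi-squared distributed with
# 2k degrees of freedom under the null hypothesis of no effect.
stat, pathway_p = combine_pvalues(gene_pvalues, method="fisher")
print(f"pathway p-value: {pathway_p:.4f}")
```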
Research output: Contribution to journal › Article › Scientific › peer-review
Illusory vibrotactile movement can be used to provide directional tactile information on the skin. Our research question was how the presentation method affects the perception of vibrotactile movement. An illusion of vibrotactile mediolateral movement was elicited on the left dorsal forearm to investigate cognitive and emotional experiences of vibrotactile stimulation. Eighteen participants were presented with stimuli delivered to a linearly aligned row of three vibrotactile actuators. Three presentation methods (saltation, amplitude modulation, and a hybrid method) were used to form 12 distinct patterns of movement. First, the stimuli were compared pairwise using a two-alternative forced-choice procedure (same-different judgments). Second, the stimuli were rated using three nine-point bipolar scales measuring the continuity, pleasantness, and arousal of each stimulus. The stimuli presented with the amplitude modulation method were rated significantly more continuous and pleasant, and less arousing. Strong correlations between the cognition-related scale of continuity and the emotion-related scales of pleasantness and arousal were found: more continuous stimuli were rated more pleasant and less arousing.
Research output: Contribution to journal › Article › Scientific › peer-review
Research on the indicators of student performance in introductory programming courses has traditionally focused on individual metrics and specific behaviors. These metrics include the amount of time, quantities of steps such as code compilations, the number of completed assignments, and metrics that one cannot acquire from a programming environment. However, the differences in the predictive powers of different metrics and the cross-metric correlations are unclear, and thus there is no generally preferred metric of choice for examining time on task or effort in programming. In this work, we contribute to the stream of research on student time-on-task indicators through the analysis of a multi-source dataset that contains information about students' use of a programming environment, their use of the learning material, as well as self-reported data on the amount of time that the students invested in the course and per-assignment perceptions of workload, educational value, and difficulty. We compare and contrast metrics from the dataset with course performance. Our results indicate that traditionally used metrics from the same data source tend to form clusters that are highly correlated with each other, but correlate poorly with metrics from other data sources. Thus, researchers should utilize multiple data sources to gain a more accurate picture of students' learning.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Terahertz pulse time-domain holography (THz PTDH) is a powerful technique for both the measurement of object optical properties and broadband wavefront sensing. However, THz PTDH has a notable restriction connected with its low signal-to-noise ratio, which becomes a serious issue in coherent measurements. This noise problem can be addressed by filtering with modern block-matching algorithms that exploit the nonlocal similarity of small image patches in the investigated objects. Here we present a study on the use of denoising algorithms applied to hyperspectral THz data in the spatio-temporal and spatial-spectral domains. We provide a numerical simulation of denoising in the case of a broadband uniform topologically charged (BUTCH) beam of pulsed THz radiation.
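As a simple stand-in for the block-matching filters studied in the paper (which, like BM3D-family methods, exploit nonlocal patch similarity), the sketch below denoises one spatial slice with non-local means from scikit-image; the data and parameters are illustrative:

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

# Synthetic stand-in for one spatial slice of a hyperspectral THz cube.
rng = np.random.default_rng(0)
clean = np.outer(np.hanning(128), np.hanning(128))
noisy = clean + 0.15 * rng.normal(size=clean.shape)

sigma = float(np.mean(estimate_sigma(noisy)))
denoised = denoise_nl_means(noisy, patch_size=7, patch_distance=11,
                            h=1.15 * sigma, fast_mode=True)
```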
jufoid=71479
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this paper we present a complex elevator system design structure matrix (DSM). The DSM was created with system experts to enable solving complex system development problems via a product DSM. The data is intended to be used as a case study in a DSM design sprint, and it was created to show the diversity of findings that can be ascertained from a single DSM matrix. In the spirit of open science, we present both the DSM and the design sprint to enable other researchers to replicate, reproduce, or otherwise build on the same source of data.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
One of the main approaches to additional lossless compression of JPEG images is decoding the quantized values of discrete cosine transform (DCT) coefficients and then recompressing the coefficients more effectively. The amplitudes of DCT coefficients are highly correlated, and it is possible to compress them effectively. At the same time, the signs of DCT coefficients, which occupy up to 20% of the compressed image, are often considered unpredictable. In this paper, a new and effective method for compressing the signs of quantized DCT coefficients is proposed. The proposed method takes into account both the correlation between DCT coefficients of the same block and the correlation between DCT coefficients of neighboring blocks. For each of the 64 DCT coefficients, the positions of 3 reference coefficients inside the block are determined and stored in the compressed file. Four reference coefficients with fixed positions are used from the neighboring blocks. For all reference coefficients, 15 frequency models are used to predict the sign of a given coefficient. All 7 probabilities (that the sign is negative) are mixed by logistic mixing. For a test set of JPEG images, we show that the proposed method compresses the signs of DCT coefficients by a factor of 1.1 to 1.3, significantly outperforming its closest analogues.
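Logistic mixing combines several probability estimates in the logit domain, as in context-mixing compressors; a minimal sketch (fixed equal weights for illustration, whereas a real coder would adapt them online):

```python
import numpy as np

def logistic_mixing(probs, weights):
    # Stretch each probability to a logit, mix with weights, squash back.
    p = np.clip(np.asarray(probs, float), 1e-6, 1 - 1e-6)
    stretched = np.log(p / (1.0 - p))
    return 1.0 / (1.0 + np.exp(-np.dot(weights, stretched)))

# e.g. seven per-reference-coefficient estimates that the sign is negative:
p_mixed = logistic_mixing([0.70, 0.55, 0.80, 0.60, 0.52, 0.65, 0.58],
                          np.full(7, 1.0 / 7.0))
print(p_mixed)
```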
jufoid=84313
EXT="Lukin, Vladimir"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this research, two radiofrequency identification (RFID) antenna sensor designs are tested for compressive strain measurement. The first design is a passive (battery-free) folded patch antenna sensor with a planar dimension of 61 mm × 69 mm. The second design is a slotted patch antenna sensor, whose dimension is reduced to 48 mm × 44 mm by introducing slots on the antenna's conducting layer to detour the surface current path. A three-point bending setup is fabricated to apply compression on a tapered aluminum specimen mounted with an antenna sensor. Coupled mechanics-electromagnetics simulation shows that the antenna resonance frequency shifts when each antenna sensor is under compressive strain. Extensive compression tests are conducted to verify the strain sensing performance of the two sensors. Experimental results confirm that the resonance frequency of each antenna sensor increases in an approximately linear relationship with compressive strain. The compressive strain sensing performance of the two RFID antenna sensors, including strain sensitivity and coefficient of determination, is evaluated based on the experimental data.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In-line lensless holography is considered with random phase modulation at the object plane. The forward wavefront propagation is modelled using the Fourier transform with the angular spectrum transfer function. The multiple intensities (holograms) recorded by the sensor are random due to the random phase modulation, and are noisy with a Poissonian noise distribution. It is shown by computational experiments that high-accuracy reconstructions can be achieved with resolution going up to two-thirds of the wavelength. With respect to the sensor pixel size, this is super-resolution by a factor of 32. The algorithm designed for optimal super-resolution phase/amplitude reconstruction from Poissonian data is based on the general methodology developed for phase retrieval with pixel-wise resolution in V. Katkovnik, "Phase retrieval from noisy data based on sparse approximation of object phase and amplitude", http://www.cs.tut.fi/∼lasip/DDT/index3.html.
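The forward model named above, angular-spectrum propagation via the Fourier transform, can be sketched as follows (a generic textbook implementation with illustrative parameters, not the authors' code):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    # Propagate a complex field over distance z: multiply its spectrum by
    # H = exp(i*2*pi*z/lambda * sqrt(1 - (lambda*fx)^2 - (lambda*fy)^2)).
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2.0 * np.pi * z / wavelength
               * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

field = np.ones((256, 256), dtype=complex)          # plane-wave aperture
out = angular_spectrum_propagate(field, 633e-9, 1e-6, 1e-3)
```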
jufoid=71479
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
We demonstrate the computation of total dynamic multipole polarizabilities using the path-integral Monte Carlo (PIMC) method. The PIMC approach enables accurate thermal and nonadiabatic mixing of electronic, rotational, and vibrational degrees of freedom. Therefore, we can study the thermal effects, or lack thereof, in the full multipole spectra of the chosen one- and two-electron systems: H, Ps, He, Ps2, H2, and HD+. We first compute multipole-multipole correlation functions up to the octupole order in imaginary time. The real-domain spectral function is then obtained by analytical continuation with the maximum entropy method. In general, the sharpness of the active spectra is limited, but the obtained off-resonant polarizabilities are in good agreement with the existing literature. Several weak and strong thermal effects are observed. Furthermore, the polarizabilities of Ps2 and some higher-multipole and higher-frequency data have not been published before. In addition, we compute the isotropic dispersion coefficients C6, C8, and C10 between pairs of species using the simplified Casimir-Polder formulas.
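For reference, the leading simplified Casimir-Polder formula expresses the dipole-dipole dispersion coefficient between species A and B as an integral of their dynamic dipole polarizabilities over imaginary frequency,

$$C_6^{AB} \;=\; \frac{3}{\pi}\int_0^{\infty} \alpha_A(i\omega)\,\alpha_B(i\omega)\,\mathrm{d}\omega,$$

with analogous expressions for C8 and C10 involving the higher (quadrupole and octupole) polarizabilities.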
Research output: Contribution to journal › Article › Scientific › peer-review
The storage battery is the most important and decisive component in a telecom DC UPS system, determining its reliability and availability performance. The deployment of valve-regulated lead-acid (VRLA) batteries into telecom networks started an era of unbelievable problems and gradual deterioration of credibility. Battery condition monitoring has become a necessity, not just a way to boost reliability. The behavior of a storage battery resembles, in many respects, human behavior, making it difficult or almost impossible to draw viable conclusions or to predict future behavior from the obtainable data. This paper surveys the available methods for and problems in assessing the state of health of VRLA batteries, and proposes a suitable method based on soft computing principles.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The Manufacturing Enterprise Solutions Association (MESA) provided the abstract and general definition of Manufacturing Execution Systems (MES), in which a dedicated function is reserved for data collection activities. In this matter, the Cloud Collaborative Manufacturing Networks (C2NET) project aims to provide a cloud-based platform for hosting the interactions of the supply chain in a collaborative network. Within the architecture of the C2NET project, a Data Collection Framework (DCF) is designed to fulfill the data collection function. This allows companies to provide their data, which can be both enterprise data and Internet of Things (IoT) device data, to the platform for further use. The collection of the data is achieved by a specific third-party application, the Legacy System Hub (LSH). This research work presents an approach for configuring and visualizing the data resources in the C2NET platform. The approach employs web-based applications with the help of the LSH, which permits the C2NET platform to adapt to any kind of third-party application that manipulates enterprise data, following the generic and flexible nature of this solution.
INT=aut,"Jose, L."
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This study presents preliminary work on the detection of shape features through conformal mapping of the surface of the human scapula. The approach employs Ricci-flow-based uniformization of the surface topology towards its canonical domain, a sphere. The resulting evolution of the surface generates a distribution of conformal factors over the surface. The local maxima and minima of this distributed parameter are used as candidate representations of local shape features. The procedure was tested on 5 scapulae, and the detected features were compared to 16 manual annotations on each scapula. 3 out of the 16 landmarks were closely approximated by the detected features, with an average distance of less than 2.1 mm. Visual inspection reveals other detected features that show apparent consistency in their anatomical location on the surface of the scapula.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The atmospheric window at 3 to 5 μm is one of the most important spectral regions for molecular spectroscopy. This region accommodates strong fundamental vibrational spectra of several interesting molecules, including species relevant for air quality monitoring, medical diagnostics, and fundamental research. These applications require excellent spectroscopic sensitivity and selectivity. For example, atmospheric research often needs precise quantification of trace gas fractions down to the parts-per-trillion level (10⁻¹²), with the capability of resolving individual spectral features of different molecular compounds. This sets stringent requirements for the light source of the spectrometer in terms of output power, noise, and linewidth. In addition, the wavelength tuning range of the light source needs to be large, preferably over the entire atmospheric window, in order to enable measurements of the molecular fingerprints of several compounds. Continuous-wave optical parametric oscillators (CW-OPOs) are one of the few light sources that can potentially combine all of these favorable characteristics. This contribution summarizes our progress in the development of CW-OPOs, with an emphasis on precise frequency control methods for high-resolution molecular spectroscopy. Examples of new applications enabled by the advanced CW-OPO technologies will be presented, including a demonstration of world-record detection sensitivity in trace gas analysis, as well as the first characterization of the infrared spectrum of radioactive methane, 14CH4.
jufoid=71479
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
We propose shearlet-decomposition-based light field (LF) reconstruction and filtering techniques for mitigating artifacts in the visualized contents of 3D multiview displays. Using the LF reconstruction capability, we first obtain the densely sampled light field (DSLF) of the scene from a sparse set of view images. We design the filter by tiling the Fourier domain of the epipolar image with shearlet atoms that are directionally and spatially localized versions of the desired display passband. In this way, it becomes possible to process the DSLF in a depth-dependent manner. That is, the problematic areas in the 3D scene that are outside of the display depth of field (DoF) can be selectively filtered without sacrificing high details in the areas near the display, i.e., inside the DoF. The proposed approach is tested on a synthetic scene, and the achieved improvements in the quality of the visualized content are verified, where the visualization process is simulated using a geometrical-optics model of the human eye.
jufoid=84313
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Recent device shipment trends strongly indicate that the number of Web-enabled devices other than PCs and smartphones is growing rapidly. Marking the end of the dominant era of these two traditional device categories, people will soon commonly use various types of Internet-connected devices in their daily lives, with no single device dominating. Since today's devices are mostly standalone and only stay in sync in limited ways, new approaches are needed for mastering the complexity arising from a world of many types of devices, created by different manufacturers and implementing competing standards. Today, the most common denominator for dealing with the differences is using clouds. Unfortunately, however, while the cloud is well suited for numerous activities, it also has serious limitations, especially for systems that consist of numerous battery-powered computing devices with limited connectivity. In this paper, we provide insight into our research, in which totally cloud-based orchestration of cooperating devices is partitioned into more local actions, where constant communication with the cloud backend can be at least partially omitted.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Purpose: The current study aims to investigate whether different measures related to online psychosocial well-being and online behavior correlate with social media fatigue.
Design/methodology/approach: To understand the antecedents and consequences of social media fatigue, the stressor-strain-outcome (SSO) framework is applied. The study consists of two cross-sectional surveys that were organized with young-adult students. Study A was conducted with 1,398 WhatsApp users (aged 19 to 27 years), while Study B was organized with 472 WhatsApp users (aged 18 to 23 years).
Findings: Intensity of social media use was the strongest predictor of social media fatigue. Online social comparison and self-disclosure were also significant predictors of social media fatigue. The findings also suggest that social media fatigue further contributes to a decrease in academic performance.
Originality/value: This study builds upon the limited yet growing body of literature on a theme highly relevant for scholars and practitioners as well as social media users. The current study focuses on examining different causes of social media fatigue induced through the use of a highly popular mobile instant messaging app, WhatsApp. The SSO framework is applied to explore and establish empirical links between stressors and social media fatigue.
Research output: Contribution to journal › Article › Scientific › peer-review
This research studies the performance of a battery-free wireless antenna sensor for measuring crack propagation. In our previous work, a battery-free folded patch antenna was designed for wireless strain and crack sensing. When experiencing deformation, the antenna shape changes, causing a shift in the electromagnetic resonance frequency of the antenna. The wireless interrogation system utilizes the principle of electromagnetic backscattering and adopts off-the-shelf 900 MHz radiofrequency identification (RFID) technology. Following the same sensing mechanism, a slotted patch antenna sensor of smaller size is designed. The antenna detours the surface current using slot patterns, so that the effective electrical length is kept similar to that of the previous folded patch antenna. As a result, the sensor footprint is reduced while the antenna resonance frequency is maintained within the 900 MHz RFID band. To validate the sensor performance for crack sensing, a fatigue crack experiment is conducted on a steel compact-tension specimen. A slotted patch antenna sensor is installed at the center of the A36 steel specimen. For wireless interrogation, a Yagi reader antenna is placed 36 in. away from the antenna sensor to wirelessly measure the resonance frequency shift of the antenna sensor. The measurement is taken after every 10,000 loading cycles, until the specimen fails. Meanwhile, the length and width of the fatigue crack are also recorded. Finally, the resonance frequency shift of the antenna sensor is correlated with the crack length and width at each loading stage.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this paper, we show that the usual material form of the Newmark time-stepping scheme for finite rotations is only a simplified version of the correct formula. This is because the spin material rotation vectors, angular velocity vectors, and angular acceleration vectors belong to different tangential vector spaces of the manifold at separate time moments. We give corrected Newmark time-stepping schemes for the material description and for the spatial description.
Contribution: organisation=tme,FACT1=1
Research output: Contribution to journal › Article › Scientific › peer-review
Dimensional Analysis Conceptual Modelling (DACM) is a framework for conceptual modelling and simulation in system and product design. The framework is based on cause-effect analysis between the variables and functions in a problem. This article presents an approach that mobilizes concepts from the DACM framework to help solve high-dimensional, expensive optimization problems at lower computational cost. The approach fundamentally utilizes theories and concepts from well-practised dimensional analysis, functional modelling, and bond graphs. Statistical design-of-experiments theory is also utilized in the framework to measure the impact levels of variables on the objective. The article focuses on simplifying and decomposing expensive problems before optimizing them. To illustrate the approach, a case study on the performance optimization of a cross-flow micro hydro turbine is presented. The DACM-assisted optimization converges faster and returns better results than optimization without it. A single-step simplification approach is employed in the case study, and it returns a better average optimization result with only about one fifth of the function evaluations compared to optimization using the original model.
Research output: Contribution to journal › Article › Scientific › peer-review
Unmanned ships, such as remotely controlled boats and autonomous vessels, are expected to become operational by 2020, marking a technological revolution for the maritime industry. Such ships are expected to serve needs ranging from coastal ferries to open-sea cargo handling. In this paper we detail the security vulnerabilities of such unmanned ships. The attack surface, as well as motivations for attack attempts, is also discussed to provide a perspective on how and why attacks are undertaken. Finally, defence strategies are proposed as countermeasures.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Quantum walks serve as novel tools for performing efficient quantum computation and simulation. In a recent experimental demonstration [1] we realized photonic quantum walks for simulating cyclic quantum systems, such as hexagonal lattices or aromatic molecules like benzene. In that experiment we explored the wave-function dynamics and the probability distribution of a quantum particle located on a six-site system (with periodic boundary conditions), along with simpler demonstrations of three- and four-site systems, under various initial conditions. Localization and revival of the wave function were demonstrated. After revisiting that experiment, we will theoretically analyze the case of noisy quantum walks by implementing the bit-phase flip channel. This will allow us to draw conclusions regarding the performance of our photonic quantum simulation in noisy environments. Finally, we will briefly outline some future directions.
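A noiseless continuous-time quantum walk on the six-site cycle can be reproduced numerically in a few lines (a numerical stand-in for the photonic experiment; the bit-phase flip channel would be added on top of this unitary evolution):

```python
import numpy as np
from scipy.linalg import expm

# Adjacency matrix of the six-site cycle (periodic boundary conditions).
n = 6
A = np.zeros((n, n))
for j in range(n):
    A[j, (j + 1) % n] = A[(j + 1) % n, j] = 1.0

psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0                                   # walker localized on site 0

for t in (0.0, 1.0, 2.0, 3.0):
    psi_t = expm(-1j * t * A) @ psi0            # U(t) = exp(-i A t)
    print(t, np.round(np.abs(psi_t) ** 2, 3))   # site occupation probabilities
```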
jufoid=71479
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this paper we present ensembles of classifiers for automated animal audio classification, exploiting different data augmentation techniques for training Convolutional Neural Networks (CNNs). The specific animal audio classification problems concern (i) bird and (ii) cat sounds, for which datasets are freely available. We train five different CNNs on the original datasets and on versions augmented by four augmentation protocols, working on the raw audio signals or on their spectrogram representations. We compare our best approaches with the state of the art, showing that we obtain the best recognition rates on the same datasets, without ad hoc parameter optimization. Our study shows that different CNNs can be trained for the purpose of animal audio classification and that their fusion works better than the stand-alone classifiers. To the best of our knowledge, this is the largest study on data augmentation for CNNs in animal audio classification that uses the same set of classifiers and parameters across datasets. Our MATLAB code is available at https://github.com/LorisNanni.
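Two typical raw-audio augmentations of the kind such protocols build on, sketched in Python for illustration (the paper's four protocols and their MATLAB implementation differ in detail):

```python
import numpy as np

def augment_waveform(x, rng):
    # Random circular time shift plus additive Gaussian noise at a random SNR.
    x = np.roll(x, rng.integers(-len(x) // 10, len(x) // 10))
    snr_db = rng.uniform(20, 40)
    noise_power = np.mean(x ** 2) / (10.0 ** (snr_db / 10.0))
    return x + rng.normal(0.0, np.sqrt(noise_power), size=len(x))

rng = np.random.default_rng(0)
clip = np.sin(2 * np.pi * 440 * np.arange(22050) / 22050)  # stand-in clip
augmented = augment_waveform(clip, rng)
```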
Research output: Contribution to journal › Article › Scientific › peer-review
In spite of advances in the theory of formal specifications, they have not gained wide popularity in the software development industry. This could be due to difficulties in understanding them or in positioning them within current work practices; however, we believe that one major problem is that tool support still does not make the use of formal specifications easy enough for the software developer. We discuss the functionality required for comprehensive tool support for executable DisCo specifications, propose a tool architecture based on database technology, and finally discuss our implementation of the core part of the tool set.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Recently, the use of neural networks for image classification has become widespread. Thanks to the availability of increased computational power, better-performing architectures, such as deep neural networks, have been designed. In this work, we propose a novel image representation framework exploiting the Deep p-Fibonacci scattering network. The architecture is based on structured p-Fibonacci scattering over graph data. This approach provides good classification accuracy while reducing computational complexity. Experimental results demonstrate that the performance of the proposed method is comparable to state-of-the-art unsupervised methods while being computationally more efficient.
jufoid=84313
EXT="Battisti, F."
EXT="Carli, M."
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Employees in organizations face technostress, that is, stress from information technology (IT) use. Although technostress is a highly prevalent organizational phenomenon, there is a lack of theory-based understanding of how IT users can cope with it. We theorize and validate a model of deliberate proactive and instinctive reactive coping for technostress. Drawing from theories on coping, our model posits that the reactive coping behaviors of distress venting and distancing from IT can alleviate technostress by diminishing the negative effect of technostress creators on IT-enabled productivity. The proactive coping behaviors of positive reinterpretation and IT control can help IT users by influencing the extent to which reactive coping behaviors are effective and by positively influencing IT-enabled productivity. The findings of a cross-sectional survey study of 846 organizational IT users support the model. The paper provides a new theoretical contribution by identifying ways in which organizational IT users can cope with technostress.
EXT="Makkonen, Markus"
Research output: Contribution to journal › Article › Scientific › peer-review
As the ratification of 5G New Radio technology is being completed, enabling network architectures are expected to undertake a matching effort. Conventional cloud and edge computing paradigms may thus become insufficient in supporting the increasingly stringent operating requirements of intelligent IoT devices that can move unpredictably and at high speeds. Complementing these, the concept of fog emerges to deploy cooperative cloud-like functions in the immediate vicinity of various moving devices, such as connected and autonomous vehicles, on the road and in the air. Envisioning the gradual evolution of these infrastructures toward an increasingly denser geographical distribution of fog functionality, in this work we put forward the vision of dense moving fog for intelligent IoT applications. To this aim, we review the recent powerful enablers, outline the main challenges and opportunities, and corroborate the performance benefits of collaborative dense fog operation in a characteristic use case.
Research output: Contribution to journal › Article › Scientific › peer-review
Density functional theory calculations have been carried out to investigate B80 fullerene exohedrally and endohedrally doped with 3d, Pd, and Pt transition metal (TM) atoms. We find that the most preferred doping site of the TM atom gradually moves from the outer surface (TM = Sc), to the inner surface (TM = Ti and V) and the center (TM = Cr, Mn, Fe and Zn), and then to the outer surface (TM = Co, Ni, Cu, Pd, and Pt) again as the TM atom varies from Sc to Pt. From the formation energy calculations, we find that doping with a TM atom can further improve the stability of B80 fullerene. The magnetic moments of the doped V, Cr, Mn, Fe, Co and Ni atoms are reduced from their free-atom values, and those of the other TM atoms are completely quenched. Charge transfer and hybridization between the 4s and 3d states of the TM and the 2s and 2p states of B were observed. The energy gaps of TM@B80 are usually smaller than that of pure B80. B80 fullerene endohedrally doped with two Mn or two Fe atoms was also considered. It is found that the antiferromagnetic (AFM) state is energetically more favorable than the ferromagnetic (FM) state for both Mn2@B80 and Fe2@B80. The Mn and Fe atoms carry residual magnetic moments of ∼3 μB and ∼2 μB, respectively, in the AFM states.
Research output: Contribution to journal › Article › Scientific › peer-review
RGB-D sensors are relatively inexpensive and are commercially available off-the-shelf. However, owing to their low complexity, several artifacts are encountered in the depth map: holes, misalignment between the depth and color images, and a lack of sharp object boundaries. Depth maps generated by Kinect cameras also contain a significant number of missing pixels and strong noise, limiting their usability in many computer vision applications. In this paper, we present an efficient hole-filling and damaged-region restoration method that improves the quality of depth maps obtained with the Microsoft Kinect device. The proposed approach is based on modified exemplar-based inpainting and LPA-ICI filtering, exploiting the correlation between color and depth values in local image neighborhoods. As a result, the edges of objects are sharpened and aligned with the objects in the color image. Several examples considered in this paper show the effectiveness of the proposed approach for removing large holes as well as recovering small regions in several test depth maps. We perform a comparative study and show that, statistically, the proposed algorithm delivers superior-quality results compared to existing algorithms.
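As a simple stand-in for the paper's combination of exemplar-based inpainting and LPA-ICI filtering, the sketch below fills depth holes with OpenCV's Telea inpainting on synthetic data (illustrative only; it ignores the guiding color image):

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
depth = np.full((120, 160), 128, dtype=np.uint8)   # synthetic 8-bit depth map
depth[40:80, 60:100] = 200                         # a foreground object
hole_mask = (rng.uniform(size=depth.shape) < 0.05).astype(np.uint8)
depth[hole_mask == 1] = 0                          # knock out "missing" pixels

filled = cv2.inpaint(depth, hole_mask, inpaintRadius=3,
                     flags=cv2.INPAINT_TELEA)
```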
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper presents an end-user-oriented approach for describing mobile devices as RESTful services. The mobile services are provided to end-users through a centralized server. To enable plugging in of devices, each device provides a machine-processable device description with a detailed specification of its RESTful API. The device description is used to generate the required user interface as well as the RESTful invocations. We provide general guidelines on how to design a REST API for a mobile device and a device description for machine-to-machine interactions. The approach is demonstrated by building a centralized marketplace to promote and use available mobile services; the central marketplace acts as a broker for the dynamic mobile services. In addition, we use two case-study applications to demonstrate service registration, provisioning, and usage.
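A minimal sketch of what such a machine-processable device description could look like (the schema and all field names here are hypothetical illustrations, not the paper's format):

```python
# Hypothetical device description; a UI generator can render one control
# per resource and issue the corresponding HTTP call when activated.
device_description = {
    "device": "phone-123",
    "base_url": "http://marketplace.example.com/devices/phone-123",
    "resources": [
        {"path": "/camera/photo", "method": "POST",
         "returns": "image/jpeg", "description": "Take a photo"},
        {"path": "/location", "method": "GET",
         "returns": "application/json", "description": "Current GPS fix"},
    ],
}

for r in device_description["resources"]:
    print(f'{r["method"]} {device_description["base_url"]}{r["path"]}')
```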
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this work, a slotted patch antenna is employed as a wireless sensor for monitoring structural strain and fatigue cracks. Using antenna miniaturization techniques to increase the current path length, the footprint of the slotted patch antenna is reduced to one quarter of that of a previously presented folded patch antenna. Electromagnetic simulations show that the antenna resonance frequency varies when the antenna is under strain. The resonance frequency variation can be wirelessly interrogated and recorded by a radiofrequency identification (RFID) reader, and can be used to derive strain/deformation. The slotted patch antenna sensor is entirely passive (battery-free), exploiting an inexpensive off-the-shelf RFID chip that receives its power from the reader's wireless interrogation.
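To first order (an idealized model that neglects fringing fields and substrate effects), the resonance frequency of a patch antenna of electrical length L under axial strain ε scales as

$$f_r(\varepsilon) \;=\; \frac{c}{2L(1+\varepsilon)\sqrt{\epsilon_{\mathrm{eff}}}} \;\approx\; f_r(0)\,(1-\varepsilon),$$

so the relative frequency shift is approximately −ε: tensile strain lowers the resonance frequency and compressive strain raises it, which is the basis of the wireless strain readout.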
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Development of multimedia systems that can be targeted to different platforms is challenging due to the need for rigorous integration between high-level abstract modeling, and low-level synthesis and optimization. In this paper, a new dataflow-based design tool called the targeted dataflow interchange format is introduced for retargetable design, analysis, and implementation of embedded software for multimedia systems. Our approach provides novel capabilities, based on principles of task-level dataflow analysis, for exploring and optimizing interactions across design components; object-oriented data structures for encapsulating contextual information for components; a novel model for representing parameterized schedules that are derived from repetitive graph structures; and automated code generation for programming interfaces and low-level customizations that are geared toward high-performance embedded-processing architectures. We demonstrate our design tool for cross-platform application design, parameterized schedule representation, and associated dataflow graph-code generation using a case study centered around an image registration application.
Research output: Contribution to journal › Article › Scientific › peer-review
In this technical note we study robust output tracking for autonomous linear systems. We introduce a new approach to designing robust controllers, using a recent observation that a full internal model is not always necessary for robustness. This may especially be the case if the control law is only required to be robust with respect to a specific predetermined class of uncertainties in the parameters of the plant. The results are illustrated with an example on robust output tracking for coupled harmonic oscillators.
Research output: Contribution to journal › Article › Scientific › peer-review
The use of extremely high frequency (EHF) bands, known as millimeter-wave (mmWave) frequencies, requires densification of cells to maintain system performance at the required levels. This may lead to a potential increase of interference in practical mmWave networks, thus making it the limiting factor. On the other hand, the attractive utilization of dual-polarized antennas may improve this situation by mitigating some of the interfering components, which can be employed as part of interference control techniques. In this paper, an accurate two-stage ray-based characterization is conducted that models interference-related metrics while taking into account a detailed dual-polarized antenna model. In particular, we confirm that narrow pencil-beam antennas (HPBW = 13°) have significant advantages compared to antennas with relatively narrow beams (HPBW = 20° and HPBW = 50°) in environments with high levels of interference. Additionally, we demonstrate that in the Manhattan grid deployment a transition from the interference- to the noise-limited regime and back occurs at cell inter-site distances of under 90 m and over 180 m, respectively.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Wrist photoplethysmography (PPG) allows unobtrusive monitoring of the heart rate (HR). PPG is affected by the capillary blood perfusion and the pumping function of the heart, which generally deteriorate with age and in the presence of cardiac arrhythmia. The performance of wrist PPG in monitoring beat-to-beat HR in older patients with arrhythmia has not been reported earlier. We monitored PPG from the wrist in 18 patients recovering from surgery in the post-anesthesia care unit, and evaluated the inter-beat interval (IBI) detection accuracy against ECG-based R-to-R intervals (RRI). Nine subjects had sinus rhythm (SR, 68.0 y ± 10.2 y, 6 males) and nine subjects had atrial fibrillation (AF, 71.3 y ± 7.8 y, 4 males) during the recording. For the SR group, 99.44% of the beats were correctly identified, 2.39% extra beats were detected, and the mean absolute error (MAE) was 7.34 ms. For the AF group, 97.49% of the heartbeats were correctly identified, 2.26% extra beats were detected, and the MAE was 14.31 ms. The IBIs from the PPG were hence in close agreement with the ECG reference in both groups. The results suggest that wrist PPG provides a comfortable alternative to ECG during low motion and can be used for long-term monitoring and screening of AF episodes.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The paper proposes a method for the detection of bubble-like transparent objects in a liquid. The detection problem is non-trivial, since bubble appearance varies considerably due to different lighting conditions causing contrast reversal and multiple interreflections. We formulate the problem as the detection of concentric circular arrangements (CCAs). The CCAs are recovered in a hypothesize-optimize-verify framework. The hypothesis generation is based on sampling from the partially linked components of the non-maximum-suppressed responses of oriented ridge filters, and is followed by the CCA parameter estimation. Parameter optimization is carried out by minimizing a novel cost function. The performance was tested on gas dispersion images of pulp suspension and on oil dispersion images. The mean error of gas/oil volume estimation was used as the performance criterion, since the main goal of the applications driving the research was bubble volume estimation. The method achieved gas and oil volume estimation errors of 28% and 13%, respectively, outperforming the OpenCV Circular Hough Transform in both cases and the WaldBoost detector in gas volume estimation.
Research output: Contribution to journal › Article › Scientific › peer-review
Haptics has been an integral part of multimodal systems in Human-Computer Interaction (HCI). The ability to touch and sense virtual components of a system has long been the holy grail of HCI, and is particularly useful in mission-critical environments where other modalities are weakened by environmental noise. Haptics also complements most modalities of interaction by reinforcing the intimate and personal aspect of interaction, and it becomes much more important in environments that prove far too noisy for audio feedback. The driving environment is one such area, in which the addition of haptics is not just additive but critical to HCI. However, most of the research on haptic feedback in the car has been conducted using vibrotactile feedback. In this paper, we present a system in which we have developed a novel haptic feedback environment using pneumatic and vibrotactile technologies to facilitate in-car communication through the in-vehicle infotainment system. Our aim was to build on users' haptic perception and experience in advancing multimodal interaction by utilizing the available feedback techniques in in-car interaction. The qualitative results of our study show that haptic feedback has great potential for safety and communication use, but the difficulty of interpreting haptic signals requires additional translation means ('semantic linkages') to support the correct interpretation of the haptic information.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
With the UK climate projected to warm in future decades, there is an increased research focus on the risks of indoor overheating. Energy-efficient building adaptations may modify a building's risk of overheating and the infiltration of air pollution from outdoor sources. This paper presents the development of a national model of indoor overheating and air pollution, capable of modelling the existing and future building stocks, along with changes to the climate, outdoor air pollution levels, and occupant behaviour. The model presented is based on a large number of EnergyPlus simulations run in parallel. A metamodelling approach is used to create a model that estimates the indoor overheating and air pollution risks for the English housing stock. The performance of neural networks (NNs) is compared to that of a support vector regression (SVR) algorithm when forming the metamodel. NNs are shown to give almost 50% better overall performance than SVR.
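The NN-versus-SVR metamodel comparison can be sketched as follows (with synthetic placeholder data standing in for the EnergyPlus inputs and outputs; the model sizes are illustrative):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# Placeholders: X would hold building/climate parameters of the EnergyPlus
# runs, y the simulated overheating (or indoor pollution) metric.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 8))
y = X @ rng.uniform(size=8) + 0.1 * rng.normal(size=500)

for name, model in [("NN", MLPRegressor(hidden_layer_sizes=(32, 32),
                                        max_iter=5000, random_state=0)),
                    ("SVR", SVR(kernel="rbf", C=10.0))]:
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {score:.3f}")
```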
Research output: Contribution to journal › Article › Scientific › peer-review
Graphical user interfaces are widely common and present in everyday human-computer interaction, predominantly in computers and smartphones. Today, various actions are performed via graphical user interface elements, e.g., windows, menus and icons. An attractive user interface that adapts to user needs and preferences is increasingly important, as it often allows personalized information processing that facilitates interaction. However, practitioners and scholars have lacked an instrument for measuring user perception of aesthetics within graphical user interface elements to aid in creating successful graphical assets. Therefore, we studied the dimensionality of ratings of different perceived aesthetic qualities in GUI elements as the foundation for such a measurement instrument. First, we devised a semantic differential scale of 22 adjective pairs by combining prior scattered measures. We then conducted a vignette experiment with random participant (n = 569) assignment to evaluate 4 icons from a total of 68 pre-selected game app icons across 4 categories (concrete, abstract, character and text) using the semantic scales. This resulted in a total of 2276 individual icon evaluations. Through exploratory factor analyses, the observations converged into 5 dimensions of perceived visual quality: Excellence/Inferiority, Graciousness/Harshness, Idleness/Liveliness, Normalness/Bizarreness and Complexity/Simplicity. We then conducted confirmatory factor analyses to test the model fit of the 5-factor model, both with all 22 adjective pairs and with an adjusted version of 15 adjective pairs. Overall, this study developed, validated, and consequently presents a measurement instrument for perceptions of the visual qualities of graphical user interfaces and/or singular interface elements (VISQUAL) that can be used in multiple ways in several contexts related to visual human-computer interaction, interfaces and their adaptation.
Research output: Contribution to journal › Article › Scientific › peer-review
Context: DevOps is considered important for the ability to frequently and reliably update a system in an operational state. DevOps presumes cross-functional collaboration and automation between software development and operations. DevOps adoption and implementation in companies is non-trivial due to required changes in technical, organisational and cultural aspects. Objectives: This exploratory study presents detailed descriptions of how DevOps is implemented in practice. The context of our empirical investigation is web application and service development in small and medium sized companies. Method: A multiple-case study was conducted in five different development contexts with successful DevOps implementations, since their benefits, such as quick releases and minimal deployment errors, were achieved. Data was mainly collected through interviews with 26 practitioners and observations made at the companies. Data was analysed by first coding each case individually using a set of predefined themes and thereafter performing a cross-case synthesis. Results: Our analysis yielded the following results: (i) the software development team attaining ownership of and responsibility for deploying software changes in production is crucial in DevOps; (ii) toolchain usage and support in deployment pipeline activities accelerate the delivery of software changes, bug fixes and the handling of production incidents; (iii) the delivery speed to production is affected by context factors, such as manual approvals by the product owner; (iv) a steep learning curve for new skills is experienced by both software developers and operations staff, who also have to cope with working under pressure. Conclusion: Our findings contribute to the overall understanding of the DevOps concept, its practices and its perceived impacts, particularly in small and medium sized companies. We discuss two practical implications of the results.
EXT="Mikkonen, Tommi"
Research output: Contribution to journal › Article › Scientific › peer-review
This paper addresses the challenge of dense pixel correspondence estimation between two images. This problem is closely related to the optical flow estimation task, where ConvNets (CNNs) have recently achieved significant progress. While optical flow methods produce very accurate results for small pixel translations and limited appearance variation, they hardly deal with the strong geometric transformations that we consider in this work. In this paper, we propose a coarse-to-fine CNN-based framework that can leverage the advantages of optical flow approaches and extend them to the case of large transformations, providing dense and subpixel-accurate estimates. It is trained on synthetic transformations and demonstrates very good performance on unseen, realistic data. Further, we apply our method to the problem of relative camera pose estimation and demonstrate that the model outperforms existing dense approaches.
jufoid=57596
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper digs deeper into the factors that influence egocentric gaze. Instead of training deep models for this purpose in a blind manner, we propose to inspect the factors that contribute to gaze guidance during daily tasks. Bottom-up saliency and optical flow are assessed against strong spatial prior baselines. Task-specific cues such as the vanishing point, manipulation point, and hand regions are analyzed as representatives of top-down information. We also look into the contribution of these factors by investigating a simple recurrent neural model for egocentric gaze prediction. First, deep features are extracted for all input video frames. Then, a gated recurrent unit is employed to integrate information over time and to predict the next fixation. We also propose an integrated model that combines the recurrent model with several top-down and bottom-up cues. Extensive experiments over multiple datasets reveal that (1) spatial biases are strong in egocentric videos, (2) bottom-up saliency models perform poorly in predicting gaze and underperform spatial biases, (3) deep features perform better compared to traditional features, (4) as opposed to hand regions, the manipulation point is a strongly influential cue for gaze prediction, (5) combining the proposed recurrent model with bottom-up cues, vanishing points and, in particular, the manipulation point results in the best gaze prediction accuracy over egocentric videos, (6) knowledge transfer works best for cases where the tasks or sequences are similar, and (7) task and activity recognition can benefit from gaze prediction. Our findings suggest that (1) there should be more emphasis on hand-object interaction and (2) the egocentric vision community should consider larger datasets including diverse stimuli and more subjects.
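A minimal sketch of the recurrent component described above follows: a GRU integrates precomputed per-frame deep features over time and a linear head predicts the next fixation. The layer sizes and normalization to [0, 1] image coordinates are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the recurrent fixation predictor, assuming precomputed
# per-frame deep features; layer sizes are illustrative.
import torch
import torch.nn as nn

class GazeGRU(nn.Module):
    def __init__(self, feat_dim=512, hidden=256):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # next fixation (x, y)

    def forward(self, feats):             # feats: (batch, time, feat_dim)
        out, _ = self.gru(feats)
        # Predict from the last time step; sigmoid keeps coordinates in [0, 1].
        return torch.sigmoid(self.head(out[:, -1]))

model = GazeGRU()
frames = torch.randn(4, 16, 512)          # 16 frames of deep features
next_fix = model(frames)                   # (4, 2) normalized image coordinates
```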
jufoid=57596
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Spectrally non-contiguous transmissions pose serious transceiver design challenges due to the nonlinear power amplifier (PA). When two or more non-contiguous carriers in close proximity are amplified by the same PA, spurious emissions inside or in the vicinity of the transmitter RF band are created. These spurious emissions may violate emission limits or otherwise compromise network coverage and reliability. Lowering the transmit power is a straightforward remedy, but it reduces the throughput, coverage, and power efficiency of the device. To improve linearity without sacrificing performance, several digital predistortion (DPD) techniques have recently been proposed that target the spurious emissions explicitly. These techniques are designed to minimize the computational and hardware complexity of DPD, thus making them better suited for mobile terminals and other low-cost devices. In this article, these recent advances in DPD for non-contiguous transmission scenarios are discussed, with a focus on mitigating the spurious emissions in the concrete example case of non-contiguous dual-carrier transmission. The techniques are compared to more traditional DPD approaches in terms of their computational and hardware complexities, as well as linearization performance.
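For orientation, the sketch below shows the traditional baseline the article compares against: a memory-polynomial DPD identified by least squares via indirect learning. This is a generic textbook-style sketch with a toy cubic PA model, not the reduced-complexity spur-targeting methods the article discusses.

```python
# Generic memory-polynomial DPD identified by least squares (indirect
# learning architecture); a baseline sketch with a toy PA model.
import numpy as np

def mp_basis(x, K=5, M=3):
    """Odd-order memory-polynomial regressors x[n-m] * |x[n-m]|^(k-1)."""
    N = len(x)
    cols = []
    for m in range(M):
        xm = np.concatenate((np.zeros(m, dtype=complex), x[:N - m]))
        for k in range(1, K + 1, 2):
            cols.append(xm * np.abs(xm) ** (k - 1))
    return np.column_stack(cols)

rng = np.random.default_rng(1)
x = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2)
y = x + 0.1 * x * np.abs(x) ** 2          # toy PA with a cubic nonlinearity

# Indirect learning: fit a postdistorter from PA output back to PA input,
# then copy its coefficients to the predistorter.
Phi = mp_basis(y)
coeffs, *_ = np.linalg.lstsq(Phi, x, rcond=None)
x_pd = mp_basis(x) @ coeffs               # predistorted drive signal for the PA
```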
Research output: Contribution to journal › Article › Scientific › peer-review
This article presents results on how students became engaged and motivated when using digital storytelling in knowledge creation in Finland, Greece and California. The theoretical framework is based on sociocultural theories. Learning is seen as a result of dialogical interactions between people, substances and artefacts. This approach has been used in the creation of the Global Sharing Pedagogy (GSP) model for the empirical study of student levels of engagement in learning twenty-first century skills. This model presents a set of conceptual mediators for student-driven knowledge creation, collaboration, networking and digital literacy. Data from 319 students were collected using follow-up questionnaires after the digital storytelling project. Descriptive statistical methods, correlations, analysis of variance and regression analysis were used. The mediators of the GSP model strongly predicted student motivation and enthusiasm as well as their learning outcomes. The digital storytelling project, using the technological platform Mobile Video Experience (MoViE), was very successful in teaching twenty-first century skills.
Research output: Contribution to journal › Article › Scientific › peer-review
How to measure and train for adaptability has emerged as a priority in military contexts in response to emergent threats and technologies associated with asymmetric warfare. While much research effort has attempted to characterize adaptability in terms of accuracy and response time using traditional executive-function cognitive tests, it remains unclear and undefined how adaptability should be measured, and thus how simulation-based training should be designed to instigate and modulate adaptable behavior and skills. Adaptable reasoning is well exemplified in the rescue effort of Apollo 13 by NASA engineers, who repurposed materials available in the spacecraft to return the astronauts safely to Earth. Military leaders have anecdotally referred to adaptability as 'improvised thinking' that repurposes 'blocks of knowledge' to devise alternative solutions in response to changes in conditions affecting original tasks while maintaining the end-state commander's intent. We review a previous feasibility study that explored the specification of Reusable Modeling Primitives for models and simulation systems, building on the formal methods of Dimensional Analysis and the Design Structure Matrix for Complexity Management. This Dimensional Analysis Conceptual Modeling (DACM) paradigm is rooted in science and engineering critical thinking and is consistent with the stated anecdotal premises, as it facilitates the objective dimensional decomposition of a problem space to guide the corresponding dimensional composition of possible solutions. Arguably, adaptability also concerns the capability to detect, reduce, and overcome contradictions, which we present in an exemplar addressing the contradiction of increased drag due to increased velocity inherent to torpedoes. We propose that the DACM paradigm may be repurposed as a critical thinking framework for teaching the identification of relevant components in a theater of military operations and how the properties of those components may be repurposed to fashion alternative solutions to tasks involving navigation, call-for-fires, line-of-sight cover, weather and atmospheric effect responses, and others.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
With increasing design dimensionality, it becomes more difficult to solve multidisciplinary design optimization (MDO) problems. To reduce the dimensionality of MDO problems, many MDO decomposition strategies have been developed. However, those strategies treat the design problem as a black-box function. In practice, designers usually have certain knowledge of their problem. In this paper, a method leveraging causal graphs and qualitative analysis is developed to reduce the dimensionality of the MDO problem by systematically modeling and incorporating knowledge of the design problem. A causal graph is employed to show the input-output relationships between variables. Qualitative analysis using a design structure matrix (DSM) is carried out to automatically find the variables that can be determined without optimization. According to the importance of the variables, the MDO problem is divided into two sub-problems: the optimization problem with respect to the important variables, and the one with the less important variables. The method is applied to an aircraft concept design problem, and the results show that the new dimension reduction and decomposition method can significantly improve optimization efficiency.
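The core of the qualitative-analysis step can be illustrated as a graph propagation: any variable whose inputs are all already determined can be computed directly and removed from the optimization. The sketch below is a simplified illustration of that idea under a hypothetical causal graph; the paper's DSM-based algorithm is more elaborate.

```python
# Minimal sketch: given a causal graph of input-output relations, variables
# whose inputs are all determined need no optimization. Graph content is
# hypothetical.
from collections import deque

edges = {                        # variable -> variables it determines
    "wing_area": ["lift", "wetted_area"],
    "velocity": ["lift"],
    "aoa": ["lift"],             # free design variable (not fixed below)
    "wetted_area": ["friction_drag"],
    "lift": [],
    "friction_drag": [],
}
inputs = {v: [u for u, outs in edges.items() if v in outs] for v in edges}
known = {"wing_area", "velocity"}            # fixed design inputs

queue, determined = deque(known), set(known)
while queue:                                 # forward propagation over the graph
    u = queue.popleft()
    for v in edges[u]:
        if v not in determined and all(p in determined for p in inputs[v]):
            determined.add(v)
            queue.append(v)

free_vars = set(edges) - determined          # only these enter the optimizer
print(sorted(determined))  # ['friction_drag', 'velocity', 'wetted_area', 'wing_area']
print(sorted(free_vars))   # ['aoa', 'lift']
```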
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
We study distance-based classification of human actions and introduce a new metric learning approach based on logistic discrimination for determining a low-dimensional feature space of increased discrimination power. We argue that for effective distance-based classification, both the optimal projection space and the optimal class representation should be determined. We qualitatively and quantitatively illustrate the superiority of the proposed approach over metric learning approaches employing the class mean for class representation. We also introduce extensions of the proposed metric learning approach that allow for richer class representations and that operate in arbitrary-dimensional Hilbert spaces for non-linear feature extraction and classification. Experimental results show that the performance of the proposed distance-based classification schemes is comparable to (or even better than) that of the Support Vector Machine classifier (in both the linear and kernel cases), which is currently the standard choice for human action recognition.
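To make the general idea concrete, the sketch below jointly learns a linear projection and class representatives under a logistic loss on distance differences, then classifies by the nearest learned representative. This is a loose, generic illustration of logistic-discrimination-style metric learning with learned class representations, not the paper's exact formulation; data and dimensions are synthetic.

```python
# Generic sketch: learn a projection W and class representatives C so each
# sample lies closer to its own representative than to the other class's.
import torch

torch.manual_seed(0)
X = torch.randn(200, 50)                       # action descriptors (toy data)
y = (torch.rand(200) > 0.5).long()             # binary labels

W = torch.randn(50, 10, requires_grad=True)    # projection to a 10-D space
C = torch.randn(2, 10, requires_grad=True)     # learned class representatives
opt = torch.optim.Adam([W, C], lr=1e-2)

for _ in range(300):
    Z = X @ W                                  # project the samples
    d = torch.cdist(Z, C) ** 2                 # squared distances to representatives
    margin = d[torch.arange(200), 1 - y] - d[torch.arange(200), y]
    loss = torch.nn.functional.softplus(-margin).mean()  # logistic loss
    opt.zero_grad(); loss.backward(); opt.step()

# Classify by the nearest learned representative in the projected space.
pred = torch.argmin(torch.cdist(X @ W, C), dim=1)
```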
Research output: Contribution to journal › Article › Scientific › peer-review
Speech separation algorithms face the difficult task of producing a high degree of separation without introducing unwanted artifacts. The time-frequency (T-F) masking technique applies a real-valued (or binary) mask on top of the signal's spectrum to filter out unwanted components. The practical difficulty lies in the mask estimation. Often, using efficient masks engineered for separation performance leads to the presence of unwanted musical-noise artifacts in the separated signal, which lowers the perceptual quality and intelligibility of the output. Microphone arrays have long been studied for the processing of distant speech. This work uses a feed-forward neural network for mapping a microphone array's spatial features into a T-F mask. A Wiener filter is used as the desired mask for training the neural network on speech examples in a simulated setting. The T-F masks predicted by the neural network are combined to obtain an enhanced separation mask that exploits the information on interference between all sources. The final mask is applied to the delay-and-sum beamformer (DSB) output. The algorithm's objective separation capability, in conjunction with the intelligibility of the separated speech, is tested with speech recorded from distant talkers in two rooms at two distances. The results show improvement in an instrumental measure of intelligibility and in frequency-weighted SNR over the complex-valued non-negative matrix factorization (CNMF) source separation approach, spatial sound source separation, and conventional beamforming methods such as the DSB and the minimum variance distortionless response (MVDR) beamformer.
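The basic supervised mapping described above can be sketched as follows: a small feed-forward network regresses a per-bin mask value from spatial features, with a Wiener-filter mask as the training target. Feature extraction and the mask-combination step are omitted, and all dimensions are illustrative assumptions.

```python
# Minimal sketch of the mask-estimation idea: a feed-forward network maps
# per-(time, frequency) spatial features to a real-valued mask, trained
# against Wiener-filter targets from simulated mixtures.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(10, 128), nn.ReLU(),             # 10 spatial features per T-F bin
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1), nn.Sigmoid(),           # mask value in [0, 1]
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

feats = torch.randn(4096, 10)                  # T-F bins (synthetic stand-in)
wiener = torch.rand(4096, 1)                   # Wiener-filter training targets

for _ in range(200):
    loss = nn.functional.mse_loss(net(feats), wiener)
    opt.zero_grad(); loss.backward(); opt.step()

# At run time the predicted mask is applied elementwise to the DSB output
# spectrogram: enhanced = mask * dsb_spectrogram.
```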
Research output: Contribution to journal › Article › Scientific › peer-review
Body area networks (BANs) provide critical data in healthcare monitoring environments, where such monitoring can be performed in a ubiquitous manner using various miniature device technologies. However, a key requirement in supporting the full capacity of a BAN is the efficient distribution, processing and application of the acquired data. Architectures and applications which capitalize on the huge potential of this data provide significant added value to BANs. This paper proposes a service-oriented architecture which integrates the data produced by BANs into a healthcare environment, supporting remote interactions between medical officers to maximise patient care. The dynamic interaction of distributed services in this diverse environment is a key ingredient in the way technology can enhance healthcare. The architecture defines group services which facilitate the control of the dynamic behaviour of services within this heterogeneous environment.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Since the birth of computers and networks, and fuelled by pervasive computing, the Internet of Things and ubiquitous connectivity, the amount of data stored and transmitted has grown exponentially over the years. To meet this demand, new storage solutions are needed. One promising medium is DNA, as it provides numerous advantages, including the ability to store dense information while achieving long-term reliability. However, the question of how the data can be retrieved from a DNA-based archive remains open. In this paper, we aim to address this question by proposing a new storage solution that relies on the properties of bacterial nanonetworks. Our solution allows digitally-encoded DNA to be stored in motility-restricted bacteria, which compose an archival architecture of clusters, and to be later retrieved by engineered motile bacteria whenever reading operations are needed. We conducted extensive simulations in order to determine the reliability of data retrieval from motility-restricted storage clusters placed spatially at different locations. Aiming to assess the feasibility of our solution, we also conducted wet lab experiments that show how bacterial nanonetworks can effectively retrieve a simple message, such as "Hello World", by conjugation with motility-restricted bacteria, and finally mobilize towards a target point for delivery.
Research output: Contribution to journal › Article › Scientific › peer-review
Carbohydrates constitute a structurally and functionally diverse group of biological molecules and macromolecules. In cells they are involved in, e.g., energy storage, signaling, and cell-cell recognition. All of these phenomena take place in atomistic scales, thus atomistic simulation would be the method of choice to explore how carbohydrates function. However, the progress in the field is limited by the lack of appropriate tools for preparing carbohydrate structures and related topology files for the simulation models. Here we present tools that fill this gap. Applications where the tools discussed in this paper are particularly useful include, among others, the preparation of structures for glycolipids, nanocellulose, and glycans linked to glycoproteins. The molecular structures and simulation files generated by the tools are compatible with GROMACS.
Research output: Contribution to journal › Article › Scientific › peer-review
We demonstrate a novel approach for electron-beam lithography (EBL) of periodic nanostructures. This technique can rapidly produce arrays of various metallic and etched nanostructures with line and pitch dimensions approaching the beam spot size. Our approach is based on often-neglected functionality that is inherent in most modern EBL systems. The raster/vector beam exposure system of the EBL software is exploited to produce arrays of pixel-like spots without the need to define coordinates for each spot in the array. Producing large arrays with traditional EBL techniques is cumbersome during pattern design, usually leads to large data files and easily results in system memory overload during patterning. In dots-on-the-fly (DOTF) patterning, instead of specifying the locations of individual spots, a boundary for the array is given and the spacing between spots within the boundary is specified by the beam step size. A designed pattern element thus becomes a container object, with the beam spacing acting as a parameterized location list for an array of spots confined by that container. With the DOTF method, a single pattern element, such as a square, rectangle or circle, can be used to produce a large array containing thousands of spots. In addition to simple arrays of nano-dots, we extend the technique to produce more complex, highly tunable arrays and structures on substrates of silicon, ITO/FTO-coated glass, as well as uncoated fused silica, quartz and sapphire.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
In this paper, we propose an extension of the Extreme Learning Machine (ELM) algorithm for single-hidden-layer feedforward neural network training that incorporates Dropout and DropConnect regularization in its optimization process. We show that both types of regularization lead to the same solution for the calculation of the network output weights, which is adopted by the proposed DropELM network. The proposed algorithm is able to exploit Dropout and DropConnect regularization without computationally intensive iterative weight tuning. We show that the adoption of such a regularization approach can lead to better solutions for the network output weights. We incorporate the proposed regularization approach into several recently proposed ELM algorithms and show that their performance can be enhanced without much additional computational cost.
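For context, the sketch below shows the standard ELM closed-form output-weight solution that regularized variants such as DropELM build on: random hidden weights followed by a single ridge-regularized least-squares solve. The Dropout/DropConnect-derived regularizer of the paper is abstracted here into a plain ridge term; sizes and data are illustrative.

```python
# Standard ELM training in closed form: random hidden layer, then
# beta = (H^T H + lam*I)^{-1} H^T T. The ridge term lam stands in for the
# paper's Dropout/DropConnect-derived regularizer.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))             # training samples
T = np.eye(3)[rng.integers(0, 3, 500)]         # one-hot targets, 3 classes

L, lam = 200, 1e-2                             # hidden neurons, regularization
Win = rng.standard_normal((20, L))             # random, fixed input weights
b = rng.standard_normal(L)                     # random hidden biases
H = np.tanh(X @ Win + b)                       # hidden-layer outputs

# Single linear solve for the output weights; no iterative tuning needed.
beta = np.linalg.solve(H.T @ H + lam * np.eye(L), H.T @ T)
pred = np.argmax(np.tanh(X @ Win + b) @ beta, axis=1)
```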
Research output: Contribution to journal › Article › Scientific › peer-review
We present a dual convolutional neural network (dCNN) architecture for extracting multi-scale features from histological tissue images for the purpose of automated characterization of tissue in digital pathology. The dual structure consists of two identical convolutional neural networks applied to input images at different scales, which are merged together and stacked with two fully connected layers. It has been acknowledged that deep networks can be used to extract higher-order features; therefore, the network output at the final fully connected layer was used as a deep dCNN feature vector. Further, engineered features, shown in previous studies to capture important characteristics of tissue structure and morphology, were integrated into the feature extractor module. The acquired quantitative feature representation can be further utilized to train a discriminative model for classifying tissue types. Machine learning based methods for the detection of regions of interest, or for tissue type classification, will advance the transition to decision support systems and computer-aided diagnosis in digital pathology. Here we apply the proposed feature-augmented dCNN method with supervised learning to detecting cancerous tissue from whole-slide images. The extracted quantitative representation of tissue histology was used to train a logistic regression model with elastic net regularization. The model was able to accurately discriminate cancerous tissue from normal tissue, resulting in a blockwise AUC of 0.97, where the total number of analyzed tissue blocks was approximately 8.3 million, constituting a test set of 75 whole-slide images.
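The two-branch idea can be sketched as follows: two convolutional branches of identical architecture process the same image at two scales, their outputs are concatenated, and two fully connected layers produce the feature vector. All layer sizes here are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of the dual-CNN idea: identical convolutional branches at
# two input scales, merged and passed through two fully connected layers
# whose activations serve as the deep feature vector.
import torch
import torch.nn as nn
import torch.nn.functional as F

def branch():
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class DualCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.b1, self.b2 = branch(), branch()
        self.fc = nn.Sequential(nn.Linear(128, 256), nn.ReLU(),
                                nn.Linear(256, 128), nn.ReLU())

    def forward(self, img):
        small = F.interpolate(img, scale_factor=0.5, mode="bilinear",
                              align_corners=False)
        feats = torch.cat((self.b1(img), self.b2(small)), dim=1)
        return self.fc(feats)                  # deep dCNN feature vector

features = DualCNN()(torch.randn(2, 3, 128, 128))   # (2, 128)
# Engineered features can be concatenated here before the final classifier.
```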
EXT="Nykter, Matti"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Dynamic laser speckle analysis is the non-destructive detection of physical or biological activity through statistical processing of speckle patterns on the surface of diffusely reflecting objects. The method is sensitive to microscopic changes of the surface over time and requires only simple optical means. Advances in computers and 2D optical sensors have driven the development of pointwise algorithms. These rely on the acquisition of a temporal sequence of correlated speckle images and generate activity data as a 2D spatial contour map of the estimate of a given statistical parameter. The most widely used pointwise estimates are intensity-based estimates, which compose each map entry from a time sequence of intensity values taken at one and the same pixel in the acquired speckle images. The accuracy of the pointwise approach is strongly affected by the signal-dependent nature of the speckle data, in which the spread of intensity fluctuations depends on the intensity itself. For non-normalized estimates, this leads to erroneous activity determination under non-uniform distribution of intensity in the laser beam. Normalization of the estimates, however, introduces its own errors. We propose to binarize the acquired speckle images by comparing the intensity values in the temporal sequence for a given spatial point to the mean intensity value estimated for this point, and to evaluate a polar correlation function. The efficiency of this new processing algorithm is verified both by simulation and by experiment.
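The binarization step can be sketched as below: each pixel's temporal intensity sequence is thresholded against that pixel's own temporal mean, which removes the dependence on local beam intensity, and a correlation-type activity estimate is then formed from the binary sequences. For illustration a simple lag-1 agreement measure stands in for the paper's polar correlation function; the data are synthetic.

```python
# Minimal sketch of the binarization idea for pointwise speckle activity
# mapping; a lag-1 agreement measure is used as a stand-in correlation.
import numpy as np

rng = np.random.default_rng(0)
stack = rng.random((64, 128, 128))             # 64 speckle frames, 128x128

mean_img = stack.mean(axis=0)                  # per-pixel temporal mean
binary = stack > mean_img                      # binarized temporal sequences

# Lag-1 agreement per pixel: high agreement means slow decorrelation (low
# activity); low agreement means fast decorrelation (high activity).
agree = (binary[1:] == binary[:-1]).mean(axis=0)
activity_map = 1.0 - agree                     # 2D spatial activity map
```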
jufoid=71479
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The modelling and design of simulated moving bed (SMB) processes is normally done using the true moving bed (TMB) approximation. Several studies show that average values obtained at cyclic steady state for SMB units approach those of the TMB unit at steady state, and that this approximation improves as the number of columns in the SMB increases. However, studies that evaluate this equivalence under dynamic conditions are scarce. The objective of this work is to analyse the transient behaviour of two SMB units, with four and eight columns, and compare the results with those obtained for a TMB unit. An analysis of the impact of operating variables on the process performance parameters is performed. The results show that the TMB/SMB equivalence is valid only for conditions that do not violate the regeneration/separation regions, and that the transient behaviour of the four-column SMB can resemble the TMB more closely.
Research output: Contribution to journal › Article › Scientific › peer-review
Network-assisted device-to-device (D2D) connectivity is a next-generation wireless technology that facilitates direct user contacts in physical proximity while taking advantage of the flexible and ubiquitous control provided by the cellular infrastructure. This novel type of user interaction creates challenges in constructing meaningful proximity-based applications and services that would enjoy high levels of user adoption. Accordingly, enabling such adoption requires a comprehensive understanding of user sociality and trust factors, together with the respective technology enablers for secure D2D communications, especially when cellular control is not available at all times. In this paper, we study an important aspect of secure communications over proximity-based direct links, with a primary emphasis on developing the corresponding proof-of-concept implementation. Our prototype offers rich functionality for the dynamic management of security functions in proximate devices, whenever a new device joins the secure group of users or an existing one leaves it. To evaluate the behavior of the implemented application, we characterize its performance in terms of computation and transmission delays from the user perspective.
EXT="Niutanen, Jussi"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper outlines the design and development process of the Dynamic Audio Motion (Dynamo) concept. The Dynamo audio engine was developed for driving dynamic sound interaction states via a custom-made finite state machine. Further, a generative sound design approach was employed for creating sonic and musical structures. The designed dynamic sound interactions were tested with end-users in an embodied information wall application. During the testing, end-users engaged in a reflective creation process, providing valuable insight into their experiences of using the system. In this paper we present the key questions driving the research, the theoretical background, the research approach, the audio engine development process, and the end-user research activities. The results indicate that dynamic sound interactions supported people's personal, emotional, and creative needs in the design context.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This paper presents an effective model predictive current control scheme for induction machines driven by a three-level neutral point clamped inverter, called variable switching point predictive current control. Although direct, enumeration-based model predictive control (MPC) strategies are very popular in the field of power electronics due to their numerous advantages, such as design simplicity and a straightforward implementation procedure, they carry two major drawbacks: increased computational effort and high ripples on the controlled variables, which limit the applicability of such methods. The high ripples occur because in direct MPC algorithms the actuating variable can only be changed at the beginning of a sampling interval. A possible remedy is to change the applied control input within the sampling interval, and thus to apply it for a shorter time than one sampling interval. However, since such a solution would lead to additional overhead, which is crucial especially for multilevel inverters, a heuristic preselection of the optimal control action is adopted to keep the computational complexity at bay. Experimental results are provided to verify the potential advantages of the proposed strategy.
Research output: Contribution to journal › Article › Scientific › peer-review
The sidelobe level of a base station antenna is one of the important parameters describing the performance of an antenna array. Given a required sidelobe level, we can obtain a set of initial phases, and from these a set of cable lengths. However, a tolerance (or error range) associated with manufacturing techniques introduces an error in each cable length, thereby influencing the sidelobe level. This paper uses probability theory and mathematical statistics to perform a statistical analysis of the reliability of the first sidelobe of the antenna array based on Monte Carlo simulations. We also obtain a distribution curve of the reliability of the first sidelobe versus phase tolerance, which is convenient for practical applications.
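The Monte Carlo procedure can be sketched as follows: random phase errors (corresponding to a cable-length tolerance) are added to a uniform linear array's excitation phases, and the fraction of trials in which the first sidelobe stays below the required level estimates the reliability. The array size, tolerance and threshold below are illustrative, not the paper's values.

```python
# Monte Carlo sketch: reliability of the first sidelobe under random phase
# errors in a uniform linear array.
import numpy as np

N, d = 16, 0.5                                 # elements, spacing in wavelengths
sigma_phi = np.deg2rad(5.0)                    # phase tolerance (std. dev.)
required_sll = -12.0                           # required sidelobe level, dB
theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)
n = np.arange(N)

rng = np.random.default_rng(0)
trials, ok = 5000, 0
for _ in range(trials):
    phase_err = rng.normal(0.0, sigma_phi, N)
    af = np.abs(np.exp(1j * (2 * np.pi * d * np.outer(np.sin(theta), n)
                             + phase_err)).sum(axis=1))
    af_db = 20 * np.log10(af / af.max())
    # First sidelobe: highest level outside the (approximate) null-to-null
    # main-lobe region of the error-free broadside array.
    main = np.abs(theta) < np.arcsin(1.0 / (N * d))
    ok += af_db[~main].max() <= required_sll
print("reliability =", ok / trials)            # P(first sidelobe below limit)
```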
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
This study examines the effect of a typical paint-baking process on the properties of press-hardened boron steels. The bake hardening response of four 22MnB5 steels with different production histories and two other boron steels of 30MnB5 and 34MnB5 type was analyzed. In particular, the effect of steel carbon content and prior austenite grain size on the strength of the bake-hardening-treated steels was investigated. Press-hardened steels showed a relatively strong bake hardening effect, 80–160 MPa, in terms of yield strength. In addition, a clear decrease in ultimate tensile strength, 30–150 MPa, was observed due to baking. The changes in tensile strength showed a dependency on the carbon content of the steel: higher carbon content generally led to a larger decrease in tensile strength. A smaller prior austenite grain size resulted in a higher increase in yield strength, except for the micro-alloyed 34MnB5. Transmission electron microscopy analysis carried out on the 34MnB5 revealed niobium-rich mixture carbides of (Nb, Ti)C, which most likely influenced its different bake hardening response. The present results indicate that the bake hardening response of press-hardened steels depends on both prior austenite grain size and carbon content, but is also affected by other alloying elements. The observed correlation between prior austenite grain size and bake hardening response can be used to optimize the production of the standard grades 22MnB5 and 30MnB5. In addition, our study suggests that the baking process improves the post-uniform elongation and ductile fracture behavior of 34MnB5, but does not significantly influence the ductile fracture mechanisms of 22MnB5 and 30MnB5, which represent lower strength levels.
Research output: Contribution to journal › Article › Scientific › peer-review
We present a detailed study of the influence of sonication energy and surfactant type on the electrical conductivity of nanocellulose-carbon nanotube (NFC-CNT) nanocomposite films. The study was made using a minimal number of processing steps, chemicals and materials, to optimize the conductivity properties of free-standing flexible nanocomposite films. In general, the NFC-CNT film preparation process is sensitive to the step in which the CNTs are dispersed into a solution with NFC. In our study, we used sonication in the presence of surfactant to carry out this dispersion step. In the final phase, the films were prepared from the dispersion using centrifugal cast molding. The electrical conductivity of the solid films was analyzed using a four-probe measuring technique. We also characterized how the conductivity properties were enhanced when the surfactant was removed from the nanocomposite films; to our knowledge this has not been reported previously. The results of our study indicate that optimizing the surfactant type clearly affects the formation of free-standing films, and that the effect of sonication energy on conductivity is significant. Using a relatively low 16 wt.% concentration of multiwall carbon nanotubes, we achieved a conductivity of 8.4 S/cm, the highest value published for nanocellulose-CNT films in the literature to date. This was achieved by optimizing the surfactant type and the sonication energy per dry mass. Additionally, to further increase the conductivity, we defined a preparation step to remove the used surfactant from the final nanocomposite structure.
INT=mol,"Räty, Anna"
EXT="Harlin, Ali"
Research output: Contribution to journal › Article › Scientific › peer-review
Due to their unconstrained mobility and capability to carry goods or equipment, unmanned aerial vehicles (UAVs), or drones, are considered part of the fifth-generation (5G) wireless networks and have become attractive candidates to carry a base station (BS). As 5G requirements apply to a broad range of use cases, it is of particular importance to satisfy them during spontaneous and temporary events, such as a marathon or a rural fair. To support these scenarios, mobile operators need to deploy significant radio access resources quickly and on demand. Accordingly, focusing on 5G cellular networks, we investigate the use of drone-assisted communication, where a drone is equipped with a millimeter-wave (mmWave) BS. Being a key technology for 5G, mmWave is able to facilitate the provisioning of the desired per-user data rates as drones arrive at the service area whenever needed. Therefore, in order to maximize the benefits of mmWave-drone-BS utilization, this paper proposes a methodology for its optimized deployment, which delivers the optimal height, coordinates, and coverage radius of the drone-BS by taking into account human-body blockage effects over a mmWave-specific channel model. Moreover, our methodology is able to maximize the number of offloaded users by satisfying the target signal quality at the cell edge and considering the maximum service capacity of the drone-BS. We observe that the mmWave-specific features are extremely important to consider when targeting efficient drone-BS utilization and should thus be carefully incorporated into the analysis.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Mission-critical machine-type communication (mcMTC) is starting to play a central role in the industrial Internet of Things ecosystem and has the potential to create high-revenue businesses, including intelligent transportation systems, energy/smart grid control, public safety services, and high-end wearable applications. Consequently, in the fifth generation (5G) of wireless networks, mcMTC has imposed a wide range of requirements on the enabling technology, such as low power, high reliability, and low latency connectivity. Recognizing these challenges, the recent and ongoing releases of LTE systems incorporate support for low-cost and enhanced coverage, reduced latency, and high reliability for devices at varying levels of mobility. In this article, we examine the effects of heterogeneous user and device mobility, produced by a mixture of various mobility patterns, on the performance of mcMTC across three representative scenarios within a multi-connectivity 5G network. We establish that the availability of alternative connectivity options, such as D2D links and drone-assisted access, helps meet the requirements of mcMTC applications in a wide range of scenarios, including industrial automation, vehicular connectivity, and urban communications. In particular, we confirm improvements of up to 40 percent in link availability and reliability with the use of proximate connections on top of the cellular-only baseline.
Research output: Contribution to journal › Article › Scientific › peer-review
We analyze the changes in upper and lower limb pulse transit times (PTT) caused by peripheral artery disease (PAD) and its treatment with percutaneous transluminal angioplasty (PTA) of the superficial femoral artery. PTTs were extracted from the photoplethysmograms (PPG) recorded from an index finger and the 2nd toes. PTTs were defined between the R-peaks of the ECG and different reference points of the PPG: the foot and peak points, the maxima of the 1st and 2nd derivatives, and the point given by the intersecting tangents method. The PTTs between the toe and finger pulses were also analyzed. Our sample consists of 24 subjects examined before and after the PTA and at a 1-month follow-up visit. In addition, 28 controls older than 65 years with a normal ankle-to-brachial pressure index (ABI) and no history of cardiovascular disease, as well as 21 younger subjects, were examined. The differences between the groups and between the pre- and post-treatment phases were analyzed by means of non-parametric statistical tests. The changes in the PTTs of the upper limb and the non-treated lower limb were negligible. The agreement with the reference values, ABI and toe pressures, was studied by kappa analysis, resulting in kappa values of 0.33–0.91. Differences in PTTs were found between the pre-treatment state of the treated limb, the post-treatment state and the follow-up visit, as well as between the pre-treatment state and controls. When patients' age and systolic blood pressure were taken into consideration, calculating the lower limb PTT from the peak point turned out to be feasible for finding markers of PAD and monitoring post-treatment vascular remodelling.
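One of the PTT definitions above (R-peak to the maximum of the PPG first derivative) can be sketched as follows. The signals, sampling rate and beat timing here are synthetic stand-ins; R-peak detection is assumed to have been done already.

```python
# Minimal sketch of one PTT definition: time from each ECG R-peak to the
# maximum of the PPG first derivative within the same beat.
import numpy as np

fs = 500.0                                     # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) ** 2         # toy PPG with a 1.2 Hz pulse
r_peaks = np.arange(int(0.1 * fs), len(t), int(fs / 1.2))  # assumed R-peaks

dppg = np.gradient(ppg, 1 / fs)                # first derivative of the PPG
ptts = []
for r in r_peaks[:-1]:
    window = slice(r, r + int(0.5 * fs))       # search 500 ms after the R-peak
    ptts.append(np.argmax(dppg[window]) / fs)  # seconds from the R-peak
print("mean PTT: %.3f s" % np.mean(ptts))
```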
Research output: Contribution to journal › Article › Scientific › peer-review
Pseudo-cylindrical panoramas represent the data distribution of spherical coordinates closely in the two-dimensional domain due to the equidistant sampling of the 360-degree scene. Therefore, unlike cylindrical projections, they do not suffer from overstretching in the polar areas. However, due to the non-rectangular shape of the effective picture area and the sharp edges at its borders, the compression performance is inefficient. In this paper, we propose two methods which improve the compression performance of both intra-frame and inter-frame coding of pseudo-cylindrical panoramic content while reducing coding artifacts. In the intra-frame coding method, border edges are smoothed by modifying the content of the image in the non-effective picture area, which is cropped at the receiver side. In the inter-frame coding method, exploiting the 360-degree property of the content, the non-effective picture area of reference frames at one border is filled with the content of the effective picture area from the opposite border, enhancing the performance of motion compensation.
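The inter-frame padding idea can be sketched per image row: non-effective pixels outside the effective segment are filled by wrapping the segment's content around the 360-degree seam, so motion search does not break at the border. This is a simplified illustration assuming a contiguous effective segment per row and a given effective-area mask, not the paper's codec integration.

```python
# Minimal sketch of border filling for inter-frame coding of
# pseudo-cylindrical panoramas (one luma plane, mask assumed given).
import numpy as np

def fill_from_opposite_border(ref, effective_mask):
    """ref: (H, W) reference frame; effective_mask: True inside picture area."""
    filled = ref.copy()
    H, W = ref.shape
    for row in range(H):
        eff = np.flatnonzero(effective_mask[row])
        if eff.size == 0:
            continue
        seg = ref[row, eff[0]:eff[-1] + 1]        # effective content of this row
        idx = (np.arange(W) - eff[0]) % seg.size  # wrap around the 360° seam
        filled[row] = seg[idx]                    # non-effective area now holds
    return filled                                 # content from the opposite border
```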
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Purpose: To guide ultrasound-driven prostate photodynamic therapy using information from MRI-based treatment planning. Methods: Robust point matching (RPM) and thin-plate splines (TPS) are used to solve correspondences and to map optimally positioned landmarks from MR images to transrectal ultrasound (TRUS) images. The algorithm uses a reduced number of anatomical markers that are initialized on the images. Results: Both phantom and patient data were used to evaluate the precision and robustness of the method. The mean registration error (±standard deviation) was 2.18 ± 0.25 mm and 1.55 ± 0.31 mm for the patient prostate and urethra, respectively. Repeated tests with different marker initialization conditions showed that the quality of registration was influenced neither by the number of markers nor by the human observer. Conclusion: This method allows for satisfactorily accurate and robust non-rigid registration of MRI and TRUS and provides practitioners with substantial help in mapping treatment planning from pre-operative MRI to interventional TRUS.
Research output: Contribution to journal › Article › Scientific › peer-review
We present a systematic study of the electronic, geometric, and magnetic properties of the actinide dioxides UO2, PuO2, AmO2, U0.5Pu0.5O2, U0.5Am0.5O2 and Pu0.5Am0.5O2. For UO2, PuO2 and AmO2, both density functional and hybrid density functional theory (DFT and HDFT) have been used. The fractions of exact Hartree-Fock (HF) exchange chosen were 25% and 40% for the hybrid density functional. For U0.5Pu0.5O2, U0.5Am0.5O2 and Pu0.5Am0.5O2, only HDFT with 40% exact HF exchange was used. Each compound has been studied in the nonmagnetic, ferromagnetic and anti-ferromagnetic configurations, with and without spin-orbit coupling (SOC). The lattice parameters, magnetic structures, bulk moduli, band gaps and densities of states have been computed and compared to available experimental data and other theoretical results. Pure DFT fails to provide a satisfactory qualitative description of the electronic and magnetic structures of the actinide dioxides. On the other hand, HDFT performs very well in the prediction and description of the properties of the actinide dioxides. Our total energy calculations clearly indicate that the ground-state structures are anti-ferromagnetic for all actinide dioxides considered here. The lattice constants and the band gaps expand with an increase of HF exchange in HDFT. The influence of SOC is found to be significant.
Research output: Contribution to journal › Article › Scientific › peer-review
All-encompassing digitalization and the digital skills gap pressure the current school system to change. Accordingly, to 'digi-jump', the Finnish National Curriculum 2014 (FNC-2014) adds programming to K-12 math. However, we claim that the anticipated addition remains too vague and subtle. Instead, we should take into account the education recommendations set by computer science organizations, such as the ACM, and define clear learning targets for programming. Correspondingly, the whole math syllabus should be critically reviewed in the light of these changes and the feedback collected from software professionals and educators. These findings reveal an imbalance between supply and demand, i.e., what is over-taught versus under-taught, from the point of view of professional requirements. Critics claim an unnecessary surplus of calculus and differential equations, i.e., continuous mathematics. In contrast, the emphasis should shift more towards algorithms and data structures, flexibility in handling multiple data representations, and logic: in summary, discrete mathematics.
EXT="Valmari, Antti"
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Experiencing stress, disturbing interruptions, loss of the ability to concentrate, hurry and challenges in meeting tight deadlines are very common in working life. At the same time, while a variety of digital communication channels such as instant messaging, video calls and social networking sites are becoming more popular in working life, email remains an intensively utilized work communication medium. The goal of this empirical field study, which analyzed the daily desktop computing of knowledge workers, was to analyze the association between the share of work time spent on email and the subjectively experienced quality of work performance. It was found that while intensive email use does not impair subjectively experienced productivity, it may harm the ability to concentrate, increase forgetfulness and reduce the ability to solve problems effectively at work.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The millimeter-wave (mmWave) bands and other high frequencies above 6 GHz have emerged as a central component of fifth generation cellular standards to deliver high data rates and ultra-low latency. A key challenge in these bands is blockage from obstacles, including the human body. In addition to reduced coverage, blockage can result in highly intermittent links where the signal quality varies significantly with the motion of obstacles in the environment. The blockages have widespread consequences throughout the protocol stack, including beam tracking, link adaptation, cell selection, handover, and congestion control. Accurately modeling these blockage dynamics is therefore critical for the development and evaluation of potential mmWave systems. In this work, we present a novel spatial dynamic channel sounding system based on phased array transmitters and receivers operating at 60 GHz. Importantly, the sounder can measure multiple directions rapidly at high speed to provide detailed spatial dynamic measurements of complex scenarios. The system is demonstrated in an indoor home-entertainment-type setting with multiple moving blockers. Preliminary results from analyzing these data are presented, with a discussion of the open issues in developing statistical dynamic models.
Research output: Contribution to journal › Article › Scientific › peer-review
Dwelling design needs to consider multiple objectives and uncertainties to achieve effective and robust performance. A multi-objective robust optimisation method is outlined and then applied with the aim of optimising a one-story archetype in Delhi towards a healthy low-energy design. EnergyPlus is used to model a sample of selected design and uncertainty inputs. Sensitivity analysis identifies significant parameters, and a meta-model is constructed to replicate the input-output relationships. The meta-model is employed in a hybrid multi-objective optimisation algorithm that accounts for uncertainty. The results demonstrate the complexities of simultaneously achieving low energy consumption and healthy indoor environmental quality.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
With increasing design dimensionality, it becomes more difficult to solve multidisciplinary design optimization (MDO) problems. Many MDO decomposition strategies have been developed to reduce the dimensionality. Those strategies treat the design problem as a black-box function. However, practitioners usually have certain knowledge of their problem. In this paper, a method leveraging causal graphs and qualitative analysis is developed to reduce the dimensionality of the MDO problem by systematically modeling and incorporating the knowledge about the design problem into the optimization. A causal graph is created to show the input-output relationships between variables. A qualitative analysis algorithm using a design structure matrix (DSM) is developed to automatically find the variables whose values can be determined without resorting to optimization. According to the impact of the variables, an MDO problem is divided into two subproblems: the optimization problem with respect to the most important variables, and the other with variables of lower importance. The method is used to solve a power converter design problem and an aircraft concept design problem, and the results show that by incorporating knowledge in the form of causal relationships, the optimization efficiency is significantly improved.
Research output: Contribution to journal › Article › Scientific › peer-review
We study data transfer links that would enable the development of low-cost technologies for increasing the safety of general aviation (GA). The solution proposed here is to supplement the existing cmWave solutions with mmWave cellular signals in order to better handle interference and to reach lower outage probabilities and higher throughputs. Moreover, cellular solutions have the advantage of re-using existing or planned infrastructure, and thus they are expected to require only minor additional investments. Our article aims both at shedding some light on the terminology in the GA field and at proposing viable future data-link solutions for GA. We also survey the existing solutions, challenges, and opportunities related to wireless communication links in GA, and we present several case studies on the achievable outage probabilities and throughputs in rural and urban scenarios for low-altitude GA vehicles. We conclude that supplementing the existing cmWave wireless links with mmWave wireless connections is a workable solution for affordable communication links for low-altitude GA aircraft.
Research output: Contribution to journal › Article › Scientific › peer-review
We investigate the decidability of the emptiness problem for three classes of distributed automata. These devices operate on finite directed graphs, acting as networks of identical finite-state machines that communicate in an infinite sequence of synchronous rounds. The problem is shown to be decidable in LOGSPACE for a class of forgetful automata, where the nodes see the messages received from their neighbors but cannot remember their own state. When restricted to the appropriate families of graphs, these forgetful automata are equivalent to classical finite word automata, but strictly more expressive than finite tree automata. On the other hand, we also show that the emptiness problem is undecidable in general. This already holds for two heavily restricted classes of distributed automata: those that reject immediately if they receive more than one message per round, and those whose state diagram must be acyclic except for self-loops. Additionally, to demonstrate the flexibility of distributed automata in simulating different models of computation, we provide a characterization of constraint satisfaction problems by identifying a class of automata with exactly the same computational power.
Research output: Contribution to journal › Article › Scientific › peer-review
Green communication and energy saving have become critical issues in modern wireless communication systems. The concepts of energy harvesting and energy transfer have recently been receiving much attention in the academic research field. In this paper, we study energy cooperation problems based on a save-then-transmit protocol and propose two energy cooperation schemes for different system models: a two-node communication model and a three-node relay communication model. In both models, none of the nodes transmitting information has a fixed energy supply; they gain energy only via wireless energy harvesting from nature. These nodes also follow a save-then-transmit protocol: in each timeslot, a fraction of time (referred to as the save-ratio) is devoted exclusively to energy harvesting, while the remaining fraction is used for data transmission. In order to maximize the system throughput, an energy transfer mechanism is introduced in our schemes, i.e., some nodes are permitted to share their harvested energy with other nodes by means of wireless energy transfer. Simulation results demonstrate that our proposed schemes outperform both the schemes with a fixed save-ratio of one half and the schemes without energy transfer in terms of throughput, and also characterize the dependence of the system throughput, transferred energy, and save-ratio on the energy harvesting rate.
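The save-then-transmit trade-off for a single node can be sketched numerically: a fraction rho of the slot harvests energy, the rest transmits using the harvested energy, and the throughput-optimal save-ratio is found by a grid search. The channel and harvesting parameters below are illustrative assumptions, not the paper's system model.

```python
# Minimal sketch of the save-then-transmit trade-off for one node.
import numpy as np

harvest_rate = 10e-3        # harvested power, W (illustrative)
gain_over_noise = 1e4       # channel gain / noise power (illustrative)
T = 1.0                     # slot length, s

rho = np.linspace(0.01, 0.99, 99)
# Energy rho*T*harvest_rate is spent over (1 - rho)*T seconds of transmission.
tx_power = harvest_rate * rho * T / ((1 - rho) * T)
throughput = (1 - rho) * np.log2(1 + gain_over_noise * tx_power)

best = rho[np.argmax(throughput)]
print("optimal save-ratio: %.2f, throughput: %.2f bit/s/Hz"
      % (best, throughput.max()))
```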
Research output: Contribution to journal › Article › Scientific › peer-review
In this paper, we maximize the energy efficiency (EE) of full-duplex (FD) two-way relay (TWR) systems under non-ideal power amplifiers (PAs) and non-negligible transmission-dependent circuit power. We start with the case where only the relay operates in full duplex and two timeslots are required for TWR. We then extend this to the advanced case, where the relay and the two nodes all operate in full duplex and accomplish TWR in a single timeslot. In both cases, we establish the intrinsic connections between the optimal transmit powers and durations, based on which the original non-convex EE maximization can be convexified and optimally solved. Simulations show the superiority of FD-TWR in terms of EE, especially when traffic demand is high. The simulations also reveal that the maximum EE of FD-TWR is more sensitive to the PA efficiency than it is to self-cancellation. The fully FD design of FD-TWR is susceptible to traffic imbalance, while the design with only the relay operating in the FD mode exhibits strong tolerance.
Research output: Contribution to journal › Article › Scientific › peer-review
While the Internet of Things (IoT) has made significant progress in supporting its individual applications, there are many massive machine-type communication (MMTC) scenarios in which the performance offered by any single radio access technology (RAT) available today may be insufficient. To address these use cases, we introduce the concept of multi-RAT MMTC (MR-MMTC), which implies the availability and utilization of several RATs within a single IoT device. We begin by offering insights into which use cases could benefit and what the key challenges for MR-MMTC implementation are. We continue by discussing potential technical solutions and employ our own prototype of an MR-MMTC device capable of using the LoRaWAN and NB-IoT RATs to characterize its energy-centric performance across the alternative feasible MR-MMTC implementation strategies. The obtained results reveal that the increased flexibility delivered by MR-MMTC permits the selection of more energy-efficient RAT options. IoT devices capable of utilizing multiple radios simultaneously can thus improve their energy utilization by leveraging the synergy between RATs. The novel vision of MR-MMTC outlined in this work could be impactful across multiple fields and calls for cross-community research efforts to adequately design, implement, and deploy future multi-RAT MMTC solutions.
Research output: Contribution to journal › Article › Scientific › peer-review
In addition to high-precision closed-loop control performance, energy efficiency is a vital characteristic of field-robotic hydraulic systems, as the energy source(s) must be carried on board in limited space. This study proposes an energy-efficient and high-precision closed-loop controller for highly nonlinear hydraulic robotic manipulators. The proposed method is twofold: 1) energy consumption is reduced by using a separate meter-in separate meter-out (SMISMO) control set-up, enabling independent metering (pressure control) of each chamber in the hydraulic actuators; 2) a novel subsystem-dynamics-based and modular controller is designed for the system actuators and integrated into the previously designed state-of-the-art controller for multiple degrees-of-freedom (n-DOF) manipulators. The stability of the overall controller is rigorously proven. Comparative experiments with a three-DOF redundant hydraulic robotic manipulator (with a payload of 475 kg) demonstrate that: 1) it is possible to achieve the triple objective of high-precision piston position, piston force and chamber pressure tracking for the hydraulic actuators; 2) in relation to previous SMISMO control methods, unprecedented motion and chamber pressure tracking performance is reported; and 3) in comparison to the state-of-the-art motion tracking controller with a conventional energy-inefficient servovalve control, the actuators' energy consumption is reduced by 45% without noticeable deterioration of motion control (position tracking).
Research output: Contribution to journal › Article › Scientific › peer-review
Well-being has emerged as the new 'green' for buildings, thought to reward occupiers, property owners, developers and other concerned actors. The new assessment tools for well-being are seen as the next step after the currently widely used 'traditional' sustainability tools. However, this study was inspired by a global lack of knowledge about these tools, their compatibility and their general adoption in the market due to the newness of the topic. In this research, we aim to develop a deeper understanding of the well-being and social sustainability perspective as an innovation in relation to the built environment. The study consists of a literature review, a desktop study of sustainability and well-being rating tools, and qualitative interview-based research on stakeholders' positions regarding the adoption of the WELL certificate in the market. Lastly, conclusions are drawn based on the results of the empirical and desktop studies. The results of this research benefit the scientific community by providing a better understanding of the well-being approach in the market and point out areas of interest for further research. Practitioners can benefit from a deeper understanding of the market adoption of well-being assessment tools and the development of the sustainability concept in the built environment.
EXT="Danivska, Vitalija"
Research output: Contribution to journal › Article › Scientific › peer-review
Novel composite fading models were recently proposed based on inverse gamma distributed shadowing conditions. These models were extensively shown to provide remarkable modeling of the simultaneous occurrence of multipath fading and shadowing phenomena in emerging wireless scenarios such as cellular, off-body and vehicle-to-vehicle communications. Furthermore, the algebraic representation of these models is rather tractable, which renders them convenient to handle both analytically and numerically. Based on this, the present contribution analyzes the ergodic capacity over the recently proposed κ-μ inverse gamma composite fading channels, which were shown to characterize excellently the multipath fading and shadowing encountered in line-of-sight communication scenarios, including realistic vehicular communications. Novel analytic expressions are derived and subsequently used in the analysis of the corresponding system performance. In this context, the offered results are compared with respective results for conventional fading conditions, which leads to numerous insights into the effect of multipath fading and shadowing severity on the achieved capacity levels. It is expected that these results will be useful in the design of timely and demanding wireless technologies such as wearable, cellular and inter-vehicular communications.
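The analytic results can be cross-checked by simulation: κ-μ fading power generated from μ Gaussian clusters with a dominant component, multiplied by unit-mean inverse-gamma shadowing, then the ergodic capacity estimated as E[log2(1+γ)]. This is a generic Monte Carlo sketch with illustrative parameter values, not the paper's derivation.

```python
# Monte Carlo sketch of ergodic capacity over kappa-mu / inverse gamma
# composite fading; parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
kappa, mu, ms = 2.0, 2, 3.0        # fading parameters; shadowing shape (ms > 1)
snr_db = 10.0
n = 500_000

sigma2 = 1.0 / (2 * mu * (1 + kappa))          # scattered power per component
p = np.sqrt(kappa / ((1 + kappa) * mu))        # dominant amplitude per cluster
x = rng.normal(p, np.sqrt(sigma2), (n, mu))
y = rng.normal(0.0, np.sqrt(sigma2), (n, mu))
fading = (x**2 + y**2).sum(axis=1)             # unit-mean kappa-mu power

shadow = (ms - 1) / rng.gamma(ms, 1.0, n)      # unit-mean inverse gamma

gamma_inst = 10 ** (snr_db / 10) * fading * shadow
print("ergodic capacity: %.3f bit/s/Hz" % np.mean(np.log2(1 + gamma_inst)))
```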
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
The present contribution analyzes the performance of non-orthogonal multiple access (NOMA)-based user cooperation with simultaneous wireless information and power transfer (SWIPT). In particular, we consider a two-user NOMA-based cooperative SWIPT scenario, in which the near user acts as a SWIPT-enabled relay that assists the far user. In this context, we derive analytic expressions for the pairwise error probability (PEP) of both users, assuming both the amplify-and-forward (AF) and decode-and-forward (DF) relay protocols. The derived expressions are given in closed form and have a tractable algebraic representation, which renders them convenient to handle both analytically and numerically. In addition, we derive a simple asymptotic closed-form expression for the PEP in the high signal-to-noise ratio (SNR) regime, which provides useful insights into the impact of the involved parameters on the overall system performance. Capitalizing on this, we subsequently quantify the maximum achievable diversity order of both users. Numerical and simulation results corroborate the derived analytic expressions. Furthermore, the offered results provide interesting insights into the error rate performance of each user, which are expected to be useful in future designs and deployments of NOMA-based SWIPT systems.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Purpose - Punching of electrical sheets impairs the insulation and creates random galvanic contacts between the edges of the sheets. The purpose of this paper is to model the random galvanic contacts at the stator edges of a 37 kW induction machine and estimate the additional losses due to these contacts. Design/methodology/approach - The presence of surface current at the edges of the sheets causes a discontinuity in the tangential component of the magnetic field. A surface boundary-layer model based on this concept is implemented to model the galvanic contacts at the edges of the sheets. Finite element analysis based on the magnetic vector potential was performed, and a theoretical statistical study of the random conductivity at the stator edge was carried out using a brute-force method. Findings - The finite element analysis validates the interlaminar current when galvanic contacts are present at the edges of the electrical sheets. The case studies show that the rotor and stator losses increase with the thickness of the contacts. The statistical studies show that the mean value of the total electromagnetic loss was increased by 7.7 percent due to random contacts at the edges of the sheets. Originality/value - A novel approach for modeling the galvanic contacts at the stator edges of an induction machine is discussed in this paper. The hypothesis of interlaminar current due to galvanic contacts is also validated using finite element simulation.