Tampere University of Technology

TUTCRIS Research Portal

Optimizing spatial and temporal reuse in wireless networks by decentralized partially observable Markov decision processes

Research output: Contribution to journal › Article › Scientific › peer-review

Details

Original language: English
Article number: 6482133
Pages (from-to): 866-879
Number of pages: 14
Journal: IEEE Transactions on Mobile Computing
Volume: 13
Issue number: 4
DOIs
Publication status: Published - Apr 2014
Publication type: A1 Journal article-refereed

Abstract

The performance of medium access control (MAC) depends on both the spatial locations and the traffic patterns of wireless agents. In contrast to conventional MAC policies, we propose a MAC solution that adapts to the prevailing spatial and temporal opportunities. The proposed solution is based on a decentralized partially observable Markov decision process (DEC-POMDP), which is able to handle wireless network dynamics described by a Markov model. A DEC-POMDP takes both sensor noise and partial observations into account, and yields MAC policies that are optimal for the network dynamics model. The DEC-POMDP MAC policies can be optimized for a freely chosen goal, such as maximal throughput or minimal latency, with the same algorithm. We make approximate optimization efficient by exploiting problem structure: the policies are optimized by a factored DEC-POMDP method, yielding highly compact state machine representations for MAC policies. Experiments show that our approach yields higher throughput and lower latency than CSMA/CA-based comparison methods adapted to the current wireless network configuration.
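The abstract notes that the optimized MAC policies take the form of compact state machine (finite state controller) representations. As a rough illustration of what executing such a policy could look like at a single agent, the Python sketch below implements a small stochastic finite state controller: each controller node picks an action stochastically, and the observation received afterwards drives a transition to the next node. The node, action, and observation names and the probability tables are hypothetical placeholders, not values from the paper.

```python
import random

# Minimal sketch of a finite state controller (FSC) MAC policy for one agent.
# Assumptions: two illustrative actions and three illustrative observations.
ACTIONS = ("transmit", "wait")
OBSERVATIONS = ("ack", "collision", "idle")

class FSCPolicy:
    def __init__(self, action_probs, transition_probs, start_node=0):
        # action_probs[node][action]          = P(action | node)
        # transition_probs[node][obs][next]   = P(next node | node, observation)
        self.action_probs = action_probs
        self.transition_probs = transition_probs
        self.node = start_node

    def act(self):
        # Sample an action from the current controller node.
        probs = self.action_probs[self.node]
        return random.choices(ACTIONS, weights=[probs[a] for a in ACTIONS])[0]

    def update(self, obs):
        # Transition to the next controller node given the local observation.
        probs = self.transition_probs[self.node][obs]
        nodes = list(probs.keys())
        self.node = random.choices(nodes, weights=[probs[n] for n in nodes])[0]

# Example two-node controller: node 0 transmits aggressively,
# node 1 backs off after a collision is observed (illustrative numbers).
policy = FSCPolicy(
    action_probs={
        0: {"transmit": 0.9, "wait": 0.1},
        1: {"transmit": 0.2, "wait": 0.8},
    },
    transition_probs={
        0: {"ack": {0: 1.0}, "collision": {1: 1.0}, "idle": {0: 1.0}},
        1: {"ack": {0: 1.0}, "collision": {1: 1.0}, "idle": {0: 0.5, 1: 0.5}},
    },
)

for _ in range(3):
    action = policy.act()
    obs = random.choice(OBSERVATIONS)  # stand-in for real channel feedback
    policy.update(obs)
    print(action, obs, policy.node)
```

In the paper's setting such controllers would be optimized jointly for all agents by the factored DEC-POMDP method; the sketch only shows how a resulting compact policy could be executed locally.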

Keywords

  • decentralized POMDP, medium access control, multi-agent planning, spatial reuse, wireless network