TUTCRIS - Tampereen teknillinen yliopisto


Optimizing spatial and temporal reuse in wireless networks by decentralized partially observable Markov decision processes

Research output: peer-reviewed

Details

Original language: English
Article number: 6482133
Pages: 866-879
Number of pages: 14
Journal: IEEE Transactions on Mobile Computing
Volume: 13
Issue: 4
DOI - permanent links
Status: Published - April 2014
OKM publication type: A1 Original article

Abstract

The performance of medium access control (MAC) depends on both spatial locations and traffic patterns of wireless agents. In contrast to conventional MAC policies, we propose a MAC solution that adapts to the prevailing spatial and temporal opportunities. The proposed solution is based on a decentralized partially observable Markov decision process (DEC-POMDP), which can handle wireless network dynamics described by a Markov model. A DEC-POMDP takes both sensor noise and partial observations into account, and yields MAC policies that are optimal for the network dynamics model. The DEC-POMDP MAC policies can be optimized for a freely chosen goal, such as maximal throughput or minimal latency, with the same algorithm. We make approximate optimization efficient by exploiting problem structure: the policies are optimized by a factored DEC-POMDP method, yielding highly compact state machine representations for MAC policies. Experiments show that our approach yields higher throughput and lower latency than CSMA/CA-based comparison methods adapted to the current wireless network configuration.
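
As a rough illustration of what a compact state-machine MAC policy looks like at run time, the sketch below executes a hand-written two-node finite-state controller at a single agent with noisy carrier sensing. The controller structure, observation model, and probabilities here are illustrative assumptions for the sketch only; the policies discussed in the abstract are produced by factored DEC-POMDP optimization, not written by hand.

```python
import random

# Actions and observations assumed for this sketch.
TRANSMIT, IDLE = "transmit", "idle"
BUSY, FREE = "busy", "free"          # noisy carrier-sense observations


class FSCPolicy:
    """Compact state-machine MAC policy: each controller node maps to an
    action, and the next node depends on the observation received."""

    def __init__(self, action_of, next_node):
        self.action_of = action_of    # node -> action
        self.next_node = next_node    # (node, observation) -> node
        self.node = 0                 # current controller node

    def act(self):
        return self.action_of[self.node]

    def update(self, observation):
        self.node = self.next_node[(self.node, observation)]


def noisy_carrier_sense(channel_busy, p_error=0.1):
    """Imperfect sensing: flips the true channel state with probability p_error."""
    truth = BUSY if channel_busy else FREE
    if random.random() < p_error:
        return BUSY if truth == FREE else FREE
    return truth


# A tiny two-node controller: node 0 defers, node 1 transmits.
policy = FSCPolicy(
    action_of={0: IDLE, 1: TRANSMIT},
    next_node={
        (0, FREE): 1, (0, BUSY): 0,   # advance to the transmit node when the channel seems free
        (1, FREE): 1, (1, BUSY): 0,   # back off when the channel appears busy
    },
)

# One decision epoch per loop iteration; each agent would run its own controller.
for step in range(5):
    action = policy.act()
    channel_busy = random.random() < 0.3          # stand-in for other agents' traffic
    obs = noisy_carrier_sense(channel_busy)
    policy.update(obs)
    print(f"step {step}: action={action}, observation={obs}")
```

The point of the state-machine form is that each agent only stores a small node index and a transition table, rather than a full belief over the network state, which is what makes the representation compact and cheap to execute on a wireless device.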