Please use this identifier to cite or link to this item: http://hdl.handle.net/2080/4133
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kumar, Praveen | - |
dc.contributor.author | Hota, Lopamudra | - |
dc.contributor.author | Nayak, Biraja Prasad | - |
dc.contributor.author | Kumar, Arun | - |
dc.date.accessioned | 2023-12-18T04:42:31Z | - |
dc.date.available | 2023-12-18T04:42:31Z | - |
dc.date.issued | 2023-11 | - |
dc.identifier.citation | International Conference on Machine Learning and Data Engineering (ICMLDE), UPES, Dehradun, India, 23-24 November 2023 | en_US |
dc.identifier.uri | http://hdl.handle.net/2080/4133 | - |
dc.description | Copyright belongs to proceeding publisher | en_US |
dc.description.abstract | The collision avoidance mechanism of the IEEE 802.11p standard is sub-optimal due to the use of the Binary Exponential Backoff (BEB) algorithm in the Medium Access Control (MAC) layer. This algorithm increases the backoff interval upon collision detection to reduce the probability of subsequent collisions. However, it degrades radio spectrum utilization and wastes bandwidth, especially when managing access for a large number of stations. An incorrect backoff setting can also increase the likelihood of collisions and decrease network performance. To address this optimization problem, the proposed model uses a Reinforcement Learning (RL) Actor-Critic (AC) network that adapts the Contention Window (CW) value by learning the VANET environment. The NS-3 network simulator with the ns3-gym module is used to simulate the VANET and compute the optimal CW values. The results show that the proposed AC and Enhanced AC (EAC) models outperform the traditional BEB used in the IEEE 802.11p standard as the number of vehicles increases. Throughput is enhanced by 54% and 69% for AC and EAC, respectively. Similarly, the end-to-end delay is reduced by 19% for AC and by 25% for EAC. | en_US |
dc.subject | Contention Window | en_US |
dc.subject | VANET | en_US |
dc.subject | IEEE 802.11p | en_US |
dc.subject | Actor Critic | en_US |
dc.subject | Reinforcement Learning | en_US |
dc.subject | MAC | en_US |
dc.title | An Adaptive Contention Window using Actor-Critic Reinforcement Learning Algorithm for Vehicular Ad-hoc NETworks | en_US |
dc.type | Article | en_US |
Appears in Collections: | Conference Papers |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
2023_ICMLDE_PKumar_AnAdaptive.pdf | | 444.68 kB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.