Please use this identifier to cite or link to this item: http://hdl.handle.net/2080/4133
Full metadata record
DC Field | Value | Language
dc.contributor.author | Kumar, Praveen | -
dc.contributor.author | Hota, Lopamudra | -
dc.contributor.author | Nayak, Biraja Prasad | -
dc.contributor.author | Kumar, Arun | -
dc.date.accessioned | 2023-12-18T04:42:31Z | -
dc.date.available | 2023-12-18T04:42:31Z | -
dc.date.issued | 2023-11 | -
dc.identifier.citation | International Conference on Machine Learning and Data Engineering (ICMLDE), UPES, Dehradun, India, 23-24 November 2023 | en_US
dc.identifier.uri | http://hdl.handle.net/2080/4133 | -
dc.description | Copyright belongs to the proceedings publisher | en_US
dc.description.abstract | The IEEE 802.11p standard’s collision avoidance mechanism is sub-optimal due to the use of the Binary Exponential Backoff (BEB) algorithm in the Medium Access Control (MAC) layer. This algorithm increases the backoff interval upon collision detection to reduce the probability of subsequent collisions. However, it degrades radio spectrum utilization and wastes bandwidth, especially when managing access for a dense set of stations. An incorrect backoff setting can also increase the likelihood of collisions and decrease network performance. To solve this optimization problem, the proposed model uses a Reinforcement Learning (RL) Actor-Critic (AC) network that adapts the Contention Window (CW) value by learning the Vehicular Ad-hoc NETwork (VANET) environment. The NS-3 network simulator with the NS3-gym module is used to test the approach and compute optimal CW values. The results show that the proposed AC and Enhanced AC (EAC) models outperform the traditional BEB used in the IEEE 802.11p standard as the number of vehicles increases: throughput is improved by 54% and 69% for AC and EAC, respectively, while end-to-end delay is reduced by 19% for AC and 25% for EAC. | en_US
dc.subject | Contention Window | en_US
dc.subject | VANET | en_US
dc.subject | IEEE 802.11p | en_US
dc.subject | Actor Critic | en_US
dc.subject | Reinforcement Learning | en_US
dc.subject | MAC | en_US
dc.title | An Adaptive Contention Window using Actor-Critic Reinforcement Learning Algorithm for Vehicular Ad-hoc NETworks | en_US
dc.type | Article | en_US
Appears in Collections: Conference Papers

Files in This Item:
File | Description | Size | Format
2023_ICMLDE_PKumar_AnAdaptive.pdf |  | 444.68 kB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.