Please use this identifier to cite or link to this item:
http://hdl.handle.net/2080/3755
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Deshmukh, Prashant | - |
dc.contributor.author | Kadha, Vijayakumar | - |
dc.contributor.author | Rayasam, Krishna Chaitanya | - |
dc.contributor.author | Das, Santos Kumar | - |
dc.date.accessioned | 2022-10-20T04:44:31Z | - |
dc.date.available | 2022-10-20T04:44:31Z | - |
dc.date.issued | 2022-09 | - |
dc.identifier.citation | International Conference on Advances in Data-driven Computing and Intelligent Systems (ADCIS 2022), September 23-25, 2022 at BITS Goa Campus. | en_US |
dc.identifier.uri | http://hdl.handle.net/2080/3755 | - |
dc.description | Copyright belongs to proceeding publisher | en_US |
dc.description.abstract | Traffic camera video feeds are helpful for implementing intelligent vehicle detection and classification (IVDC), which has various applications in the transportation engineering domain, such as queue-length estimation, vehicle tracking, and traffic-parameter estimation. However, in Indian traffic a wide variety of vehicles (motorbikes, auto-rickshaws, cycle-rickshaws, mini-trucks, trucks, etc.) travel on the road. They do not follow lane discipline and occlude each other, making vehicle detection very challenging. This work presents an anchor-free object detection model (YOLOX) on the Indian traffic dataset (ITD) and compares it with existing object detection models. It achieves 88% mean average precision (mAP) and 37 frames per second (FPS) on ITD. | en_US |
dc.subject | Indian traffic | en_US |
dc.subject | vehicle detection | en_US |
dc.subject | deep learning | en_US |
dc.subject | anchor-free object detection | en_US |
dc.title | Vehicle detection in Indian traffic using an anchor-free object detector | en_US |
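The abstract reports detection quality as mean average precision (mAP). As a hedged illustration (not the paper's code), per-class average precision can be computed from ranked detections as the area under the all-point interpolated precision-recall curve; the function name and inputs below are hypothetical:

```python
def average_precision(scores, is_tp, num_gt):
    """All-point interpolated AP for a single object class.

    scores: detection confidences; is_tp: 1 if the detection matches a
    ground-truth box (e.g. IoU >= 0.5), else 0; num_gt: number of
    ground-truth boxes for this class.
    """
    # Rank detections by confidence, highest first.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp_cum = fp_cum = 0
    precisions, recalls = [], []
    for i in order:
        tp_cum += is_tp[i]
        fp_cum += 1 - is_tp[i]
        precisions.append(tp_cum / (tp_cum + fp_cum))
        recalls.append(tp_cum / num_gt)
    # Make precision non-increasing (monotone envelope) before integrating.
    for k in range(len(precisions) - 2, -1, -1):
        precisions[k] = max(precisions[k], precisions[k + 1])
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_r)  # rectangle under the PR curve
        prev_r = r
    return ap
```

mAP is then the mean of `average_precision` over all vehicle classes in the dataset (motorbike, auto-rickshaw, truck, etc.).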
Appears in Collections: | Conference Papers |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
DasS_SCRC2022.pdf | | 10.63 MB | Adobe PDF | View/Open |