Please use this identifier to cite or link to this item: http://hdl.handle.net/2080/4052
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Goswami, Shroddha | - |
dc.contributor.author | Ashwini, K | - |
dc.contributor.author | Dash, Ratnakar | - |
dc.date.accessioned | 2023-08-02T04:54:39Z | - |
dc.date.available | 2023-08-02T04:54:39Z | - |
dc.date.issued | 2023-07 | - |
dc.identifier.citation | 14th International Conference on Computing, Communication and Networking Technologies (ICCCNT), IIT Delhi, Delhi, India, 6th-8th July 2023 | en_US |
dc.identifier.uri | http://hdl.handle.net/2080/4052 | - |
dc.description | Copyright belongs to proceeding publisher | en_US |
dc.description.abstract | Diabetic Retinopathy (DR) is a leading cause of blindness in people suffering from Diabetes Mellitus. The biggest challenge in detecting DR is that it is very difficult for ophthalmologists to identify early, and the damage it causes is irreversible. To tackle this problem, deep learning methods have been used to automate detection and assist ophthalmologists. In this paper, iterative attentional feature fusion (iAFF) is used. iAFF is an attention mechanism that assigns greater weight to the features most relevant for grading the disease, producing a stronger model. It is applied in an ensemble with modified InceptionV3 and Xception networks and gives better results than pre-existing models. The proposed model achieves an accuracy of 73% on the IDRiD dataset. (An illustrative sketch of the iAFF fusion step follows this metadata table.) | en_US |
dc.subject | Diabetic Retinopathy | en_US |
dc.subject | Fundus images | en_US |
dc.subject | Deep Learning | en_US |
dc.subject | iAFF | en_US |
dc.subject | IDRiD | en_US |
dc.title | Grading of Diabetic Retinopathy using iterative Attentional Feature Fusion (iAFF) | en_US |
dc.type | Article | en_US |
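The abstract above describes iAFF as an attention mechanism that re-weights features from two backbones before grading. The paper's own implementation is not reproduced in this record; what follows is a minimal PyTorch sketch of the iterative attentional feature fusion step as it is commonly formulated (a multi-scale channel attention module applied twice to fuse two same-shaped feature maps). The `MSCAM` class name, the `channels` and `reduction` parameters, and the toy shapes in the usage check are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn


class MSCAM(nn.Module):
    """Multi-scale channel attention: a global (pooled) and a local (point-wise) branch."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        mid = max(channels // reduction, 1)
        # Global branch: squeeze spatial dims, then a 1x1-conv bottleneck.
        self.global_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
        )
        # Local branch: the same bottleneck applied at every spatial position.
        self.local_att = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # Attention map in [0, 1]; the global branch broadcasts over spatial dims.
        return self.sigmoid(self.global_att(x) + self.local_att(x))


class iAFF(nn.Module):
    """Iterative attentional feature fusion of two same-shaped feature maps x and y."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.att1 = MSCAM(channels, reduction)
        self.att2 = MSCAM(channels, reduction)

    def forward(self, x, y):
        # First pass: weight x against y using attention on their sum.
        w1 = self.att1(x + y)
        z = w1 * x + (1.0 - w1) * y
        # Second (iterative) pass: re-estimate the weights from the fused map.
        w2 = self.att2(z)
        return w2 * x + (1.0 - w2) * y


if __name__ == "__main__":
    # Toy check: fuse two hypothetical backbone feature maps of shape (batch, channels, H, W).
    x = torch.randn(2, 256, 14, 14)
    y = torch.randn(2, 256, 14, 14)
    fused = iAFF(channels=256)(x, y)
    print(fused.shape)  # torch.Size([2, 256, 14, 14])
```

In the paper's ensemble, the two inputs would correspond to feature maps from the modified InceptionV3 and Xception branches; how those maps are brought to a common shape before fusion is not specified in this record and is left as an assumption here.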
Appears in Collections: | Conference Papers |
Files in This Item:
File | Description | Size | Format
---|---|---|---
2023_ICCCNT_SGoswami_Grading.pdf | | 5.33 MB | Adobe PDF