IOE OpenIR > Laboratory of Photoelectric Detection and Signal Processing (Lab 5)
Bottom-up attention based on C1 features of HMAX model
Authors: Yu, Huapeng (1,2,3); Xu, Zhiyong (1); Fu, Chengyu (1); Wang, Yafei (2)
Volume: 8558
Pages: 85580W
Year: 2012
Language: English
ISSN: 0277-786X
DOI: 10.1117/12.999263
Indexed By: EI
Subtype: Conference Paper
Abstract: This paper presents a novel bottom-up attention model based only on the C1 features of the HMAX model, which is efficient and consistent. Although similar orientation-based features are commonly used by most bottom-up attention models, we adopt different activation and combination approaches to obtain the ultimate map. We compare two different operations for activation and combination, namely MAX and SUM, and argue that they are often complementary. We then argue that for a general object recognition system the traditional evaluation rule, accordance with human fixations, is inappropriate. We suggest new evaluation rules and approaches for bottom-up attention models, which focus on the information unloss rate and useful rate relative to the labeled attention area. We formally define the unloss rate and useful rate, and present an efficient algorithm to compute them from the original labeled and output attention areas. We also discard the center-surround assumption commonly adopted by bottom-up attention models. Compared with GBVS under the suggested evaluation rules and approaches on complex street scenes, our model shows excellent performance. © Copyright SPIE.
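The paper's formal definitions of the unloss rate and useful rate are not reproduced on this page. As a minimal sketch of one plausible reading, the two metrics can be computed from the labeled attention area and the model's output attention area treated as binary masks; the function names and the toy masks below are illustrative assumptions, not the paper's own code:

```python
import numpy as np

def unloss_rate(labeled, output):
    """Fraction of the labeled attention area that the output area retains
    (information not lost)."""
    labeled = labeled.astype(bool)
    output = output.astype(bool)
    return np.logical_and(labeled, output).sum() / labeled.sum()

def useful_rate(labeled, output):
    """Fraction of the output attention area that falls inside the labeled
    area (output that is actually useful)."""
    labeled = labeled.astype(bool)
    output = output.astype(bool)
    return np.logical_and(labeled, output).sum() / output.sum()

# Toy 4x4 masks: the output covers 3 of the 4 labeled pixels,
# and 3 of its own 4 pixels lie inside the labeled area.
labeled = np.array([[1, 1, 0, 0],
                    [1, 1, 0, 0],
                    [0, 0, 0, 0],
                    [0, 0, 0, 0]])
output = np.array([[1, 1, 1, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(unloss_rate(labeled, output))  # 0.75
print(useful_rate(labeled, output))  # 0.75
```

Under this reading the two rates pull in opposite directions: a very large output area maximizes the unloss rate but drives the useful rate down, so a good model must balance both.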
Conference Name: Proceedings of SPIE: Optoelectronic Imaging and Multimedia Technology II
Conference Date: 2012
Document Type: Conference Paper
Identifier: http://ir.ioe.ac.cn/handle/181551/7698
Collection: Laboratory of Photoelectric Detection and Signal Processing (Lab 5)
Corresponding Author: Yu, H. (musicfish1973@qq.com)
Affiliations:
1. Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
2. School of Optoelectronic Information, University of Electronic Science and Technology of China, Chengdu 610054, China
3. Graduate University, Chinese Academy of Sciences, Beijing 100039, China
Recommended Citation (GB/T 7714):
Yu, Huapeng, Xu, Zhiyong, Fu, Chengyu, et al. Bottom-up attention based on C1 features of HMAX model[C], 2012: 85580W.
Files in This Item:
2012-2172.pdf (806KB) | DocType: Conference Paper | Access: Open Access | License: CC BY-NC-SA
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.