Institutional Repository of the Institute of Optics and Electronics, Chinese Academy of Sciences (IOE OpenIR)
Title: Depth estimation based on Adaptive Support-Weight and SIFT for multi-lenslet cameras
Authors: Gao, Yuan (1,2,3); Liu, Wenjin (2,3); Yang, Ping (2); Xu, Bing (2)
Publication Date: 2012
Conference: Proceedings of SPIE: 6th International Symposium on Advanced Optical Manufacturing and Testing Technologies: Optoelectronic Materials and Devices for Sensing, Imaging, and Solar Energy
Conference Date: 2012
DOI: 10.1117/12.975694
Corresponding Author: Gao, Y. (gaoyuan.22111@yahoo.com.cn)
Abstract: With a multi-lenslet camera, we can capture multiple low-resolution subimages of the same scene and use them to reconstruct a high-resolution image. Estimating the spatially variant shifts between subimages is one of the major problems. In this paper, a depth estimation algorithm is proposed for multi-lenslet cameras. Stereo matching between the reference subimage and the other subimages is performed using a segmentation-based Adaptive Support-Weight approach combined with the Scale Invariant Feature Transform (SIFT), which is introduced to improve the matching result. The disparity maps are then converted to depth maps, and these depth maps are merged into a single map to improve quality. Finally, average blended images at different depths are computed according to the depth map. Experimental results show that the proposed algorithm can extract accurate depth concisely and efficiently. © 2012 SPIE.
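The abstract outlines a three-stage pipeline: stereo matching with segmentation-based Adaptive Support-Weight (ASW) aggregation combined with SIFT, conversion of disparity maps to depth maps followed by merging, and depth-dependent blending. The sketch below is not the authors' implementation; it is a minimal two-view illustration of ASW matching and the standard disparity-to-depth relation Z = f·B/d, in which OpenCV's SIFT is used only to bound the disparity search range. All parameters (window radius, gamma_c, gamma_p, focal length, baseline) are illustrative assumptions, and the segmentation, multi-view merging, and blending steps are omitted.

```python
# Minimal illustrative sketch (not the paper's code): adaptive support-weight
# stereo matching between a reference subimage and one other subimage, with
# SIFT matches bounding the disparity search range, plus disparity-to-depth
# conversion. Parameter values are assumptions, not the authors' settings.
import cv2
import numpy as np

def sift_disparity_range(ref_gray, tgt_gray, margin=2):
    """Estimate a [d_min, d_max] search range from sparse SIFT matches."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref_gray, None)
    kp2, des2 = sift.detectAndCompute(tgt_gray, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    # Lowe's ratio test keeps only reliable correspondences.
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]
    disps = [kp1[m.queryIdx].pt[0] - kp2[m.trainIdx].pt[0] for m in good]
    return int(min(disps)) - margin, int(max(disps)) + margin

def asw_weights(lab_patch, gamma_c=7.0, gamma_p=36.0):
    """Adaptive support weights inside one window (Yoon & Kweon style)."""
    r = lab_patch.shape[0] // 2
    center = lab_patch[r, r]
    dc = np.linalg.norm(lab_patch - center, axis=2)   # color similarity term
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    dp = np.sqrt(yy ** 2 + xx ** 2)                   # spatial proximity term
    return np.exp(-(dc / gamma_c + dp / gamma_p))

def asw_disparity(ref_bgr, tgt_bgr, d_range, radius=8, trunc=40.0):
    """Per-pixel winner-take-all disparity with ASW cost aggregation."""
    ref_lab = cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2Lab).astype(np.float32)
    tgt_lab = cv2.cvtColor(tgt_bgr, cv2.COLOR_BGR2Lab).astype(np.float32)
    h, w = ref_lab.shape[:2]
    disp = np.zeros((h, w), np.float32)
    d_min, d_max = d_range
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            win_r = ref_lab[y - radius:y + radius + 1, x - radius:x + radius + 1]
            w_r = asw_weights(win_r)
            best_cost, best_d = np.inf, 0
            for d in range(d_min, d_max + 1):
                xt = x - d
                if xt - radius < 0 or xt + radius >= w:
                    continue
                win_t = tgt_lab[y - radius:y + radius + 1,
                                xt - radius:xt + radius + 1]
                w_t = asw_weights(win_t)
                # Truncated absolute color difference as the raw matching cost.
                e = np.minimum(np.abs(win_r - win_t).sum(axis=2), trunc)
                cost = (w_r * w_t * e).sum() / (w_r * w_t).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

def disparity_to_depth(disp, f_pixels, baseline_mm):
    """Pinhole relation Z = f * B / d, valid where disparity > 0."""
    depth = np.zeros_like(disp)
    valid = disp > 0
    depth[valid] = f_pixels * baseline_mm / disp[valid]
    return depth
```

In the setting described by the abstract, each non-reference subimage would yield one such depth map, and the per-view maps would then be merged into a single map before the depth-dependent average blending step.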
Indexed By: EI
Language: English
Volume: 8419
ISSN: 0277-786X
Document Type: Conference Paper
Pages: 84190C
Content Type: Conference Paper
URI: http://ir.ioe.ac.cn/handle/181551/7787
Appears in Collections: Laboratory on Adaptive Optics (Division 8) - Conference Papers

Files in This Item:
File Name: 2012-2147.pdf (359 KB)
Content Type: Conference Paper
Format: Adobe PDF
Access: Restricted; contact the repository to obtain the full text

Author Affiliations:
1. Laboratory on Adaptive Optics, Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
2. Key Laboratory on Adaptive Optics, Chinese Academy of Sciences, Chengdu 610209, China
3. Graduate School of Chinese Academy of Sciences, Beijing 100049, China

Recommended Citation:
Gao, Yuan, Liu, Wenjin, Yang, Ping, et al. Depth estimation based on Adaptive Support-Weight and SIFT for multi-lenslet cameras[C]. In: Proceedings of SPIE: 6th International Symposium on Advanced Optical Manufacturing and Testing Technologies: Optoelectronic Materials and Devices for Sensing, Imaging, and Solar Energy. 2012.

Items in IR are protected by copyright, with all rights reserved, unless otherwise indicated.