3-D object detections and recognitions: assisting visually impaired people

In this chapter, we first present a survey of assisted (or aided) systems for VIPs in Sec. 1.1. The related systems are categorized into three groups: navigation services, obstacle detection, and locating objects of interest in a scene. Related work on detecting 3-D objects in indoor environments is surveyed in Sec. 1.2, where we briefly introduce and analyze state-of-the-art 3-D object detection and recognition techniques; readers can refer to Chapter 3 for the detailed approaches. Finally, in Sec. 1.3, we focus the survey on fitting techniques that use robust estimator algorithms and on their applications in robotics and computer vision.
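To make the robust-estimation idea concrete before the survey, we give a minimal sketch of plain RANSAC plane fitting on a synthetic point cloud. It is written in Python with NumPy only; the function name, parameters, and toy data are illustrative assumptions, not the GCSAC estimator proposed later in this dissertation.

import numpy as np

def ransac_plane(points, n_iters=500, dist_thresh=0.01, seed=None):
    # Fit a plane n.x + d = 0 to an (N, 3) point array with plain RANSAC.
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Minimal sample set (MSS) for a plane: 3 non-collinear points.
        sample = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:  # degenerate (collinear) sample: skip it
            continue
        normal /= norm
        d = -normal @ sample[0]
        # Consensus set: points closer than dist_thresh to the candidate plane.
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

# Toy usage: noisy samples of the plane z = 0 plus uniform outliers.
rng = np.random.default_rng(0)
on_plane = np.column_stack([rng.uniform(-1, 1, (200, 2)),
                            rng.normal(0, 0.003, 200)])
cloud = np.vstack([on_plane, rng.uniform(-1, 1, (80, 3))])
(model, inlier_mask) = ransac_plane(cloud, seed=1)
print('plane normal ~', np.round(model[0], 2), '| inliers:', inlier_mask.sum())

The same sample-score-refine loop underlies the RANSAC variants surveyed in Sec. 1.3.3; GCSAC differs in how candidate minimal sample sets are qualified by geometrical constraints.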

HANOI UNIVERSITY OF SCIENCE AND TECHNOLOGY
LE VAN HUNG
3-D OBJECT DETECTIONS AND
RECOGNITIONS: ASSISTING VISUALLY
IMPAIRED PEOPLE
Major: Computer Science
Code: 9480101
DOCTORAL DISSERTATION OF
COMPUTER SCIENCE
SUPERVISORS:
1. Dr. Vu Hai
2. Assoc. Prof. Dr. Nguyen Thi Thuy
Hanoi – 2018
DECLARATION OF AUTHORSHIP
I, Le Van Hung, declare that this dissertation titled, "3-D Object Detections and
Recognitions: Assisting Visually Impaired People in Daily Activities", and the works
presented in it are my own. I confirm that:
• This work was done wholly or mainly while in candidature for a Ph.D. research
degree at Hanoi University of Science and Technology.
• Where any part of this thesis has previously been submitted for a degree or any
other qualification at Hanoi University of Science and Technology or any other
institution, this has been clearly stated.
• Where I have consulted the published work of others, this is always clearly at-
tributed.
• Where I have quoted from the work of others, the source is always given. With
the exception of such quotations, this dissertation is entirely my own work.
• I have acknowledged all main sources of help.
• Where the dissertation is based on work done by myself jointly with others, I
have made clear exactly what was done by others and what I have contributed myself.
Hanoi, November 2018
PhD Student
Le Van Hung
SUPERVISORS
Dr. Vu Hai Assoc. Prof. Dr. Nguyen Thi Thuy
ACKNOWLEDGEMENT
This dissertation was written during my doctoral course at the International Research
Institute Multimedia, Information, Communication and Applications (MICA), Hanoi
University of Science and Technology (HUST). It is my great pleasure to thank all the
people who supported me in completing this work.
First, I would like to express my sincere gratitude to my advisors, Dr. Hai Vu
and Assoc. Prof. Dr. Thi Thuy Nguyen, for their continuous support, patience,
motivation, and immense knowledge. Their guidance helped me throughout the research
and the writing of this dissertation. I could not have imagined better advisors and
mentors for my Ph.D. study.
Besides my advisors, I would like to thank Assoc. Prof. Dr. Thi-Lan Le,
Assoc. Prof. Dr. Thanh-Hai Tran, and the members of the Computer Vision Department at
MICA Institute. These colleagues assisted me greatly throughout my research and
co-authored the published papers. Moreover, attending scientific
conferences has always been a great experience and brought me many useful
comments.
During my Ph.D. course, I have received much support from the Management
Board of MICA Institute. My sincere thanks to Prof. Yen Ngoc Pham, Prof. Eric
Castelli, and Dr. Son Viet Nguyen, who gave me the opportunity to join research
works and permission to join the laboratory at MICA Institute. Without
their precious support, it would have been impossible to conduct this research.
As a Ph.D. student of the 911 program, I would like to thank this programme for its
financial support. I also gratefully acknowledge the financial support for attending
the conferences from the Nafosted-FWO project (FWO.102.2013.08) and the VLIR project
(ZEIN2012RIP19). I would like to thank the College of Statistics for its support over
the years, both in my work and outside of it.
Special thanks to my family, particularly to my mother and father, for all of the
sacrifices that they have made on my behalf. I would also like to thank my beloved
wife for everything she has done to support me.
Hanoi, November 2018
Ph.D. Student
Le Van Hung
CONTENTS
DECLARATION OF AUTHORSHIP i
ACKNOWLEDGEMENT ii
CONTENTS v
ABBREVIATIONS vi
LIST OF TABLES viii
LIST OF FIGURES xvii
1 LITERATURE REVIEW 8
1.1 Aided-systems for supporting visually impaired people . . . . . . . . . 8
1.1.1 Aided-systems for navigation services . . . . . . . . . . . . . . . 8
1.1.2 Aided-systems for obstacle detection . . . . . . . . . . . . . . . 9
1.1.3 Aided-systems for locating objects of interest in scenes . . . . . 11
1.1.4 Discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2 3-D object detection and recognition from point cloud data . . . . . . . 13
1.2.1 Appearance-based methods . . . . . . . . . . . . . . . . . . . . 13
1.2.1.1 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.2.2 Geometry-based methods . . . . . . . . . . . . . . . . . . . . . . 16
1.2.3 Datasets for 3-D object recognition . . . . . . . . . . . . . . . . 17
1.2.4 Discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.3 Fitting primitive shapes . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.3.1 Linear fitting algorithms . . . . . . . . . . . . . . . . . . . . . . 18
1.3.2 Robust estimation algorithms . . . . . . . . . . . . . . . . . . . 19
1.3.3 RANdom SAmple Consensus (RANSAC) and its variations . . . 20
1.3.4 Discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2 POINT CLOUD REPRESENTATION AND THE PROPOSED METHOD
FOR TABLE PLANE DETECTION 24
2.1 Point cloud representations . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.1.1 Capturing data by a Microsoft Kinect sensor . . . . . . . . . . . 24
2.1.2 Point cloud representation . . . . . . . . . . . . . . . . . . . . . 25
2.2 The proposed method for table plane detection . . . . . . . . . . . . . 28
2.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.2.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.2.3 The proposed method . . . . . . . . . . . . . . . . . . . . . . . 30
2.2.3.1 The proposed framework . . . . . . . . . . . . . . . . . 30
2.2.3.2 Plane segmentation . . . . . . . . . . . . . . . . . . . . 32
2.2.3.3 Table plane detection and extraction . . . . . . . . . . 34
2.2.4 Experimental results . . . . . . . . . . . . . . . . . . . . . . . . 36
2.2.4.1 Experimental setup and dataset collection . . . . . . . 36
2.2.4.2 Table plane detection evaluation method . . . . . . . . 37
2.2.4.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.3 Separating the objects of interest on the table plane . . . . . . . . . . 46
2.3.1 Coordinate system transformation . . . . . . . . . . . . . . . . . 46
2.3.2 Separating the table plane and the objects of interest . . . . . . 48
2.3.3 Discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3 PRIMITIVE SHAPES ESTIMATION BY A NEW ROBUST ES-
TIMATOR USING GEOMETRICAL CONSTRAINTS 51
3.1 Fitting primitive shapes by GCSAC . . . . . . . . . . . . . . . . . . . . 52
3.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.1.2 Related work . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.1.3 The proposed new robust estimator . . . . . . . . . . . . . . . . 55
3.1.3.1 Overview of the proposed robust estimator (GCSAC) . 55
3.1.3.2 Geometrical analyses and constraints for qualifying good
samples . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.1.4 Experimental results of robust estimator . . . . . . . . . . . . . 64
3.1.4.1 Evaluation datasets of robust estimator . . . . . . . . 64
3.1.4.2 Evaluation measurements of robust estimator . . . . . 67
3.1.4.3 Evaluation results of a new robust estimator . . . . . . 68
3.1.5 Discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.2 Fitting objects using the context and geometrical constraints . . . . . . 76
3.2.1 The proposed method of finding objects using the context and
geometrical constraints . . . . . . . . . . . . . . . . . . . . . . . 77
3.2.1.1 Model verification using contextual constraints . . . . 77
3.2.2 Experimental results of finding objects using the context and
geometrical constraints . . . . . . . . . . . . . . . . . . . . . . . 78
3.2.2.1 Descriptions of the datasets for evaluation . . . . . . . 78
3.2.2.2 Evaluation measurements . . . . . . . . . . . . . . . . 81
3.2.2.3 Results of finding objects using the context and geo-
metrical constraints . . . . . . . . . . . . . . . . . . . 82
3.2.3 Discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4 DETECTION AND ESTIMATION OF A 3-D OBJECT MODEL
FOR A REAL APPLICATION 86
4.1 A Comparative study on 3-D object detection . . . . . . . . . . . . . . 86
4.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.1.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
4.1.3 Three different approaches for 3-D object detection in a complex
scene . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
4.1.3.1 Geometry-based method: Primitive Shape detection
Method (PSM) . . . . . . . . . . . . . . . . . . . . . 90
4.1.3.2 Combination of Clustering objects and Viewpoint Feature
Histogram, GCSAC for estimating 3-D full object mod-
els (CVFGS) . . . . . . . . . . . . . . . . . . . . . . . 91
4.1.3.3 Combination of Deep Learning-based detection and GCSAC for
estimating 3-D full object models (DLGS) . . . . . . . 93
4.1.4 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
4.1.4.1 Data collection . . . . . . . . . . . . . . . . . . . . . . 95
4.1.4.2 Evaluation method . . . . . . . . . . . . . . . . . . . . 98
4.1.4.3 Setup parameters in the evaluations . . . . . . . . . . 101
4.1.4.4 Evaluation results . . . . . . . . . . . . . . . . . . . . 102
4.1.5 Discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.2 Deploying an aided-system for visually impaired people . . . . . . . . . 109
4.2.1 Environment and material setup for the evaluation . . . . . . . 111
4.2.2 Pre-built script . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
4.2.3 Performances of the real system . . . . . . . . . . . . . . . . . . 114
4.2.3.1 Evaluation of finding 3-D objects . . . . . . . . . . . . 115
4.2.4 Evaluation of usability and discussion . . . . . . . . . . . . . . . 118
5 CONCLUSION AND FUTURE WORKS 121
5.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
5.2 Future works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Bibliography 125
PUBLICATIONS 139
ABBREVIATIONS
No. Abbreviation Meaning
1 API Application Programming Interface
2 CNN Convolutional Neural Network
3 CPU Central Processing Unit
4 CVFH Clustered Viewpoint Feature Histogram
5 FN False Negative
6 FP False Positive
7 FPFH Fast Point Feature Histogram
8 fps frames per second
9 GCSAC Geometrical Constraint SAmple Consensus
10 GPS Global Positioning System
11 GT Ground Truth
12 HT Hough Transform
13 ICP Iterative Closest Point
14 ISS Intrinsic Shape Signatures
15 JI Jaccard Index
16 KDES Kernel DEScriptors
17 KNN K Nearest Neighbors
18 LBP Local Binary Patterns
19 LMNN Large Margin Nearest Neighbor
20 LMS Least Median of Squares
21 LO-RANSAC Locally Optimized RANSAC
22 LRF Local Receptive Fields
23 LSM Least Squares Method
24 MAPSAC Maximum A Posteriori SAmple Consensus
25 MLESAC Maximum Likelihood Estimation SAmple Consensus
26 MS Microsoft
27 MSAC M-estimator SAmple Consensus
28 MSI Modified Plessey
29 MSS Minimal Sample Set
30 NAPSAC N-Adjacent Points SAmple Consensus
31 NARF Normal Aligned Radial Features
32 NN Nearest Neighbor
33 NNDR Nearest Neighbor Distance Ratio
34 OCR Optical Character Recognition
35 OpenCV Open source Computer Vision Library
36 PC Personal Computer
37 PCA Principal Component Analysis
38 PCL Point Cloud Library
39 PROSAC PROgressive SAmple Consensus
40 QR code Quick Response Code
41 RAM Random Access Memory
42 RANSAC RANdom SAmple Consensus
43 RFID Radio-Frequency IDentification
44 R-RANSAC Recursive RANdom SAmple Consensus
45 SDK Software Development Kit
46 SHOT Signature of Histograms of OrienTations
47 SIFT Scale-Invariant Feature Transform
48 SQ SuperQuadric
49 SURF Speeded Up Robust Features
50 SVM Support Vector Machine
51 TN True Negative
52 TP True Positive
53 TTS Text To Speech
54 UPC Universal Product Code
55 URL Uniform Resource Locator
56 USAC A Universal Framework for Random SAmple Consensus
57 VFH Viewpoint Feature Histogram
58 VIP Visually Impaired Person
59 VIPs Visually Impaired People
LIST OF TABLES
Table 2.1 The number of frames of each scene. . . . . . . . . . . . . . . . . 36
Table 2.2 The average result of detected table plane on our own dataset (%). 41
Table 2.3 The average result of detected table plane on the dataset [117] (%). 43
Table 2.4 The average result of detected table plane of our method with
different down-sampling factors on our dataset. . . . . . . . . . . . . . 44
Table 3.1 The characteristics of the generated cylinder, sphere, cone dataset
(synthesized dataset) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Table 3.2 The average evaluation results of synthesized datasets. The syn-
thesized datasets were repeated 50 times for statistically representative
results. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Table 3.3 Experimental results on the 'second cylinder' dataset. The exper-
iments were repeated 20 times, then errors are averaged. . . . . . . . . 75
Table 3.4 The average evaluation results on the 'second sphere', 'second
cone' datasets. The real datasets were repeated 20 times for statistically
representative results. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Table 3.5 Average results of the evaluation measurements using GCSAC and
MLESAC on three datasets. The fitting procedures were repeated 50
times for statistical evaluations. . . . . . . . . . . . . . . . . . . . . . . 83
Table 4.1 The average result of detecting spherical objects in the two stages. . 102
Table 4.2 The average results of detecting the cylindrical objects at the first
stage in both the first and second datasets. . . . . . . . . . . . . . . . . 103
Table 4.3 The average results of detecting the cylindrical objects at the
second stage in both the first and second datasets. . . . . . . . . . . . . 106
Table 4.4 The average processing time of detecting cylindrical objects in
both the first and second datasets. . . . . . . . . . . . . . . . . . . . . 106
Table 4.5 The average results of 3-D queried objects detection. . . . . . . . 116
LIST OF FIGURES
Figure 1 Illustration of a real scenario: a VIP comes to the kitchen and
gives a query: "Where is a coffee cup?" on the table. Left panel: a
Kinect mounted on the person's chest. Right panel: the developed
system is built on a laptop PC. . . . . . . . . . . . . . . . . . . . . . . 2
Figure 2 Illustration of the process of 3-D query-based object detection in the
indoor environment. ...
[99] ...tCalibration. [Online; accessed 10-January-2018].
[100] Nieuwenhuisen M., Stuckler J., Berner A., Klein R., and Behnke S. (2012). Shape-
primitive based object recognition and grasping. In The 7th German Conference
on Robotics, May.
[101] Nieuwenhuisen M., Stueckler J., Berner A., Klein R., and Behnke S. (2012).
Shape-primitive based object recognition and grasping . In Proc. of ROBOTIK .
VDE-Verlag.
[102] Nikolakis G., Tzovaras D., and Strintzis M.G. Object recognition for the blind.
(30):pp. 1–4.
[103] OpenCV (2018). OpenCV library. https://opencv.org/. [Online; accessed 10-
January-2018].
[104] Vosselman G., Gorte B., Sithole G., and Rabbani T. (2004). Recognising structure
in laser scanner point clouds. In International Archives of Photogrammetry,
Remote Sensing and Spatial Information Sciences, pp. 33–38.
[105] Pang G. and Neumann U. (2016). 3D Point Cloud Object Detection with Multi-
View Convolutional Neural Network . In 23rd International Conference on Pat-
tern Recognition.
[106] (PCL) P.C.L. (2013). Point cloud library (pcl) 1.7.0.
pointclouds.org/1.7.0/mlesac_8hpp_source.html.
[107] (PCL) P.C.L. (2014). How to use random sample consensus model .
consensus.php.
[108] Polewski P., Yao W., Heurich M., Krzystek P., and Stilla U. (2017). A voting-
based statistical cylinder detection framework applied to fallen tree mapping in
terrestrial laser scanning point clouds. ISPRS Journal of Photogrammetry and
Remote Sensing, Vol. 129:pp. 118–130.
[109] Press W., Teukolsky S., Vetterling W.T., and Flannery B.P. (2007). Numerical
recipes: The art of scientific computing. Cambridge University Press, pp.
1099–1110.
[110] Qingming Z., Yubin L., and Yinghui X. (2009). Color-based segmentation of point
clouds . Laser scanning 2009, IAPRS .
[111] Radu B., Nico B., and Michael B. (2009). Fast point feature histograms (fpfh) for
3d registration. In IEEE International Conference on Robotics and Automation,
pp. 3212–3217, DOI: 10.1109/ROBOT.2009.5152473.
[112] Raguram R., Chum O., Pollefeys M., Matas J., and Frahm J.M. (Aug 2013).
USAC: A universal framework for random sample consensus. IEEE Transactions
on Pattern Analysis and Machine Intelligence, 35(8):pp. 2022–2038.
[113] Raguram R., Frahm J.M., and Pollefeys M. (2008). A comparative analysis of
ransac techniques leading to adaptive real-time random sample consensus. In
Proceedings of the European Conference on Computer Vision (ECCV'08), pp.
500–513.
[114] Redmon J., Divvala S., Girshick R., and Farhadi A. (2016). You Only Look
Once: Unified, Real-Time Object Detection. In Computer Vision and Pattern
Recognition.
[115] Redmon J. and Farhadi A. (2017). YOLO9000: Better, Faster, Stronger . In
Computer Vision and Pattern Recognition.
[116] Ren S., He K., Girshick R., and Sun J. (2015). Faster r-cnn: Towards real-time
object detection with region proposal networks. In Advances in Neural Information
Processing Systems 28, pp. 91–99.
[117] Richtsfeld A., Morwald T., Prankl J., Zillich M., and Vincze M. (2012). Seg-
mentation of unknown objects in indoor environments. In 2012 IEEE/RSJ In-
ternational Conference on Intelligent Robots and Systems, pp. 4791–4796.
[118] Ridwan M., Choudhury E., Poon B., Amin M.A., and Yan H. (2014). A naviga-
tional aid system for visually impaired using microsoft kinect . In International
MultiConference of Engineers and Computer Scientists , volume I.
[119] Rimon S., Peter B., Julian S., Benjamin H.G., Christine F.M., Eva D., Joerg F.,
and Bjoern M.E. (2016). Blind path obstacle detector using smartphone camera
and line laser emitter. In International Conference on Technology and Innovation
in Sports, Health and Wellbeing (TISHW 2016).
[120] Robert C., Emmanuel K.N., and Ratko G. (2016). Survey of state-of-the-art
point cloud segmentation methods . Technical Report: Josip Juraj Strossmayer
University of Osijek .
[121] Rusu B. Cluster recognition and 6dof pose estimation using vfh descriptors.
http://pointclouds.org/documentation/tutorials/vfh_recognition.php. [Online;
accessed 20-January-2018].
[122] Rusu B. Euclidean cluster extraction.
documentation/tutorials/cluster_extraction.php. [Online; accessed 20-
January-2018].
[123] Rusu B. Fast point feature histograms (fpfh) descriptors.
org/documentation/tutorials/fpfh_estimation.php#fpfh-estimation. [Online;
accessed 20-January-2018].
[124] Rusu B. Point feature histograms (pfh) descriptors.
org/documentation/tutorials/pfh_estimation.php#pfh-estimation. [Online;
accessed 20-January-2018].
[125] Rusu B., Bradski G., Thibaux R., and Hsu J. (2010). Fast 3d recognition and
pose using the viewpoint feature histogram. pp. 2155–2162. 2010 IEEE/RSJ
International Conference on Intelligent Robots and Systems.
[126] Saad B. (2015). Hough Transform and Thresholding.
courses/me5286/vision/Notes/2015/ME5286-Lecture9.pdf. [Online; accessed
18-September-2017].
[127] Saoury R., Blank P., Sessner J., Groh B.H., Martindale C.F., and Dorschky
E. (2016). Blind path obstacle detector using smartphone camera and line laser
emitter. In Proceedings of the 1st International Conference on Technology and In-
novation in Sports, Health and Wellbeing (TISHW).
[128] Saval-Calvo M., Azorin-Lopez J., Guillo A.F., and Rodriguez J.G. (2017). Three-
dimensional planar model estimation using multi-constraint knowledge based on
k-means and RANSAC . CoRR, abs/1708.01143.
[129] Scharstein D. and Szeliski R. (2003). High-Accuracy Stereo Depth Maps Using
Structured Light. In IEEE Computer Society Conference on Computer Vision
and Pattern Recognition, 1(June):pp. 195–202.
[130] Schauerte B., Martinez M., and Constantinescu A. (2012). An Assistive Vision
System for the Blind that Helps Find Lost Things. In International Conference
on Computers for Handicapped Persons, volume 2011, pp. 566–572.
[131] Schnabel R., Wahl R., and Klein R. (2007). Efficient ransac for point-cloud shape
detection. Computer Graphics Forum, 26(2):pp. 214–226.
[132] Silberman N. and Fergus R. (2011). Indoor scene segmentation using a structured
light sensor . In Proceedings of the International Conference on Computer Vision-
Workshop on 3D Representation and Recognition.
[133] Silberman N., Hoiem D., Kohli P., and Fergus R. (2012). Indoor segmentation
and support inference from rgbd images. In European Conference on Computer
Vision, pp. 746–760.
[134] Steder B., Rusu R.B., Konolige K., and Burgard W. (2010). Narf: 3d range
image features for object recognition. In Workshop on Defining and Solving
Realistic Perception Problems in Personal Robotics at the IEEE/RSJ Int. Conf.
on Intelligent Robots and Systems (IROS). Taipei, Taiwan.
[135] Stein F. and Medioni G. (1992). Structural indexing: Efficient 3D object recogni-
tion. IEEE Transactions on Pattern Analysis and Machine Intelligence,
14(2):pp. 125–145.
[136] Su Y.T., Hua S., and Bethel J.S. (2017). Estimation of cylinder orientation in
three-dimensional point cloud using angular distance-based optimization. Optical
Engineering, 56(5).
[137] Subaihi A.A. (2016). Orthogonal Least Squares Fitting with Cylinders . Interna-
tional Journal of Computer Mathematics , 7160(February).
[138] Sudhakar K., Saxena P., and Soni S. (2012). Obstacle detection gadget for visually
impaired peoples. International Journal of Emerging Technology and Advanced
Engineering, 2(12):pp. 409–413.
[139] Sujith B. and Safeeda V. (2014). Computer vision-based aid for the visually
impaired persons – a survey and proposing. International Journal of Innovative
Research in Computer and Communication Engineering, pp. 365–370.
[140] Tombari F., Salti S., and Di Stefano L. (2010). Unique Signatures of His-
tograms for Local Surface Description. In European Conference on Computer
Vision, pp. 356–369.
[141] Tombari F. and Di Stefano L. (2012). Hough voting for 3d object recognition under
occlusion and clutter. IPSJ Transactions on Computer Vision and Applications,
4:pp. 20–29.
[142] Torr P.H.S. and Murray D. (1997). The development and comparison of robust
methods for estimating the fundamental matrix. International Journal of Com-
puter Vision, 24(3):pp. 271–300.
[143] Torr P.H.S. and Zisserman A. (2000). Mlesac: A new robust estimator with
application to estimating image geometry. Computer Vision and Image Under-
standing, 78(1):pp. 138–156.
[144] Trung-Thien T., Van-Toan C., and Denis L. (2015). Extraction of cylinders
and estimation of their parameters from point clouds. Computers and Graphics,
46:pp. 345–357.
[145] Trung-Thien T., Van-Toan C., and Denis L. (2015). Extraction of reliable prim-
itives from unorganized point clouds . 3D Research, 6:44.
[146] Trung-Thien T., Van-Toan C., and Denis L. (2016). esphere: extracting spheres
from unorganized point clouds. The Visual Computer, 32(10):pp. 1205–1222.
[147] Van Hamme D., Veelaert P., and Philips W. (2011). Robust visual odometry
using uncertainty models. In Advanced Concepts for Intelligent Vision Systems.
ACIVS 2011. Lecture Notes in Computer Science, vol 6915. Springer, Berlin,
Heidelberg, pp. 1–12. ISBN 978-3-642-23686-0. doi:10.1007/978-3-642-23687-7_1.
[148] Virgil T., Popescu S., Bogdanov I., and Caleanu C. (2008). Obstacles detection
system for visually impaired guidance. In 12th WSEAS International Conference
on SYSTEMS.
[149] Wang H., Mirota D., Ishii M., and Hager G. (2008). Robust motion estimation and
structure recovery from endoscopic image sequences with an adaptive scale kernel
consensus estimator.
[150] Wang H. and Suter D. (2004). Robust adaptive-scale parametric model estima-
tion for computer vision. IEEE Transactions on Pattern Analysis and Machine
Intelligence, 26(11):pp. 1459–1474.
[151] Wattal A., Ojha A., and Kumar M. (2016). Obstacle detection for visually im-
paired using raspberry pi and ultrasonic sensors. In National Conference on
Product Design, July, pp. 1–5.
[152] Wittrowski J., Ziegler L., and Swadzba A. (2013). 3d implicit shape models using
ray based hough voting for furniture recognition. In International Conference on
3D Vision - 3DV .
[153] Xiang Y., Kim W., Chen W., Ji J., Choy C., Su H., Mottaghi R., Guibas L.,
and Savarese S. (2016). ObjectNet3D: A Large Scale Database for 3D Object
Recognition. In European Conference on Computer Vision, pp. 160–176.
[154] Xiang Y., Mottaghi R., and Savarese S. (2014). Beyond pascal: A benchmark for
3d object detection in the wild . In IEEE Winter Conference on Applications of
Computer Vision (WACV).
[155] Yang M.Y. and Forstner W. (2010). Plane detection in point cloud data. Tech-
nical report Nr.1 of Department of Photogrammetry, Institute of Geodesy and
Geoinformation, University of Bonn.
[156] Yang S.W., Wang C.C., and Chang C.H. (2010). RANSAC Matching: Simul-
taneous Registration and Segmentation. In IEEE International Conference on
Robotics and Automation.
[157] Yi C., Flores R.W., Chincha R., and Tian Y. (2014). Finding objects for assisting
blind people. Network Modeling Analysis in Health Informatics and Bioinformat-
ics, 2(2):pp. 71–79.
[158] Yoo H.W., Kim W.H., Park J.W., Lee W.H., and Chung M.J. (2013). Real-time
plane detection based on depth map from kinect . In International Symposium on
Robotics (ISR2013).
[159] Zhong Y. (2009). Intrinsic Shape Signatures: A Shape Descriptor for 3D Object
Recognition. In 2009 IEEE 12th International Conference on Computer Vision
Workshops (ICCV Workshops).
[160] Zhou X. (2012). A Study of Microsoft Kinect Calibration. Technical report,
Dept. of Computer Science, George Mason University.
[161] Zollner M., Huber S., Jetter H.C., and Reiterer H. (2011). NAVI – a proof-of-
concept of a mobile navigational aid for visually impaired based on the Microsoft
Kinect. In IFIP Conference on Human-Computer Interaction, pp. 584–587.
PUBLICATIONS OF DISSERTATION
[1] Van-Hung Le, Hai Vu, Thuy Thi Nguyen, Thi Lan Le, and Thanh Hai Tran
(2015). Table plane detection using geometrical constraints on depth image, The 8th
Vietnamese Conference on Fundamental and Applied IT Research, FAIR, Hanoi,
Vietnam, ISBN: 978-604-913-397-8, pp. 647-657.
[2] Van-Hung Le, Hai Vu, Thuy Thi Nguyen, Thi-Lan Le, Thi-Thanh-Hai Tran,
Michiel Vlaminck, Wilfried Philips and Peter Veelaert. (2015). 3D Object Finding
Using Geometrical Constraints on Depth Images, The 7th International Conference
on Knowledge and Systems Engineering, Ho Chi Minh City, Vietnam, ISBN 978-1-4673-
8013-3, pp.389-395.
[3] Van-Hung Le, Thi-Lan Le, Hai Vu, Thuy Thi Nguyen, Thanh-Hai Tran,
Tran-Chung Dao and Hong-Quan Nguyen (2016), Geometry-based 3-D Object Fitting
and Localization in Grasping Aid for Visually Impaired People, The 6th International
Conference on Communications and Electronics (IEEE-ICCE), HaLong, Vietnam,
ISBN: 978-1-5090-1802-4, pp.597-603.
[4] Van-Hung Le, Michiel Vlaminck, Hai Vu, Thuy Thi Nguyen, Thi-Lan Le,
Thanh-Hai Tran, Quang-Hiep Luong, Peter Veelaert and Wilfried Philips (2016),
Real-time table plane detection using accelerometer and organized point cloud data
from Kinect sensor, Journal of Computer Science and Cybernetics, Vol. 32, N.3,
ISSN: 1813-9663, pp. 243-258.
[5] Van-Hung Le, Hai Vu, Thuy Thi Nguyen, Thi-Lan Le, Thanh-Hai Tran (2017),
Fitting Spherical Objects in 3-D Point Cloud Using the Geometrical constraints.
Journal of Science and Technology, Section in Information Technology and Commu-
nications, Number 11, 12/2017, ISSN: 1859-0209, pp. 5-17.
[6] Van-Hung Le, Hai Vu, Thuy Thi Nguyen, Thi-Lan Le, Thanh-Hai Tran (2018),
Acquiring qualied samples for RANSAC using geometrical constraints, Pattern
Recognition Letters, Vol. 102, ISSN: 0167-8655, pp. 58-66, (ISI).
[7] Van-Hung Le, Hai Vu, Thuy Thi Nguyen (2018), A Comparative Study on Detec-
tion and Estimation of a 3-D Object Model in a Complex Scene, 10th International
Conference on Knowledge and Systems Engineering (KSE 2018), pp. 203-208.
[8] Van-Hung Le, Hai Vu, Thuy Thi Nguyen, Thi-Lan Le, Thanh-Hai Tran (2018),
GCSAC: geometrical constraint sample consensus for primitive shapes estimation in
3D point cloud, International Journal of Computational Vision and Robotics, Accepted
(SCOPUS).
[9] Van-Hung Le, Hai Vu, Thuy Thi Nguyen (2018), A Framework assisting the
Visually Impaired People: Common Object Detection and Pose Estimation in Sur-
rounding Environment, 5th NAFOSTED Conference on Information and Computer
Science (NICS 2018), pp. 218-223.
[10] Hai Vu, Van-Hung Le, Thuy Thi Nguyen, Thi-Lan Le, Thanh-Hai Tran (2019),
Fitting Cylindrical Objects in 3-D Point Cloud Using the Context and Geometri-
cal constraints, Journal of Information Science and Engineering, ISSN: 1016-2364,
Vol.35, N1, (ISI).
