In recent years, extensive effort has been devoted to developing better-performing 3-D object retrieval methods. View-based methods have attracted significant attention, not only because of their state-of-the-art performance but also because they require only a set of 2-D view images of a 3-D object. However, most recent approaches operate directly on the primitive features extracted from the images and ignore the latent relationships among them. To exploit these latent characteristics, this paper introduces a 3-D object retrieval approach based on a visual topic model. In this framework, dense scale-invariant feature transform (dense-SIFT) descriptors are extracted from a set of views of each 3-D object, and all dense-SIFT descriptors are quantized into bag-of-words features using k-means clustering. The topic distribution of each 3-D object is then generated from its bag-of-words features via latent Dirichlet allocation (LDA), with Gibbs sampling applied in both the learning and inference stages of LDA. We conduct experiments on the Princeton Shape Benchmark (PSB) and the National Taiwan University 3-D model database (NTU), and the experimental results demonstrate that the proposed method achieves better retrieval effectiveness than state-of-the-art methods under several standard evaluation measures.
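The pipeline described above (descriptor extraction, k-means quantization into visual words, and LDA with collapsed Gibbs sampling) can be sketched in NumPy. This is a minimal illustration under stated assumptions: toy random vectors stand in for dense-SIFT descriptors, and the vocabulary size, topic count, and Dirichlet hyperparameters are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Simple k-means, used here to quantize descriptors into visual words."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers

def lda_gibbs(docs, K, V, alpha=0.1, beta=0.01, sweeps=100):
    """Collapsed Gibbs sampler for LDA; returns per-document topic mixtures."""
    D = len(docs)
    ndk = np.zeros((D, K))   # doc-topic counts
    nkv = np.zeros((K, V))   # topic-word counts
    nk = np.zeros(K)         # topic totals
    z = []
    for d, doc in enumerate(docs):       # random topic initialization
        zd = rng.integers(K, size=len(doc))
        z.append(zd)
        for w, t in zip(doc, zd):
            ndk[d, t] += 1; nkv[t, w] += 1; nk[t] += 1
    for _ in range(sweeps):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]              # remove current assignment
                ndk[d, t] -= 1; nkv[t, w] -= 1; nk[t] -= 1
                # full conditional p(z = t | rest), up to a constant
                p = (ndk[d] + alpha) * (nkv[:, w] + beta) / (nk + V * beta)
                t = rng.choice(K, p=p / p.sum())
                z[d][i] = t              # resample and restore counts
                ndk[d, t] += 1; nkv[t, w] += 1; nk[t] += 1
    # smoothed per-object topic distribution theta
    return (ndk + alpha) / (ndk.sum(1, keepdims=True) + K * alpha)

# Toy run: 6 "objects", each with 40 random 8-D view descriptors
# standing in for dense-SIFT features pooled over an object's views.
X = rng.normal(size=(6, 40, 8))
V = 16                                   # visual-word vocabulary size
centers = kmeans(X.reshape(-1, 8), V)
docs = []
for obj in X:                            # assign each descriptor to its word
    d2 = ((obj[:, None, :] - centers[None]) ** 2).sum(-1)
    docs.append(d2.argmin(1).tolist())
theta = lda_gibbs(docs, K=4, V=V)
print(theta.shape)                       # one topic distribution per object
```

Each row of `theta` is a topic distribution for one object; retrieval would then compare these distributions (e.g. by a divergence or distance measure) to rank objects against a query.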