Searching for relevant 3D models based on hand-drawn sketches is both intuitive and important for many applications, such as sketch-based 3D modeling and recognition, human–computer interaction, 3D animation, and game design. In this paper, our goal is to significantly improve current sketch-based 3D retrieval performance in terms of both accuracy and efficiency. We propose a new sketch-based 3D model retrieval framework that utilizes adaptive view clustering and semantic information. It first applies a proposed viewpoint entropy-based measure of 3D information complexity to guide adaptive view clustering of a 3D model, shortlisting a set of representative sample views for 2D–3D comparison. To bridge the gap between the query sketches and the target models, we then incorporate a novel semantic sketch-based search approach to further improve retrieval performance. Experimental results on several recent benchmarks demonstrate a significant improvement in retrieval performance.
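The viewpoint entropy mentioned above is commonly defined as the Shannon entropy of the projected areas of the faces visible from a given view. The sketch below follows that standard definition; the function name and the simplified normalization (ignoring the background area) are our own assumptions, not the paper's exact formulation.

```python
import math

def viewpoint_entropy(projected_areas):
    """Viewpoint entropy of one view: the Shannon entropy of the visible
    faces' projected areas, normalized by the total projected area.
    (Simplified: the background term is omitted.)"""
    total = sum(projected_areas)
    entropy = 0.0
    for a in projected_areas:
        if a > 0:
            p = a / total
            entropy -= p * math.log2(p)
    return entropy

# A view seeing many faces with similar projected areas scores higher
# (more visual information) than one dominated by a single large face.
uniform = viewpoint_entropy([1.0] * 8)   # 8 equally visible faces -> log2(8) = 3.0
skewed  = viewpoint_entropy([7.0, 1.0])  # one face dominates -> lower entropy
```

Under this measure, geometrically complex models tend to have views with higher entropy, which can then justify sampling and clustering more views for them.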
•Build a large-scale 3D shape retrieval benchmark that supports multi-modal queries.
•Evaluate 26 3D shape retrieval methods using 3 types of metrics.
•Solicit and identify state-of-the-art methods and promising related techniques.
•Perform detailed analysis of diverse methods w.r.t. accuracy and efficiency.
•Make the benchmark and evaluation tools freely available to the community.
Large-scale 3D shape retrieval has become an important research direction in content-based 3D shape retrieval. To promote this research area, we organized two Shape Retrieval Contest (SHREC) tracks in 2014, on large-scale comprehensive and sketch-based 3D model retrieval. Both tracks were based on a unified large-scale benchmark that supports multimodal queries (3D models and sketches). This benchmark contains 13,680 sketches and 8,987 3D models, divided into 171 distinct classes. It was compiled to be a superset of existing benchmarks and presents a new challenge to retrieval methods, as it comprises generic models as well as domain-specific model types. Twelve and six distinct 3D shape retrieval methods competed with each other in these two contests, respectively. To measure and compare the performance of the participating and other promising Query-by-Model or Query-by-Sketch 3D shape retrieval methods, and to solicit state-of-the-art approaches, we perform a more comprehensive comparison of twenty-six retrieval methods (eighteen originally participating algorithms and eight additional state-of-the-art or new ones) by evaluating them on the common benchmark. The benchmark, results, and evaluation tools are publicly available at our websites (http://www.itl.nist.gov/iad/vug/sharp/contest/2014/Generic3D/, 2014, http://www.itl.nist.gov/iad/vug/sharp/contest/2014/SBR/, 2014).
•Build a small-scale and a large-scale sketch-based 3D model retrieval benchmark.
•Evaluate 15 of the best sketch-based 3D model retrieval algorithms on the two benchmarks.
•Solicit and identify state-of-the-art methods and promising related techniques.
•Provide incisive analysis of diverse methods w.r.t. scalability and efficiency.
•Make the benchmarks and evaluation tools available as a reference for the community.
Sketch-based 3D shape retrieval has become an important research topic in content-based 3D object retrieval. To foster this research area, we organized two Shape Retrieval Contest (SHREC) tracks on this topic in 2012 and 2013, based on a small-scale and a large-scale benchmark, respectively. Six and five (nine in total) distinct sketch-based 3D shape retrieval methods competed with each other in these two contests, respectively. To measure and compare the performance of the top participating and other existing promising sketch-based 3D shape retrieval methods, and to solicit state-of-the-art approaches, we perform a more comprehensive comparison of the fifteen best retrieval methods (four top participating algorithms and eleven additional state-of-the-art methods) by completing the evaluation of each method on both benchmarks. The benchmarks, results, and evaluation tools for the two tracks are publicly available on our websites.
Modern computer graphics applications commonly feature very large virtual environments and diverse characters that perform different kinds of motions. To accelerate path planning in such scenarios, we propose the subregion graph data structure. It consists of subregions, which are clusters of locally connected waypoints inside a region, together with subregion connectivities. We also present a fast algorithm to automatically generate a subregion graph from an enhanced waypoint graph map representation, which supports various motion types and can be created from large virtual environments. Nevertheless, a subregion graph can be generated from any graph-based map representation. Our experiments show that a subregion graph is very compact relative to the input waypoint graph. By first planning a subregion path, and then limiting waypoint-level planning to this subregion path, an average speedup of over 8 times can be achieved, while average path-length ratios remain as low as 102.5%.
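The two-level planning scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the graph encoding, the `region_of` mapping, and the use of plain Dijkstra at both levels are our own assumptions.

```python
import heapq

def dijkstra(adj, start, goal):
    """Shortest path in a weighted graph given as {node: [(nbr, cost), ...]}."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if goal not in dist:
        return None
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

def hierarchical_plan(waypoint_adj, region_of, subregion_adj, start, goal):
    """Two-level planning: first find a path over the (much smaller)
    subregion graph, then restrict waypoint-level search to waypoints
    lying inside the subregions on that path."""
    sub_path = dijkstra(subregion_adj, region_of[start], region_of[goal])
    if sub_path is None:
        return None
    allowed = set(sub_path)
    # Prune the waypoint graph to the shortlisted subregions.
    pruned = {
        u: [(v, w) for v, w in nbrs if region_of[v] in allowed]
        for u, nbrs in waypoint_adj.items()
        if region_of[u] in allowed
    }
    return dijkstra(pruned, start, goal)
```

The speedup comes from the pruning step: the waypoint-level search only ever touches waypoints in the few subregions along the subregion path, at the cost of a slightly suboptimal path (the length ratios reported above).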
In this paper, we propose an approach to interactively author the bending and twisting motions of short plants using hand gestures, especially suitable for grass, flowers, and leaves. Our method is based on the observations that hand motions can represent the bending and twisting motions of short plants, and that using a hand to describe motions is natural and intuitive for humans. We therefore use a hand as a "puppet" to author the animation of a single short plant by transferring the motions of the hand to the motions of the plant. We first author the global motions of the short plant, followed by the motions of its elements such as leaves and flowers. We also propose a framework that utilizes the animation results to animate a field of short plants and further adjusts the motion effects according to the properties of the plants, such as rigidity. As a result, users can intuitively and rapidly author and generate their desired short-plant motions under the influence of external forces. In particular, our method is accessible to non-expert users and suitable for fast prototyping and for authoring specific short-plant motions, such as in cartoons.
Sketch-based 3D model retrieval is very important for applications such as 3D modeling and recognition. In this paper, we propose a sketch-based retrieval algorithm based on a 3D model feature named View Context and on 2D relative shape context matching. To improve the accuracy of 2D sketch–3D model correspondence as well as the retrieval performance, we align a 3D model with a query 2D sketch before measuring their distance. First, we efficiently select candidate views from a set of densely sampled views of the 3D model, aligning the sketch and the model based on their View Context similarities. Then, we compute the more accurate relative shape context distance between the sketch and every candidate view and take the minimum as the sketch–model distance. To speed up retrieval, we precompute the View Context and relative shape context features of the sample views of all 3D models in the database. Comparative and evaluative experiments on hand-drawn and standard line-drawing sketches demonstrate the effectiveness and robustness of our approach, which significantly outperforms several recent sketch-based retrieval algorithms.
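The coarse-to-fine matching scheme described above (cheap View Context ranking to shortlist candidate views, then an accurate distance minimized over those candidates) can be sketched generically. The function name, the `k` parameter, and the abstract `coarse_dist`/`fine_dist` callables are placeholders, not the paper's actual features.

```python
def sketch_model_distance(sketch_feat, view_feats, coarse_dist, fine_dist, k=5):
    """Coarse-to-fine sketch-to-model matching.

    Ranks all sample views by a cheap distance (standing in for View
    Context similarity), keeps the top-k candidate views, then returns
    the minimum of a more accurate distance (standing in for relative
    shape context matching) over those candidates."""
    ranked = sorted(view_feats, key=lambda v: coarse_dist(sketch_feat, v))
    candidates = ranked[:k]
    return min(fine_dist(sketch_feat, v) for v in candidates)
```

The point of the design is that the expensive fine distance is computed only k times per model instead of once per sampled view, which is what makes dense view sampling affordable at retrieval time.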
In many 3D applications, building models in polygon-soup representation are commonly used for visualization purposes, for example in movies and games. Their appearance is fine; geometry-wise, however, they may carry little connectivity information and may contain internal intersections between their parts. They are therefore not well suited for direct use in 3D geospatial applications, which usually require geometric analysis. For an input building model in polygon-soup representation, we propose a novel appearance-driven approach to interactively convert it into a two-manifold model, which is better suited for 3D geospatial applications. In addition, the level of detail (LOD) can be controlled interactively during the conversion. Because a polygon-soup model is not well suited for geometric analysis, the main idea of the proposed method is to extract the visual appearance of the input building model and utilize it to facilitate the conversion and LOD generation. Silhouettes are extracted and used to identify the features of the building. Then, according to the locations of these features, horizontal cross-sections are generated. We connect each pair of adjacent horizontal cross-sections to reconstruct the building. We control the LOD by processing the features on the silhouettes and horizontal cross-sections using a 2D approach. We also propose facilitating the conversion and LOD control by integrating a variety of rasterization methods. Experimental results demonstrate the effectiveness of our method.
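The step of connecting two adjacent horizontal cross-sections can be illustrated with a minimal lofting routine. This is a deliberate simplification: it assumes the two cross-sections are closed polygons with the same vertex count in matching order, whereas the paper handles general cross-section pairs.

```python
def loft(lower, upper):
    """Connect two horizontal cross-sections (closed polygons with the
    same vertex count, vertices in matching order) into side-wall
    triangles, two per edge pair, forming a closed band."""
    assert len(lower) == len(upper)
    n = len(lower)
    tris = []
    for i in range(n):
        j = (i + 1) % n  # wrap around to close the polygon
        tris.append((lower[i], lower[j], upper[i]))
        tris.append((lower[j], upper[j], upper[i]))
    return tris
```

Stacking such bands between consecutive cross-sections yields a watertight side surface, which is one way the two-manifold property can be obtained from the 2D cross-section data.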