• This work proposes a primitive-oriented cloth simulation method for ground filtering of large-scale 3D meshes from airborne platforms.
• The method is not affected by a low or unbalanced density distribution of mesh vertices.
• The method does not require a coherent, noise-free 3D mesh.
• This work presents a labeled dataset for ground filtering on 3D meshes.
Airborne platforms have improved over the past decade to provide geographic information systems (GISs) with large-scale 3D geographical information. Objectification of such information, organized in meshes, is a significant challenge for 3D GISs. Ground filtering of 3D meshes is a key step in meeting this challenge; however, its accuracy is highly affected by negative blunders and unbalanced vertex density. This paper proposes a novel method for differentiating ground geometric primitives from realistic 3D meshes based on a cloth simulation filter. Within the method, the fall of a piece of cloth is simulated on a flipped 3D mesh, and the stationary shape of the cloth is taken as the fitted ground. Exploiting the spatial continuity of meshes, a collision detection scheme based on a bounding volume hierarchy is introduced, making the results independent of vertex density. Further, a collision correction based on scan lines and ray casting is proposed to make the method applicable to data with negative blunders. The method is assessed quantitatively and visually over several datasets with different vertex densities, scenes, and noise distributions. Results demonstrate that it is a robust method suitable for different landscapes and is not impacted by vertex density or noise.
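The cloth-simulation idea can be sketched in miniature. The following is a hypothetical 1-D simplification (the paper operates on full 3-D meshes with BVH-based collision detection and collision correction): terrain heights are flipped, cloth particles fall under gravity, and a rigidity constraint lets the cloth bridge over the pits that correspond to buildings or vegetation.

```python
# Hypothetical 1-D simplification of cloth-simulation ground filtering:
# flip the terrain, drop a cloth onto it, and let a rigidity constraint
# make the cloth bridge over pits that correspond to non-ground objects.
def cloth_filter_1d(heights, rigidity=0.5, iterations=200, step=0.1):
    inverted = [-h for h in heights]                  # flip the terrain
    cloth = [max(inverted) + 1.0] * len(heights)      # cloth starts above it
    for _ in range(iterations):
        # gravity: each particle falls one step, stopping at the surface
        cloth = [max(c - step, g) for c, g in zip(cloth, inverted)]
        # rigidity: pull interior particles toward their neighbours' mean
        new = cloth[:]
        for i in range(1, len(cloth) - 1):
            target = (cloth[i - 1] + cloth[i + 1]) / 2.0
            new[i] = max(cloth[i] + rigidity * (target - cloth[i]), inverted[i])
        cloth = new
    return [-c for c in cloth]                        # flip back: fitted ground

# Flat field with a 5 m building: the fitted ground under the building
# stays close to field level instead of jumping onto the roof.
terrain = [0.0] * 5 + [5.0] * 3 + [0.0] * 5
ground = cloth_filter_1d(terrain)
```

With higher rigidity the cloth bridges larger objects; with lower rigidity it sags into them, which is the same trade-off the original cloth simulation filter exposes.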
Full text
Available for:
GEOZS, IJS, IMTLJ, KILJ, KISLJ, NLZOH, NUK, OILJ, PNG, SAZU, SBCE, SBJE, UILJ, UL, UM, UPCLJ, UPUK, ZAGLJ, ZRSKP
• A 3D model encryption algorithm (3DME-SC) is proposed.
• A 2D chaotic system (2D-LAIC) with good dynamic behavior is proposed.
• Simulation experiments show that 3DME-SC exhibits good security characteristics and effectiveness.
With the birth of the metaverse, 3D models have received extensive attention, and the security of information transmission continues to be an important issue. In this paper, we propose a 3D model encryption method based on a 2D chaotic system constructed via the coupling of the logistic map and infinite collapse (2D-LAIC) and on semi-tensor product (STP) theory. In terms of Lyapunov exponents, NIST test results, bifurcation diagrams, etc., 2D-LAIC exhibits better dynamical behavior than classical chaotic systems. 2D-LAIC can generate an unpredictable keystream, which is highly suitable for cryptography. Therefore, we propose a new 3D model encryption algorithm based on 2D-LAIC, named 3DME-SC. For a 3D model of the floating-point data type, XOR and STP processing are applied to the integer part and fractional part, respectively, of the model to obtain a 3D ciphertext model. The keystream required for XOR and STP processing is generated by 2D-LAIC. The results of a detailed security analysis and a comparative experimental analysis show that 3DME-SC exhibits good performance and effectiveness.
(Code: https://github.com/Gao5211996/3D-model-encryption)
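As an illustration of the general pattern only — the paper's actual 2D-LAIC equations and its semi-tensor-product stage are not reproduced here — a hypothetical 2-D map coupling the logistic map with an infinite-collapse term sin(a/x) can drive an XOR keystream:

```python
import math

# Hypothetical 2-D coupled map in the spirit of 2D-LAIC (the paper's exact
# equations differ): one dimension iterates a logistic term perturbed by an
# infinite-collapse term sin(a/y); the other is driven back by x.
def keystream_bytes(x0, y0, n, mu=3.99, a=21.0):
    x, y = x0, y0
    out = bytearray()
    for _ in range(n):
        x = (mu * x * (1.0 - x) + math.sin(a / (y + 1e-12))) % 1.0
        y = abs(math.sin(a / (x + 1e-12)))
        out.append((int(x * 256) ^ int(y * 256)) % 256)
    return bytes(out)

def xor_cipher(data, key=(0.123, 0.456)):
    """XOR the data with the chaotic keystream; applying it twice decrypts."""
    ks = keystream_bytes(key[0], key[1], len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

plaintext = b"vertex 1.25 3.50"
ciphertext = xor_cipher(plaintext)
```

The same initial conditions regenerate the same keystream, so XOR-ing twice restores the plaintext; any change to the key produces an unrelated stream, which is the property that makes such maps attractive for cryptography.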
In the past, in vitro studies of invasion and tumor progression were performed primarily using cancer cells cultured on a flat, two-dimensional (2D) surface in a monolayer. In recent years, however, many studies have demonstrated differences in cell signaling and cell migration between 2D and 3D cell cultures. Traditional 2D monolayer cancer cell invasion models do not fully recapitulate the 3D cell-to-cell and cell-to-extracellular-matrix interactions that in vivo models can provide. Moreover, although in vivo animal models are irreplaceable for studying tumor biology and metastasis, they are costly, time-consuming, and impractical for answering preliminary questions. Thus, emergent and evolving 3D spheroid cell culture models have changed the way we study tumors and their interactions with their surrounding extracellular matrix. In the case of breast cancer, metastasis of breast cancer tumors results in high mortality rates; thus, the development of robust cell culture models that are reproducible and practical for studying breast cancer progression is important for ultimately developing preventatives for cancer metastasis. This article provides a set of protocols for generating uniform spheroids with a thin sheet of basement membrane for studying the initial invasion of mammary epithelial cells into a surrounding collagen-rich extracellular matrix. Details are provided for generating 3D spheroids with a basement membrane, polymerizing collagen I, embedding the spheroids in the 3D collagen gel, and immunostaining the spheroids for invasion studies. Published 2020. U.S. Government.
Basic Protocol 1: Growth of uniformly sized tumor spheroids with an encapsulating basement membrane
Basic Protocol 2: Polymerization and embedding of tumor spheroids in a 3D type I collagen gel
Alternate Protocol: Embedding of tumor spheroids in collagen gels using a sandwich method
Basic Protocol 3: Fixing and immunostaining of tumor spheroids embedded in 3D collagen gels
Multi-view 3D shape classification, which identifies a 3D shape based on its 2D views rendered from different viewpoints, has emerged as a promising approach to shape understanding. A key building block in these methods is cross-view feature aggregation. However, existing methods predominantly follow the “extract-then-aggregate” pipeline for view-level global feature aggregation, leaving cross-view pixel-level feature interaction under-explored. To tackle this issue, we develop a “fuse-while-extract” pipeline with a novel View-aligned Pixel-level Fusion (VPF) module to fuse cross-view pixel-level features originating from the same 3D part. We first reconstruct the 3D coordinate of each feature from the rasterization results, then match and fuse the features via spatial neighbor searching. Incorporating the proposed VPF module with a ResNet18 backbone, we build a novel view-aligned multi-view network, which performs feature extraction and cross-view fusion alternately. Extensive experiments demonstrate the effectiveness of the VPF module as well as the excellent performance of the proposed network.
• A novel “fuse-while-extract” pipeline for multi-view 3D model recognition.
• A module that fuses pixel-level features based on their receptive fields.
• A novel method that performs feature fusion at both the view level and the pixel level.
• The proposed method achieves state-of-the-art results on 3D model classification and retrieval.
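The "match and fuse via spatial neighbor searching" step can be sketched abstractly. In this hypothetical miniature, each pixel-level feature carries the 3-D point reconstructed from the rasterization results, and features whose points lie within a small radius are averaged:

```python
# Hypothetical sketch of cross-view pixel-level fusion: every feature
# carries the 3-D point it was rasterized from; features whose points fall
# within a radius (i.e. that originate from the same 3-D part) are averaged.
# O(n^2) brute force here; a real implementation would use a spatial index.
def fuse_by_neighbors(points, feats, radius=0.1):
    fused = []
    for p in points:
        group = [f for q, f in zip(points, feats)
                 if sum((a - b) ** 2 for a, b in zip(p, q)) <= radius ** 2]
        fused.append([sum(channel) / len(group) for channel in zip(*group)])
    return fused

# Two views observe the same 3-D part near the origin; the third feature
# comes from a distant part and is left untouched.
pts = [(0.0, 0.0, 0.0), (0.05, 0.0, 0.0), (5.0, 0.0, 0.0)]
fts = [[1.0, 0.0], [0.0, 1.0], [9.0, 9.0]]
fused = fuse_by_neighbors(pts, fts)
```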
Multispectral images can now be captured not only by satellite sensors but also by cameras. Hence, using a photogrammetric approach, multispectral images can be processed to generate a three-dimensional model. The main issue with multispectral images is the low visibility of image features; moreover, the reliability of tie-point extraction on multispectral images remains in doubt. Hence, this paper examines the capability of the SIFT algorithm to extract feature points from multispectral images and to generate a point cloud from the extracted feature points. This study chose a pothole as its subject. The red, red-edge, green, and near-infrared bands from the Parrot Sequoia camera were used to generate the pothole model. All captured images were processed using the structure-from-motion (SfM) with Multi-View Stereo (MVS) technique. This paper reports the feature-point extraction results and an analysis of the pothole model.
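SIFT's detection core — difference-of-Gaussian extrema across position and scale — can be illustrated in 1-D (real SIFT works in 2-D and adds orientation assignment and 128-D descriptors, all omitted from this sketch):

```python
import math

# 1-D illustration of SIFT's keypoint-detection core: build a small
# Gaussian scale space, take differences of Gaussians (DoG), and keep
# thresholded local extrema.
def gaussian_blur(signal, sigma):
    radius = int(3 * sigma)
    kernel = [math.exp(-(j * j) / (2 * sigma * sigma))
              for j in range(-radius, radius + 1)]
    norm = sum(kernel)
    # clamp indices at the borders instead of zero-padding
    return [sum(k / norm * signal[min(max(i + j, 0), len(signal) - 1)]
                for k, j in zip(kernel, range(-radius, radius + 1)))
            for i in range(len(signal))]

def dog_keypoints(signal, sigmas=(1.0, 1.6, 2.56), threshold=0.01):
    blurred = [gaussian_blur(signal, s) for s in sigmas]
    dogs = [[b2 - b1 for b1, b2 in zip(blurred[k], blurred[k + 1])]
            for k in range(len(sigmas) - 1)]
    found = set()
    for d in dogs:
        for i in range(1, len(signal) - 1):
            v = d[i]
            if abs(v) > threshold and (v > max(d[i - 1], d[i + 1])
                                       or v < min(d[i - 1], d[i + 1])):
                found.add(i)
    return sorted(found)

# A step edge between indices 9 and 10 yields DoG extrema near the edge.
sig = [0.0] * 10 + [1.0] * 10
kps = dog_keypoints(sig)
```

In an SfM/MVS pipeline such keypoints, extracted per band, become the tie points that are matched across images to recover camera poses and, ultimately, the dense point cloud.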
The management of cultural heritage leads to the creation of digital files that contain representations of cultural heritage elements. Over the last few decades, a number of technologies for creating 3D models of objects or scenes have been developed, and the number of digitisation platforms has increased. Despite the dissemination of 3D models on these platforms, the methodologies used generally appear to be undocumented, and there is a lack of standards for providing metadata describing the creation of 3D models. In this paper, a metadata schema for 3D models of archaeological objects is proposed, addressing the possible techniques used to create and process such models and considering that not all digital objects are the same, as they are produced in different ways and with different metric and chromatic accuracies. Additionally, a core of the most crucial components is suggested to alleviate the difficulty of producing exhaustive metadata representing the entire procedure.
• Proper cultural heritage (CH) management requires reliable metadata.
• There is a lack of standards for creating metadata describing captured 3D models of cultural heritage.
• Our proposal provides a schema that considers the techniques used to create 3D models.
• A core of the most crucial components in the schema is proposed.
• Through metadata, users should be able to learn the technical aspects of the capture and processing of a 3D model.
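A hypothetical record may help illustrate what such a "core" of crucial components could look like. All field names and values below are invented for illustration; the paper defines its own schema:

```python
import json

# Invented example of a "core" metadata record for a 3-D model of an
# archaeological object; field names and values are hypothetical.
record = {
    "object": {"title": "Roman oil lamp", "inventory_id": "INV-0042"},
    "capture": {"technique": "photogrammetry",
                "device": "24 Mpx DSLR", "image_count": 180},
    "processing": {"software": "SfM/MVS pipeline", "mesh_faces": 250000},
    "accuracy": {"metric_error_mm": 0.4,
                 "chromatic_reference": "colour chart"},
    "rights": {"license": "CC BY 4.0"},
}

# The suggested "core of the most crucial components", as top-level keys.
CORE_FIELDS = {"object", "capture", "processing", "accuracy", "rights"}

def has_core(rec):
    """True if every core component is present in the record."""
    return CORE_FIELDS <= rec.keys()

serialized = json.dumps(record, indent=2)  # ready for a digitisation platform
```

Keeping the core small lowers the burden on contributors while still recording how the model was captured, processed, and how accurate it is.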
3D model security protection has become increasingly essential due to its wide engineering applications. Encryption is a common method of protecting 3D models. Jansent et al. recently proposed a novel encryption method for 3D models that introduces hierarchical decryption to allow differential visual effects after decryption, in which some bits are selected to form three blocks that are then encrypted separately. However, this method does not support the adaptive combination of visual effects, potentially limiting its ability to meet diverse requirements. To this end, we propose a 3D model encryption method supporting adaptive visual effects after decryption, in which some bits of each vertex coordinate value can be flexibly extracted within a reasonable range. Meanwhile, we optimize some steps of the method accordingly and redefine the visual security levels to accurately describe how much visual information can be accessed by authorized users. Experimental results show that our method supports adaptively combining different visual effects during decryption while reducing the time cost.
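The idea of extracting a flexible number of bits from a vertex coordinate can be sketched as follows. The bit count, bit positions, and XOR key here are illustrative only, not the paper's actual construction:

```python
import struct

# Hypothetical sketch of selecting some low-order mantissa bits of a
# float32 vertex coordinate for encryption; parameters are illustrative.
def split_coord(value, n_bits=8):
    """Return (kept high bits, extracted low mantissa bits) of a float32."""
    raw = struct.unpack("<I", struct.pack("<f", value))[0]
    mask = (1 << n_bits) - 1
    return raw & ~mask, raw & mask

def merge_coord(kept, bits):
    return struct.unpack("<f", struct.pack("<I", kept | bits))[0]

x = 1.3331                                  # one vertex coordinate
kept, bits = split_coord(x)
scrambled = merge_coord(kept, bits ^ 0xA5)  # encrypt only the extracted bits
restored = merge_coord(kept, (bits ^ 0xA5) ^ 0xA5)  # XOR twice decrypts
```

Extracting only low-order mantissa bits perturbs the model slightly; extracting more (or higher-order) bits distorts it more, which is how the number of extracted bits can control the visual effect visible to a partially authorized user.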
We present a novel approach for the automatic creation of a personalized high-quality 3D face rig of an actor from just monocular video data (e.g., vintage movies). Our rig is based on three distinct layers that allow us to model the actor’s facial shape as well as capture person-specific expression characteristics at high fidelity, ranging from coarse-scale geometry to fine-scale static and transient detail on the scale of folds and wrinkles. At the heart of our approach is a parametric shape prior that encodes the plausible subspace of facial identity and expression variations. Based on this prior, a coarse-scale reconstruction is obtained by means of a novel variational fitting approach. We represent person-specific idiosyncrasies, which cannot be represented in the restricted shape and expression space, by learning a set of medium-scale corrective shapes. Fine-scale skin detail, such as wrinkles, is captured from video via shading-based refinement, and a generative detail formation model is learned. Both the medium- and fine-scale detail layers are coupled with the parametric prior by means of a novel sparse linear regression formulation. Once reconstructed, all layers of the face rig can be conveniently controlled by a small number of blendshape expression parameters, as widely used by animation artists. We show captured face rigs and their motions for several actors filmed in different monocular video formats, including legacy footage from YouTube, and demonstrate how they can be used for 3D animation and 2D video editing. Finally, we evaluate our approach qualitatively and quantitatively and compare it to related state-of-the-art methods.
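The blendshape control mentioned above reduces to a base mesh plus weighted per-vertex deltas. A generic sketch, independent of the paper's three-layer rig:

```python
# Generic blendshape evaluation: posed vertex = base vertex +
# sum over expressions of (weight * per-vertex delta).
def evaluate_blendshapes(base, deltas, weights):
    """base: list of (x, y, z); deltas: name -> list of (dx, dy, dz)."""
    out = [list(v) for v in base]
    for name, w in weights.items():
        for i, d in enumerate(deltas[name]):
            for axis in range(3):
                out[i][axis] += w * d[axis]
    return [tuple(v) for v in out]

# Two-vertex toy mesh with a single hypothetical "smile" shape.
base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
deltas = {"smile": [(0.0, 0.1, 0.0), (0.0, 0.2, 0.0)]}
posed = evaluate_blendshapes(base, deltas, {"smile": 0.5})  # half-strength
```

A small weight vector is all an animator has to edit, which is why coupling the corrective and detail layers to these parameters keeps the rig convenient to drive.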
Unsupervised 3D model analysis has attracted tremendous attention with the rapid growth of 3D model data and the cost of extensive human annotation. Many effective methods have been designed for 3D model analysis with labeled information, while few methods are devoted to unsupervised deep learning due to the difficulty of mining reliable information. In this paper, we propose a novel unsupervised deep learning method, joint local correlation and global contextual information (LCGC), for 3D model retrieval and classification, which mines a reliable triplet set and uses a triplet loss to optimize the deep neural network. Our method comprises two schemes: 1) local self-correlation information learning, which uses intra- and inter-view information to construct a view-level triplet set; and 2) global neighbor contextual information learning, which employs neighbor contextual information to explore reliable relations among 3D models and construct a model-level triplet set. These schemes ensure that the selected triplet set improves the discriminability of the learned features. Extensive evaluations on two large-scale datasets, ModelNet40 and ShapeNet55, demonstrate the effectiveness of the proposed method.
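The triplet loss used to optimize the network is standard: pull the anchor toward the positive and push it away from the negative by at least a margin. A minimal sketch with squared Euclidean distances (the margin value is illustrative):

```python
# Standard triplet loss: max(d(anchor, positive) - d(anchor, negative)
# + margin, 0) with squared Euclidean distance d.
def triplet_loss(anchor, positive, negative, margin=1.0):
    d = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    return max(d(anchor, positive) - d(anchor, negative) + margin, 0.0)

# A well-separated triplet incurs zero loss; a violating one is penalised.
ok = triplet_loss([0.0, 0.0], [0.1, 0.0], [5.0, 5.0])   # negative far away
bad = triplet_loss([0.0, 0.0], [2.0, 0.0], [1.0, 0.0])  # negative too close
```

Because the loss is zero for already-satisfied triplets, training quality hinges on mining informative triplets — exactly the problem the view-level and model-level mining schemes address.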