Deep learning (DL) techniques have been widely used in prestack three-parameter inversion to address its ill-posed nature. Among these techniques, multi-task learning (MTL) methods can train multiple tasks simultaneously, thereby enhancing model generalization and predictive performance. However, existing MTL methods typically adopt heuristic or non-heuristic approaches to jointly update the gradient of each task, leading to gradient conflicts between tasks and reducing inversion accuracy. To address this issue, we propose a semi-supervised temporal convolutional network (STCN) based on Nash equilibrium (Nash-MTL-STCN). First, temporal convolutional networks (TCNs) with non-causal convolution and convolutional neural networks (CNNs) are used as multi-task layers to extract shared features from partial angle-stack seismic data, with CNNs serving as the single-task layers. Next, shared features are extracted hierarchically in the multi-task layer, and the combination of their per-task gradients is treated as a Nash game for strategy optimization and joint updating. As a result, the overall utility of the three parameters is maximized and gradient conflicts are alleviated. In addition, to enhance the network's generalization and stability, we incorporate geophysical forward modeling and low-frequency models into the network. Experimental results demonstrate that the proposed method overcomes the gradient-conflict issue of conventional MTL methods with constant weights (CW) and achieves higher precision than four widely used non-heuristic MTL methods. Field data experiments further validate the method's effectiveness.
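The Nash-game gradient combination referred to above can be illustrated with a toy sketch. In the standard Nash-MTL formulation, the task weights alpha solve the bargaining condition (G Gᵀ)α = 1/α elementwise, where each row of G is one task's gradient. The damped fixed-point solver below is an illustrative simplification under that assumption, not the authors' implementation:

```python
import numpy as np

def nash_mtl_weights(G, iters=500, damping=0.5):
    """Approximate the Nash bargaining weights alpha solving
    (G G^T) alpha = 1 / alpha (elementwise) by damped fixed-point iteration.
    G: (num_tasks, num_params) array, one task gradient per row."""
    M = G @ G.T                      # Gram matrix of task gradients
    alpha = np.ones(G.shape[0])      # start from equal task weights
    for _ in range(iters):
        alpha = (1.0 - damping) * alpha + damping / (M @ alpha)
    return alpha

def combined_update(G, alpha):
    # joint update direction: alpha-weighted sum of per-task gradients
    return G.T @ alpha
```

With orthogonal task gradients the Gram matrix is diagonal, so each weight reduces to the inverse of its gradient norm; larger-gradient tasks are automatically downweighted, which is how the scheme counteracts one task dominating the joint update.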
Seismic impedance inversion is one of the most important parts of geophysical exploration. However, under random noise, traditional semi-supervised learning (SSL) methods lack generalization and stability. To solve this problem, some authors have proposed SSL methods with anti-noise capability to improve noise robustness and inversion accuracy. However, such methods often perform poorly under strong noise. Low-frequency impedance models can mitigate this problem, but building accurate low-frequency models is difficult and error-prone when well-log data are sparse and subsurface structures are complex. To address these issues, we propose a novel deep learning inversion method called DSIM-USSL (Unsupervised and Semi-supervised joint Learning for Seismic Inversion based on diffusion model). Specifically, we are the first to introduce a diffusion model, which is inherently suited to modeling noise, and construct a diffusion seismic inversion model (DSIM). In the reverse diffusion of DSIM, we design an encoder-decoder that combines a CNN for capturing local features with a GRU for global sequence modeling, and we choose a U-Net to learn the distribution of random noise, enhancing the generalization and stability of the proposed method. Furthermore, to further improve generalization, a two-step training approach (USSL) is adopted. First, an unsupervisedly trained encoder-decoder serves as the initial network model in place of the traditional low-frequency impedance model, which is difficult to acquire accurately. Then, SSL is employed to further optimize the encoder-decoder. Experimental results on the Marmousi2 model and field data demonstrate that DSIM-USSL achieves higher accuracy on seismic data with random noise and remains stable even under strong noise.
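The diffusion machinery the abstract relies on can be sketched with the standard DDPM forward-noising step, in which a clean trace x₀ is corrupted to x_t with a known Gaussian schedule and the network learns to predict the injected noise. This is a generic sketch of that textbook step, not the DSIM architecture itself; the variance schedule below is an assumption:

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I),
    the standard DDPM forward-noising step. Returns the noisy trace and
    the injected noise that the denoising network is trained to predict."""
    abar = np.cumprod(1.0 - betas)[t]      # cumulative signal retention
    eps = rng.standard_normal(x0.shape)    # Gaussian noise to inject
    xt = np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps
    return xt, eps
```

By the final timestep almost no signal remains, so a network trained to invert this process has, in effect, learned the distribution of the noise, which is what gives diffusion-based inversion its robustness to noisy seismic data.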
To construct a three-dimensional finite element model of the upper airway and adjacent structures of an obstructive sleep apnea-hypopnea syndrome (OSAHS) patient for biomechanical analysis, and to use this model to study the behavior of the patient's glossopharynx during titrated mandibular advancement.
DICOM-format images of the patient's upper airway were obtained by thin-section CT scanning, and digital image processing with Mimics 10.0, Imageware 10.0, and Ansys software was used to construct the three-dimensional finite element model. The biomechanical and morphological changes of the glossopharynx were observed after loading with titrated mandibular advancement.
A three-dimensional finite element model of the upper airway and adjacent structures of the OSAHS patient was established successfully. After loading, the transverse diameter of the glossopharynx at the epiglottis tip increased significantly, while the sagittal diameter decreased correspondingly. The principal
Comment generation, a new and challenging task in Natural Language Generation (NLG), has attracted considerable attention in recent years. However, comments generated by previous work tend to lack pertinence and diversity. In this paper, we propose a novel generation model based on Topic-aware Pointer-Generator Networks (TPGN), which utilizes the topic information hidden in articles to guide the generation of pertinent and diversified comments. First, we design a keyword-level and topic-level encoder attention mechanism to capture topic information in articles. Next, we integrate the topic information into pointer-generator networks to guide comment generation. Experiments on a large-scale comment generation dataset show that our model produces valuable comments and significantly outperforms competitive baseline models.
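The pointer-generator backbone referenced above mixes a generator's vocabulary distribution with a copy distribution induced by attention over the source article, P(w) = p_gen · P_vocab(w) + (1 − p_gen) · Σ_{i: src_i = w} attn_i. A minimal NumPy sketch of that standard copy mechanism (the topic-aware encoders of TPGN are omitted):

```python
import numpy as np

def pointer_generator_dist(p_vocab, attn, src_ids, p_gen):
    """Final word distribution of a pointer-generator decoder step.
    p_vocab: (V,) generator distribution over the vocabulary
    attn:    (S,) attention weights over source positions
    src_ids: (S,) vocabulary id of each source token
    p_gen:   scalar generation probability in [0, 1]"""
    p = p_gen * p_vocab
    # scatter copy probability onto the vocab ids of the source tokens;
    # np.add.at accumulates correctly when a word repeats in the source
    np.add.at(p, src_ids, (1.0 - p_gen) * attn)
    return p
```

Because both input distributions sum to one, the mixture also sums to one, and words that appear only in the article still receive probability mass even when the generator assigns them none.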
We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5 (e.g., phi-3-mini achieves 69% on MMLU and 8.38 on MT-bench), despite being small enough to be deployed on a phone. The innovation lies entirely in our dataset for training, a scaled-up version of the one used for phi-2, composed of heavily filtered web data and synthetic data. The model is also further aligned for robustness, safety, and chat format. We also provide some initial parameter-scaling results with 7B and 14B models trained for 4.8T tokens, called phi-3-small and phi-3-medium, both significantly more capable than phi-3-mini (e.g., respectively 75% and 78% on MMLU, and 8.7 and 8.9 on MT-bench).