To improve the user experience and the efficiency of human–computer interaction in virtual reality, this work studies virtual reality as a comprehensive technology combining advances from multiple fields, set against the background of the Internet of Things, with the goal of realizing natural and intelligent human–computer interaction. The results show that an interactive, simulated, three-dimensional environment can be formed at the display terminal through the computer's processing of information, giving users a sense of immersion. In an Internet of Things environment, research on virtual reality technology should not only address top-level design and improve security and transmission efficiency, but also promote industrial application and enhance user stickiness. Such research helps reveal the technology's development trends and the industrial landscape, and helps the relevant parties improve their R&D (research and development) capability and formulate competitive offensive and defensive strategies.
Generative artificial intelligence (AI) is a form of AI that can autonomously generate new content, such as text, images, audio, and video. Generative AI provides innovative approaches for content production in the metaverse, filling gaps in the metaverse's development. Products such as ChatGPT have the potential to enhance the search experience, reshape how information is generated and presented, and become new entry points for online traffic. This is expected to significantly impact traditional search engine products and accelerate industry innovation and upgrading. This paper presents an overview of the technologies and prospective applications of generative AI in advancing metaverse technology, and offers insights for increasing the effectiveness of generative AI in creating creative content.
This work explores the impact of Digital Twins technology on industrial manufacturing in the context of Industry 5.0. The Web of Science database is searched to summarize research on Digital Twins in Industry 5.0. First, the background and system architecture of Industry 5.0 are introduced. Then, the potential applications and key modeling technologies in Industry 5.0 are discussed. It is found that equipment is the infrastructure of industrial scenarios, and embedded intelligent upgrading of equipment is a primary condition for Digital Twins. At the same time, Digital Twins can provide automated real-time process analysis between connected machines and data sources, speeding up error detection and correction. In addition, Digital Twins can bring clear efficiency improvements and cost reductions to industrial manufacturing, and this outlook reflects its potential application value in Industry 5.0. It is hoped that this relatively systematic overview can provide a technical reference for the intelligent development of industrial manufacturing and for improving the efficiency of the entire business process in the Industrial X.0 era.
According to their application scenarios, blockchain systems are generally divided into public chains, private chains, and consortium chains. The consortium chain is a typical multi-center blockchain; because it is easier to deploy in practice, it is supported by a growing number of enterprises and governments. This paper analyzes the advantages and problems of the Practical Byzantine Fault Tolerance (PBFT) algorithm in consortium chain scenarios. To better suit consortium chains, this paper proposes a new optimized consensus algorithm based on PBFT. Targeting the shortcomings of PBFT, such as the inability of nodes to join dynamically, low consensus efficiency with many nodes, and the problem of primary node selection, the optimized algorithm adopts a hierarchical structure to increase scalability and improve consensus efficiency. Simulation results show that, compared with PBFT and RAFT, the new consensus algorithm increases data throughput while supporting more nodes, and effectively reduces the consensus delay and the number of communication rounds between nodes.
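The scalability argument above can be illustrated with a back-of-the-envelope sketch. The quorum rule below is standard PBFT (n ≥ 3f + 1, with 2f + 1 matching votes per phase); the two-layer message count is a hypothetical grouping scheme used only to show why a hierarchy cuts communication cost, not the paper's actual algorithm.

```python
def pbft_quorum(n: int) -> int:
    """PBFT tolerates f = (n - 1) // 3 Byzantine nodes; a phase
    completes once 2f + 1 matching votes are collected."""
    f = (n - 1) // 3
    return 2 * f + 1

def phase_commits(n: int, votes: int) -> bool:
    """True if a prepare/commit phase has gathered a quorum."""
    return votes >= pbft_quorum(n)

def flat_pbft_msgs(n: int) -> int:
    # prepare + commit phases: every replica broadcasts to all others,
    # giving the well-known O(n^2) message complexity
    return 2 * n * (n - 1)

def layered_pbft_msgs(n: int, groups: int) -> int:
    # hypothetical two-layer scheme: PBFT inside each group, then one
    # PBFT round among the group leaders only
    size = n // groups
    return groups * 2 * size * (size - 1) + 2 * groups * (groups - 1)
```

For n = 100 replicas, splitting into 10 groups of 10 reduces the per-round message count by roughly an order of magnitude, which is the intuition behind the hierarchical design.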
This work explores the performance of fuzzy-system-based medical image processing for predicting brain disease. The imaging mechanism of NMR (Nuclear Magnetic Resonance) and the complexity of human brain tissue cause brain MRI (Magnetic Resonance Imaging) images to present varying degrees of noise, weak boundaries, and artifacts. Hence, improvements are made to the fuzzy clustering algorithm. A brain image processing and brain disease diagnosis prediction model is designed based on improved fuzzy clustering and HPU-Net (Hybrid Pyramid U-Net Model for Brain Tumor Segmentation) to ensure the model's safety performance. Brain MRI images collected from a hospital are employed in simulation experiments to validate the performance of the proposed algorithm. Moreover, CNN (Convolutional Neural Network), RNN (Recurrent Neural Network), FCM (Fuzzy C-Means), LDCFCM (Local Density Clustering Fuzzy C-Means), and AFCM (Adaptive Fuzzy C-Means) are included in the simulation experiments for performance comparison. Results demonstrate that the proposed algorithm supports more nodes, consumes less energy, and changes more stably than the other models under the same conditions. Regarding overall network performance, the proposed algorithm completes the data transmission tasks fastest, holding at about 4.5 s on average, which is remarkably better than the other models. A further analysis of prediction performance reveals that the proposed algorithm provides the highest prediction accuracy for the Whole Tumor under the DSC (Dice Similarity Coefficient), reaching 0.936. Its Jaccard coefficient is 0.845, demonstrating superior segmentation accuracy over the other models. In summary, the proposed algorithm provides higher accuracy, a more apparent denoising effect, and the best segmentation and recognition results among the compared models while keeping energy consumption in check.
The results can provide an experimental basis for the feature recognition and predictive diagnosis of brain images.
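For reference, the baseline that the abstract improves on, standard Fuzzy C-Means, alternates two closed-form updates: recompute cluster centers from fuzzified memberships, then recompute memberships from distances to the centers. A minimal NumPy sketch of generic FCM (not the paper's improved variant) is:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Standard Fuzzy C-Means.
    X: (n, d) data; c: number of clusters; m: fuzzifier (> 1).
    Returns cluster centers (c, d) and memberships U (c, n)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0, keepdims=True)          # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m                              # fuzzified memberships
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        # pairwise point-to-center distances, (c, n); small epsilon avoids /0
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1)))        # standard membership update
        U /= U.sum(axis=0, keepdims=True)
    return centers, U
```

On well-separated data the centers converge to the cluster means; the noise sensitivity of exactly this update rule is what motivates the improved variants (LDCFCM, AFCM) compared in the experiments.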
This paper designs a smart urban environment monitoring system based on a ZigBee wireless network to collect urban environment information in real time. The system consists of a basic monitoring network and a remote receiving terminal. The basic monitoring network connects streetlights as routers and taxis as nodes. After the network is dynamically organized, each node is assigned an address as its unique identity in the network. Simulation experiments then show that the system meets the requirements and sends the collected information to the designated terminal as messages, according to the configuration. Sensors organized through a ZigBee wireless network could inspire the infrastructure construction of the smart city; with such a network, a smarter and more comfortable society can be offered to people.
When reconstructing a historical event such as a rock concert from video alone, reconstructing the faces and expressions of the musicians is obviously important. However, because of the low quality of the recorded concert video, the reconstructed appearance may be far from the real one. This paper describes a robust 3D face reconstruction application that can be applied to a video recording. The application first uses the DeblurGAN program to remove motion blur from the concert video. Then, a super-resolution program enlarges every frame of the concert video by a factor of four, making each frame clearer. Finally, 3D faces are obtained by 3D reconstruction of the processed video frames via the 3DMM_CNN program.
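The three-stage pipeline above is essentially a per-frame composition of models. The sketch below shows only the orchestration; the stage functions are hypothetical placeholders (an identity map and a nearest-neighbour upscale), where a real implementation would call DeblurGAN, the super-resolution network, and 3DMM_CNN inference in their place.

```python
import numpy as np

def deblur(frame):
    # placeholder: identity transform standing in for DeblurGAN inference
    return frame

def upscale_4x(frame):
    # nearest-neighbour stand-in for the 4x super-resolution step
    return frame.repeat(4, axis=0).repeat(4, axis=1)

def reconstruct_3d(frame):
    # placeholder: a real pipeline would return 3DMM face coefficients
    return {"shape": frame.shape}

def process_video(frames):
    """Deblur -> 4x super-resolve -> 3D face reconstruction, per frame."""
    return [reconstruct_3d(upscale_4x(deblur(f))) for f in frames]
```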
The purpose is to explore the feature recognition, diagnosis, and forecasting performance of Semi-Supervised Support Vector Machines (S3VMs) for brain image fusion Digital Twins (DTs). Since brain image datasets contain many unlabeled samples, both unlabeled and labeled data are used, and a semi-supervised Support Vector Machine (SVM) is proposed. Meanwhile, the AlexNet model is improved, and brain images in real space are mapped to virtual space using Digital Twins. Moreover, a diagnosis and prediction model for brain image fusion Digital Twins is constructed based on the semi-supervised SVM and the improved AlexNet. Magnetic Resonance Imaging (MRI) data from the brain tumor department of a hospital are collected to test the performance of the constructed model through simulation experiments. Several state-of-the-art models are included for performance comparison: Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), AlexNet, and Multi-Layer Perceptron (MLP). Results demonstrate that the proposed model provides a feature recognition and extraction accuracy of 92.52%, an improvement of at least 2.76% over the other models. Its training lasts about 100 s, and the test takes about 0.68 s. The Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) of the proposed model are 4.91 and 5.59%, respectively. Regarding the assessment indicators for brain image segmentation and fusion, the proposed model provides a 79.55% Jaccard coefficient, a 90.43% Positive Predictive Value (PPV), a 73.09% Sensitivity, and a 75.58% Dice Similarity Coefficient (DSC), remarkably better than the other models. An acceleration efficiency analysis suggests that the improved AlexNet model is suitable for processing massive brain image data, with a higher speedup indicator.
To sum up, the constructed model provides high accuracy, good acceleration efficiency, and excellent segmentation and recognition performance while maintaining low errors, which can provide an experimental basis for brain image feature recognition and digital diagnosis.
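The core semi-supervised idea, using confident predictions on unlabeled data to enlarge the training set, can be sketched as a self-training loop. The base learner below is a nearest-centroid classifier standing in for the paper's SVM, and the margin-based confidence threshold is an illustrative assumption, not the S3VM formulation itself.

```python
import numpy as np

def self_training(Xl, yl, Xu, rounds=5, conf=0.8):
    """Illustrative self-training loop: fit on labeled data, pseudo-label
    the unlabeled points the model is confident about, and refit.
    Xl: (nl, d) labeled data, yl: (nl,) labels, Xu: (nu, d) unlabeled."""
    X, y = Xl.copy(), yl.copy()
    for _ in range(rounds):
        if Xu.shape[0] == 0:
            break
        classes = np.unique(y)
        # nearest-centroid "model": one mean per class
        cents = np.array([X[y == c].mean(axis=0) for c in classes])
        d = np.linalg.norm(Xu[:, None, :] - cents[None, :, :], axis=2)
        pred = classes[d.argmin(axis=1)]
        ds = np.sort(d, axis=1)
        margin = ds[:, 1] - ds[:, 0]        # distance gap as confidence proxy
        take = margin > conf
        if not take.any():
            break
        # promote confident pseudo-labels into the labeled set
        X = np.vstack([X, Xu[take]])
        y = np.concatenate([y, pred[take]])
        Xu = Xu[~take]
    return X, y
```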
Emerging technologies for connected vehicles have become hot topics, and connected vehicle applications generally run over heterogeneous wireless networks. In such a context, user terminals face the challenge of access network selection, so a sound method for selecting the appropriate access network is quite important for connected vehicle applications. This paper jointly considers multiple decision factors to facilitate vehicle-to-infrastructure networking, adopting the energy efficiency of the networks as an important factor in the selection process. To effectively characterize users' preferences and network performance, we exploit energy efficiency, signal intensity, network cost, delay, and bandwidth to establish utility functions. These utility functions and multi-criteria utility theory are then used to construct an energy-efficient network selection approach. We propose design strategies to establish a joint multi-criteria utility function for network selection, model network selection in connected vehicle applications as a multi-constraint optimization problem, and finally present a multi-criteria access selection algorithm to solve the resulting model. Simulation results show that the proposed access network selection approach is feasible and effective.
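One way to see how the five factors combine is a weighted-utility sketch. The attribute ranges, weight values, and field names below are illustrative assumptions for demonstration, not the paper's calibrated utility functions.

```python
def norm(x, lo, hi, benefit=True):
    """Map a raw attribute to [0, 1]; cost-type attributes are inverted
    so that higher utility always means a better network."""
    u = (x - lo) / (hi - lo)
    u = min(max(u, 0.0), 1.0)
    return u if benefit else 1.0 - u

def joint_utility(net, weights):
    """Weighted multi-criteria utility over the five factors named in the
    abstract; ranges (e.g. signal in dBm, delay in ms) are assumed."""
    parts = {
        "energy_eff": norm(net["energy_eff"], 0, 10),
        "signal":     norm(net["signal"], -100, -40),
        "bandwidth":  norm(net["bandwidth"], 0, 100),
        "cost":       norm(net["cost"], 0, 10, benefit=False),
        "delay":      norm(net["delay"], 0, 200, benefit=False),
    }
    return sum(weights[k] * parts[k] for k in weights)

def select_network(nets, weights):
    """Pick the candidate access network with the highest joint utility."""
    return max(nets, key=lambda n: joint_utility(n, weights))
```

In the paper's setting this scoring step would sit inside the multi-constraint optimization, with constraints filtering out infeasible networks before the utility comparison.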
To realize collaborative resource allocation optimization for mobile edge computing (MEC) and reduce the transmission delay of edge servers, two edge server deployment schemes based on software-defined networking (SDN), the Enumeration-Based Optimal Edge Server Placement Algorithm (EOESPA) and the Ranking-Based Near-Optimal Edge Server Placement Algorithm (RNOESPA), are proposed. A performance comparison with the K-Means Clustering Algorithm (KMCA) is simulated to verify the minimum access delay of the edge servers under different conditions. After the edge servers are deployed, three collaborative resource allocation optimization algorithms, the Optimal Enumeration Service Deployment Algorithm (OESDA), the Latency-Aware Heuristic Service Deployment Algorithm (LAHSDA), and the Clustering-Enhanced Heuristic Service Deployment Algorithm (CEHSDA), are proposed, and simulation experiments verify their performance under different conditions. The results show that, as the number of deployments increases from 1 to 4, the average access delay of EOESPA can be brought down to about 1 ms, and the average access delay obtained by RNOESPA is close to the best performance obtained by EOESPA and better than that obtained by KMCA. When the number of network nodes increases to 50, the minimum average access delay obtained by RNOESPA is close to the optimal value, at about 1.42 ms. The same behavior is observed with respect to the average number of requests, the number of mobile devices, and the average access delay. Among the three collaborative resource allocation optimization algorithms, the minimum average response delay obtained by LAHSDA is close to the optimal average response delay obtained by OESDA, and both are lower than that of CEHSDA, while CEHSDA performs best in minimizing the total allocation cost.
When the number of service types increases to 8, the total service configuration cost of CEHSDA is about 0.89. It can be concluded that optimizing the deployment of the edge servers enables the collaborative optimal allocation of their resources.
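The contrast between enumeration-based optimal placement and a ranking-style near-optimal heuristic can be sketched on a node-to-site delay matrix. The function names and the greedy rule below are illustrative stand-ins, not the paper's EOESPA/RNOESPA definitions; both minimize the average access delay when each node attaches to its nearest chosen server site.

```python
from itertools import combinations

def avg_access_delay(delay, sites):
    """delay[i][j] = latency from node i to candidate site j;
    each node attaches to its nearest chosen site."""
    return sum(min(row[s] for s in sites) for row in delay) / len(delay)

def enumerate_placement(delay, k):
    """Exhaustive search over all k-site subsets: guaranteed optimal,
    but exponential in the number of candidate sites."""
    m = len(delay[0])
    return min(combinations(range(m), k),
               key=lambda s: avg_access_delay(delay, s))

def greedy_placement(delay, k):
    """Ranking-style heuristic: repeatedly add the site that most
    reduces the average delay until k sites are chosen."""
    chosen = []
    m = len(delay[0])
    for _ in range(k):
        best = min((s for s in range(m) if s not in chosen),
                   key=lambda s: avg_access_delay(delay, chosen + [s]))
        chosen.append(best)
    return chosen
```

The greedy result can never beat the enumeration optimum, but it scales polynomially, mirroring the reported trade-off in which RNOESPA stays close to EOESPA's delay at far lower cost.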