Abstract This study focuses on designing and optimizing the Delayed-Fix-Later Awaiting Transmission Encoding (DEFLATE) algorithm to enhance its compression performance and reduce the compression time for models, specifically in the context of compressing NX three-dimensional (3D) image models. The DEFLATE algorithm, a dual-compression technique combining the LZ77 algorithm and Huffman coding, is widely employed for compressing multimedia data and 3D models. Three 3D models of varying sizes are selected as subjects for experimentation. The Wavelet algorithm, C-Bone algorithm, and DEFLATE algorithm are used for compression, with subsequent analysis of compression ratio and compression time. The experimental findings demonstrate the DEFLATE algorithm's strong performance in compressing 3D image models. Notably, when compressing small and medium-sized 3D models, the DEFLATE algorithm achieves significantly higher compression ratios than the Wavelet and C-Bone algorithms while also achieving shorter compression times. Compared to the Wavelet algorithm, the DEFLATE algorithm improves the compression performance of 3D image models by 15% and boosts data throughput by 49%. While the compression ratio of the DEFLATE algorithm for large 3D models is comparable to that of the Wavelet and C-Bone algorithms, it notably reduces the actual compression time. Furthermore, the DEFLATE algorithm enhances data transmission reliability in NX 3D image model compression by 12.1% compared to the Wavelet algorithm. The following conclusions are therefore drawn: the DEFLATE algorithm is an excellent compression algorithm for 3D image models, with significant advantages for small and medium-sized models, while remaining highly practical for large 3D models. This study offers insights for enhancing and optimizing the DEFLATE algorithm and serves as a reference for future research on 3D image model compression.
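The LZ77-plus-Huffman pipeline described above is the same DEFLATE scheme exposed by Python's standard `zlib` module. A minimal sketch measuring compression ratio and time on synthetic, mesh-like data (illustrative stand-in only; the paper's NX models are not available here):

```python
import time
import zlib

# Synthetic stand-in for a 3D model file: repetitive vertex-like records,
# which DEFLATE's LZ77 stage compresses very well.
data = b"v 1.0 2.0 3.0\n" * 10000

start = time.perf_counter()
compressed = zlib.compress(data, level=9)  # zlib wraps DEFLATE (LZ77 + Huffman coding)
elapsed = time.perf_counter() - start

ratio = len(data) / len(compressed)
assert zlib.decompress(compressed) == data  # DEFLATE is lossless: exact round trip
print(f"ratio = {ratio:.1f}x, time = {elapsed * 1000:.2f} ms")
```

Real measurements on actual 3D model files would of course vary with geometry redundancy and the chosen compression level.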
Degenerative compressive myelopathy (DCM) is caused by cervical cord compression. The relationship between the magnitude and clinical findings of cervical cord compression has been described in the literature, but the details remain unclear. This study aimed to clarify the relationship between the magnitude and clinical symptoms of cervical cord compression in community-dwelling residents.
The present study included 532 subjects. The subjective symptoms and the objective findings of one board-certified spine surgeon were assessed. The subjective symptoms were upper extremity pain and numbness, clumsy hand, a fall in the past year, and subjective gait disturbance. The objective findings were: Hoffmann, Trömner, and Wartenberg signs; Babinski's and Chaddock's signs; hyperreflexia of the patellar tendon and Achilles tendon reflexes; ankle clonus; Romberg and modified Romberg tests; grip and release test; finger escape sign; and grip strength. Using midsagittal T2-weighted magnetic resonance imaging, the anterior-posterior (AP) diameters (mm) of the spinal cord at the C2 midvertebral body level (DC2) and at each intervertebral disc level from C2/3 to C7/T1 (DC2/3-C7/T1) were measured. The spinal cord compression ratio (R) for each intervertebral disc level was defined and calculated as DC2/3-C7/T1 divided by DC2. The lowest R (LR) across C2/3 to C7/T1 of each individual was classified into three grades by the tertile method. The relationship between LR and clinical symptoms was investigated by trend analysis.
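The ratio-and-grading procedure above can be sketched in a few lines. All diameters and tertile cutoffs below are hypothetical illustration values (the study derives its cutoffs from its own cohort and does not report them here); only the formula R = D(disc level) / DC2 and the lowest-ratio (LR) selection follow the text:

```python
# Hypothetical AP-diameter measurements (mm); names mirror the study's notation.
DC2 = 8.1  # spinal cord AP diameter at the C2 mid-vertebral-body level
disc_levels = {"C2/3": 7.9, "C3/4": 7.2, "C4/5": 6.1,
               "C5/6": 5.4, "C6/7": 6.8, "C7/T1": 7.5}

# Compression ratio R per disc level, and the lowest ratio (LR) for this subject.
R = {level: d / DC2 for level, d in disc_levels.items()}
LR = min(R.values())

def grade_by_tertile(lr, cutoffs=(0.70, 0.85)):
    """Assign a severity grade from tertile cutoffs (cutoffs here are
    hypothetical; in the study they come from the cohort's LR distribution)."""
    if lr < cutoffs[0]:
        return 3  # most severe compression
    if lr < cutoffs[1]:
        return 2
    return 1

print(f"LR = {LR:.3f}, grade = {grade_by_tertile(LR)}")
```

With these made-up numbers the C5/6 level gives LR ≈ 0.667, which falls in the most-compressed tertile.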
The prevalence of subjective gait disturbance increased significantly with the severity of spinal cord compression (p = 0.002812), whereas the other clinical symptoms were not significantly related to the severity of spinal cord compression.
The magnitude of cervical cord compression showed no relationship with any of the objective neurologic findings. However, subjective gait disturbance may be a useful indicator of possible early-stage cervical cord compression.
Great advancements in commodity graphics hardware have favoured graphics processing unit (GPU)‐based volume rendering as the main adopted solution for interactive exploration of rectilinear scalar volumes on commodity platforms. Nevertheless, long data transfer times and GPU memory size limitations are often the main limiting factors, especially for massive, time‐varying or multi‐volume visualization, as well as for networked visualization on the emerging mobile devices. To address this issue, a variety of level‐of‐detail (LOD) data representations and compression techniques have been introduced. In order to improve capabilities and performance over the entire storage, distribution and rendering pipeline, the encoding/decoding process is typically highly asymmetric, and systems should ideally compress at data production time and decompress on demand at rendering time. Compression and LOD pre‐computation does not have to adhere to real‐time constraints and can be performed off‐line for high‐quality results. In contrast, adaptive real‐time rendering from compressed representations requires fast, transient and spatially independent decompression. In this report, we review the existing compressed GPU volume rendering approaches, covering sampling grid layouts, compact representation models, compression techniques, GPU rendering architectures and fast decoding techniques.
Acute Spinal Cord Compression. Ropper, Alexander E.; Ropper, Allan H. The New England Journal of Medicine, 04/2017, Volume 376, Issue 14. Journal Article.
This paper presents a novel signal compression algorithm based on the Blaschke unwinding adaptive Fourier decomposition (AFD), a newly developed signal decomposition theory. It utilizes the Nevanlinna factorization and the maximal selection principle in each decomposition step, and achieves a faster convergence rate with higher fidelity. The proposed compression algorithm is applied to the electrocardiogram signal. To assess its performance, in addition to generic assessment criteria, we consider less-discussed criteria tied to clinical needs: for the purpose of heart rate variability analysis, we evaluate how accurately the R-peak information is preserved. Experiments are conducted on the MIT-BIH arrhythmia benchmark database. The results show that the proposed algorithm performs better than other state-of-the-art approaches while also preserving the R-peak information well.
Malignant Spinal-Cord Compression. Prasad, Dheerendra; Schiff, David. The Lancet Oncology, January 2005, Volume 6, Issue 1. Journal Article. Peer-reviewed.
Malignant spinal-cord compression (MSCC) is a common complication of cancer and has a substantial negative effect on quality of life and survival. Despite widespread availability of good diagnostic technology, studies indicate that most patients are diagnosed only after they become unable to walk. We review the epidemiology, pathophysiology, and clinical features of MSCC. Clinical trials have informed the optimum management of MSCC, and we review the role of corticosteroids, radiotherapy, and surgery in the management of patients. We also emphasise advances in radiation delivery and the results of a randomised trial that supported aggressive debulking in patients with MSCC.
• Pore evolution in coal during loading was investigated based on NMR measurement.
• Characteristics of T2 distribution and pores in coals were analyzed during loading.
• NMR fractal dimensions of stress-damaged coal were measured.
In this paper, triaxial compression tests on coals with real-time T2 and MRI (magnetic resonance imaging) image measurement are performed using a nuclear magnetic resonance (NMR) testing system equipped with a loading device. Pore and fracture development in coal under stress conditions is investigated based on NMR and fractal theory. The results show that the accumulated NMR signal intensity, as well as sample porosity, decreases slightly at first, then increases slowly, and finally increases rapidly during the deformation of coal. The measured T2 distributions indicate that stress damage mainly induces the generation of mesopores and macropores (or micro-fractures). A new method for estimating crack-initiation stresses is proposed based on the evolution of the T2 curve. Using this method, we estimate the crack-initiation stresses of the tested coal samples, which are 34.6%, 32.2%, and 30.1% of the corresponding peak strengths, respectively. Seepage pores show significant fractal characteristics, while the fractal characteristics of adsorption pores are not obvious. The evolution of the fractal dimension DT of the total pores with stress is similar to the changing trend of porosity, but the fractal dimension DS shows a negative correlation with the stress.
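NMR pore fractal dimensions are commonly estimated from the slope of a log-log relation between cumulative pore volume fraction Sv and relaxation time T2, with Sv ∝ T2^(3-D) so that D = 3 − slope. A minimal sketch on synthetic data with a known dimension (the model form is an assumption of common NMR fractal analysis, not taken from this paper):

```python
import math

# Synthetic cumulative volume fraction Sv versus relaxation time T2 (ms),
# generated from Sv ∝ T2^(3 - D) with a known D = 2.6 so the fit is checkable.
D_true = 2.6
T2 = [0.1 * 1.5 ** k for k in range(20)]
T2_max = T2[-1]
Sv = [(t / T2_max) ** (3 - D_true) for t in T2]

# Least-squares slope of log(Sv) vs log(T2); in this model D = 3 - slope.
xs = [math.log(t) for t in T2]
ys = [math.log(s) for s in Sv]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
D_est = 3 - slope
print(f"estimated fractal dimension D = {D_est:.3f}")
```

On real T2 spectra the fit is typically restricted to the seepage-pore range, which is consistent with the paper's finding that only seepage pores show clear fractal behaviour.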
• Low temperature combustion modes (PCCI & RCCI) compared with CI mode.
• CI and PCCI modes used diesel; RCCI mode used a diesel-methanol fuel pair.
• RCCI mode combustion was relatively more stable than CI and PCCI modes.
• RCCI mode combustion showed higher BTE than CI and PCCI modes.
• RCCI mode combustion emitted lower NOx, and higher HC, than CI and PCCI modes.
• RCCI mode can be used in modern production-grade diesel engines.
In this study, a comparative investigation of engine combustion, performance, and emission characteristics of two low temperature combustion modes, namely premixed charge compression ignition (PCCI) and reactivity controlled compression ignition (RCCI), against the conventional compression ignition (CI) combustion mode was performed. Experiments were performed on a single-cylinder research engine at constant engine speed (1500 rpm) and at four engine loads (1, 2, 3, and 4 bar brake mean effective pressure (BMEP)). Baseline CI and PCCI mode combustion experiments used mineral diesel as the test fuel, while in RCCI mode combustion the mineral diesel-methanol fuel pair served as the high-reactivity fuel (HRF) and low-reactivity fuel (LRF), respectively. Results showed that RCCI mode combustion was relatively more stable than baseline CI and PCCI combustion modes. At higher engine loads, RCCI mode combustion exhibited relatively lower knocking and combustion noise than the other combustion modes. Performance characteristics showed that the brake thermal efficiency (BTE) of RCCI mode combustion was comparable to baseline CI mode combustion; however, at higher engine loads, RCCI mode combustion resulted in relatively higher BTE than both baseline CI and PCCI combustion modes. Significantly lower exhaust gas temperature (EGT) in RCCI mode combustion compared to baseline CI and PCCI combustion modes was another important finding of this study. Emission results showed that RCCI mode combustion emitted relatively lower oxides of nitrogen (NOx) but significantly higher hydrocarbons (HC) compared to baseline CI and PCCI combustion modes. A NOx-BTE trade-off analysis was also carried out, which demonstrated the suitability of RCCI mode combustion at all engine loads.
Finally, a parametric analysis was carried out to compare the critical parameters of baseline CI, PCCI, and RCCI combustion modes at low and high engine loads, which exhibited improved engine performance and emission characteristics of low temperature combustion (LTC) modes, especially the RCCI mode combustion.
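The BTE comparisons above rest on the standard definition: brake thermal efficiency is brake power divided by the rate of fuel energy input. A minimal sketch with illustrative numbers only (not the study's measurements), assuming a typical diesel lower heating value:

```python
def brake_thermal_efficiency(brake_power_kw, fuel_flow_kg_per_h, lhv_mj_per_kg):
    """BTE = brake power / fuel energy input rate (both in kW)."""
    fuel_power_kw = fuel_flow_kg_per_h / 3600.0 * lhv_mj_per_kg * 1000.0
    return brake_power_kw / fuel_power_kw

# Illustrative operating point (hypothetical): 3 kW brake power,
# 0.9 kg/h mineral diesel, LHV ≈ 42.5 MJ/kg.
bte = brake_thermal_efficiency(3.0, 0.9, 42.5)
print(f"BTE = {bte:.1%}")
```

For RCCI operation the fuel energy term would sum the diesel and methanol streams, each weighted by its own heating value.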
Built on deep networks, end-to-end optimized image compression has made impressive progress in the past few years. Previous studies usually adopt a compressive auto-encoder, where the encoder part first converts the image into latent features and then quantizes the features before encoding them into bits. Both the conversion and the quantization incur information loss, resulting in difficulty in optimally achieving an arbitrary compression ratio. We propose iWave++ as a new end-to-end optimized image compression scheme, in which iWave, a trained wavelet-like transform, converts images into coefficients without any information loss. The coefficients are then optionally quantized and encoded into bits. Different from previous schemes, iWave++ is versatile: a single model supports both lossless and lossy compression, and also achieves an arbitrary compression ratio by simply adjusting the quantization scale. iWave++ also features a carefully designed entropy coding engine to encode the coefficients progressively, and a de-quantization module for lossy compression. Experimental results show that lossy iWave++ achieves state-of-the-art compression efficiency compared with deep network-based methods; on the Kodak dataset, lossy iWave++ delivers 17.34 percent bit savings over BPG; lossless iWave++ achieves comparable or better performance than FLIF. Our code and models are available at https://github.com/mahaichuan/Versatile-Image-Compression .
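The "arbitrary ratio by adjusting the quantization scale" idea can be sketched with plain uniform scalar quantization (illustrative only; iWave++'s actual transform and entropy coder are learned networks, and its coefficients come from the iWave transform, not from this toy list):

```python
def quantize(coeffs, scale):
    """Uniform scalar quantization: a larger scale gives coarser indices,
    which entropy-code to fewer bits."""
    return [round(c / scale) for c in coeffs]

def dequantize(indices, scale):
    """Inverse mapping used at the decoder (the de-quantization step)."""
    return [i * scale for i in indices]

coeffs = [0.03, -1.42, 5.8, 0.49, -0.07]  # hypothetical transform coefficients
for scale in (0.1, 1.0):
    rec = dequantize(quantize(coeffs, scale), scale)
    err = max(abs(a - b) for a, b in zip(coeffs, rec))
    print(f"scale={scale}: max reconstruction error = {err:.3f}")
```

A coarser scale trades larger reconstruction error for a lower bit rate; skipping quantization entirely recovers the coefficients exactly, which is the lossless path.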
Photoacoustic microscopic images can assist specialists in disease diagnosis by providing vascular information. However, such datasets are usually extremely large (i.e., gigabytes), so a real-time, efficient compression method can facilitate storage and transfer of these images. We implemented multiple data compression methods in LabVIEW with high compression ratios and execution times shorter than the inter-pulse interval of the pulsed laser. The qualitative and quantitative results of ex vivo and in vivo imaging with compression showed near-identical images to uncompressed ones, with significantly smaller file sizes.