The Environmental Protection Administration of Taiwan’s Executive Yuan has set up many air quality monitoring stations to monitor air pollution in the environment. The current weather forecast also includes information used to predict air pollution. Since air quality indicators have a considerable impact on people, the development of a simple, fast, and low-cost method to measure the air quality index (AQI) is a worthy research topic. In this study, a method was proposed to estimate the AQI from images. Visibility has a clear inverse relationship with AQI: when images and AQI values were compared, it was easy to see that visibility decreased as the AQI value increased. Distance is the main factor affecting visibility, so measuring visibility from images has also become a research topic. Images with high and low PM2.5 concentrations were used to select regions of interest (RoIs). The pixels in each RoI were processed to obtain high-frequency information. The high-frequency information of the RoIs, the relative humidity (RH), and the true AQI were used to train a support vector regression (SVR) model for AQI estimation. One year of experimental samples was collected for the experiment, and two indices were used to evaluate the performance of the proposed method. The results showed that the proposed method can estimate the AQI with acceptable performance in a simple, fast, and low-cost way.
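The SVR-based estimation step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature values, the RBF kernel, and the `C` setting are all hypothetical choices, and the inputs stand in for the RoI high-frequency energy and relative humidity features.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical training samples: each row is [high-frequency energy of the
# RoI, relative humidity (%)]; targets are the ground-truth AQI values.
X = np.array([[0.42, 65.0], [0.31, 70.0], [0.18, 80.0], [0.55, 60.0]])
y = np.array([35.0, 60.0, 110.0, 25.0])

# Standardize the features, then fit an RBF-kernel SVR (illustrative settings).
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, y)

# Estimate the AQI for a new image's features.
aqi_pred = model.predict([[0.30, 72.0]])
```

In practice the model would be trained on the full year of samples and evaluated with the two performance indices mentioned in the abstract.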
Images captured in a hazy environment usually suffer from bad visibility and missing information. Over the years, both learning-based and handcrafted prior-based dehazing algorithms have been extensively developed. However, both families of algorithms exhibit some weaknesses in haze removal performance. Therefore, in this work, we propose the patch-map-based hybrid learning DehazeNet, which integrates these two strategies through a hybrid learning technique involving a patch map and a bi-attentive generative adversarial network. In this method, the reasons limiting the performance of the dark channel prior (DCP) are analyzed, and a new feature called the patch map is defined for selecting the patch size adaptively. Using this map, the limitations of the DCP (e.g., color distortion and failure to recover images containing white scenes) can be addressed efficiently. In addition, to further enhance haze removal performance, a patch-map-based DCP is embedded into the network, and this module is trained simultaneously with the atmospheric light generator, the patch map selection module, and the refinement module. This combination of traditional and learning-based methods efficiently improves the haze removal performance of the network. Experimental results show that the proposed method achieves better reconstruction results than other state-of-the-art haze removal algorithms.
Fitness is important in people’s lives. Good fitness habits can improve cardiopulmonary capacity, increase concentration, prevent obesity, and effectively reduce the risk of death. Home fitness does not require large equipment; dumbbells, yoga mats, and horizontal bars suffice to complete fitness exercises, and it effectively avoids contact with other people, so it is deeply loved by many. People who work out at home use social media to obtain fitness knowledge, but what they can learn this way is limited. Incorrectly performed exercises are likely to lead to injury, so a cheap, timely, and accurate fitness detection system can reduce the risk of fitness injuries and effectively improve people’s fitness awareness. Many past studies have addressed the detection of fitness movements, among which methods based on wearable devices, body nodes, and image deep learning have achieved good performance. However, a wearable device cannot detect a variety of fitness movements, may hinder the user’s exercise, and has a high cost. Body-node-based and image-deep-learning-based methods both have lower costs, but each has some drawbacks. Therefore, this paper used a method based on deep transfer learning to establish a fitness database, after which a deep neural network was trained to detect the type and completeness of fitness movements. We used YOLOv4 and MediaPipe to detect fitness movements in real time and stored the 1D fitness signal of each movement to build the database. Finally, a multilayer perceptron (MLP) was used to classify the 1D fitness signal waveforms. For the classification of fitness movement types, the mAP was 99.71%, accuracy was 98.56%, precision was 97.9%, recall was 98.56%, and the F1-score was 98.23%, which is quite high performance. For the classification of fitness movement completeness, accuracy was 92.84%, precision was 92.85%, recall was 92.84%, and the F1-score was 92.83%. The average FPS during detection was 17.5.
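The final classification stage, an MLP over 1D movement signals, can be sketched roughly as below. The synthetic waveforms, the two made-up movement classes, and the network size are all illustrative assumptions; the paper's actual signals come from YOLOv4/MediaPipe pose detection, which is not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical 1D fitness signals: each sample is a 50-step waveform (e.g., a
# joint-angle trace over one repetition). Class 0 and class 1 stand in for two
# movement types; the sinusoid shapes are purely synthetic stand-ins.
t = np.linspace(0, 2 * np.pi, 50)
class0 = np.sin(t) + 0.05 * rng.standard_normal((100, 50))
class1 = np.sin(2 * t) + 0.05 * rng.standard_normal((100, 50))
X = np.vstack([class0, class1])
y = np.array([0] * 100 + [1] * 100)

# A small MLP classifies the raw 1D waveform directly.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)
acc = clf.score(X, y)  # training accuracy on the synthetic signals
```

A real pipeline would hold out a test split and report the precision/recall/F1 metrics quoted in the abstract rather than training accuracy.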
Experimental results show that our method achieves higher accuracy than other methods.
A multipath output-capacitor-less low-dropout (OCL-LDO) regulator with feedforward path compensation is presented to achieve low power consumption and fast transient response. The proposed OCL-LDO regulator does not require an output capacitor and remains stable under the no-load (below 100 nA) condition. It has been implemented and fabricated in a 0.18 µm CMOS process and occupies an active area of 0.0128 mm². The circuit consumes a quiescent current of 0.6 µA at no load and 6.9 µA at the maximum load current, regulating the output at 1 V from a 1.2 V supply. It achieves full-range stability from 100 nA to 100 mA, and the maximum tolerable parasitic capacitance at the output is 100 pF. Measurement results show that when the load current rises from 0 to 100 mA in 100 ns, the undershoot voltage is 388 mV and the settling time is 2.2 µs.
Background: Mesenchymal chondrosarcoma is a rare but aggressive subtype of sarcoma. Most cases involve the axial skeleton. Treatment modalities include radical surgery, local radiotherapy, and systemic chemotherapy. However, the long-term survival outcome remains poor.
Case presentation: We present the case of a 33-year-old male with a palpable chest wall mass of one year’s duration, diagnosed as mesenchymal chondrosarcoma after surgical removal. Later, he developed an unusual pancreatic tail tumor as the first presentation of disease metastasis, which was proven by surgical resection one year later.
Conclusion: Although mesenchymal chondrosarcoma arises mainly in the axial skeleton, extra-skeletal soft tissue or organ involvement may occasionally be seen. Active surveillance with multidisciplinary team management could significantly prolong survival.
In image processing, smoke may degrade visibility and deteriorate the performance of high-level vision applications. Therefore, single image smoke removal is crucial for computer vision. Currently, existing smoke removal algorithms mainly leverage handcrafted priors. Moreover, these methods usually apply haze removal techniques to perform smoke removal because of the similarity between smoke and haze. However, they cannot sufficiently address the degradation caused by thick smoke and may suffer from residual smoke and color distortion due to the non-global and non-homogeneous distribution of smoke. In this paper, to solve these problems, an end-to-end deep neural network called DesmokeNet is proposed. We construct a two-stage recovery pipeline to remove smoke of different thicknesses. Light and thick smoke is first removed locally by the smoke removal network (SRN); the missing pixels in thick-smoke regions are then recovered by the pixel compensation network (PCN). Moreover, we propose a thickness-aware pixel loss and a dark channel loss to suppress residual smoke. To further increase the discriminative ability of DesmokeNet, we propose a self-attentive feature consensus loss and a multi-level contrastive regularization loss to improve smoke removal performance. Finally, to train the proposed method, we construct the first large-scale smoke removal dataset containing both synthetic and real-world data. Extensive experiments show that the proposed method performs favorably against other state-of-the-art methods both quantitatively and qualitatively.
In this paper, we propose a novel haze removal algorithm based on a new feature called the patch map. Conventional patch-based haze removal algorithms (e.g., the dark channel prior) usually perform dehazing with a fixed patch size. However, this may produce several problems in the recovered results, such as oversaturation and color distortion. Therefore, we designed an adaptive and automatic patch size selection model called the Patch Map Selection Network (PMS-Net) to select the patch size corresponding to each pixel. This network is based on a convolutional neural network (CNN) and generates the patch map directly from the input image in an image-to-image manner. Experimental results on both synthesized and real-world hazy images show that, with the proposed PMS-Net, haze removal performance is much better than that of other state-of-the-art algorithms, and the problems caused by a fixed patch size can be addressed.
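The fixed-patch-size baseline that PMS-Net improves on, the dark channel prior, is simple to state: take the per-pixel minimum over the color channels, then a local minimum filter over a square patch. The sketch below is a plain-NumPy illustration of that standard operation (not the paper's adaptive patch-map version); the brute-force loop is for clarity, not speed.

```python
import numpy as np

def dark_channel(img, patch_size=7):
    """Dark channel of an H x W x 3 image: per-pixel channel minimum followed
    by a local minimum filter over a patch_size x patch_size window."""
    min_rgb = img.min(axis=2)                  # per-pixel minimum over RGB
    pad = patch_size // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            # minimum over the local patch centered at (i, j)
            out[i, j] = padded[i:i + patch_size, j:j + patch_size].min()
    return out

# On haze-free outdoor images the dark channel tends toward 0; here we force
# the red channel to be dark, so the dark channel is small everywhere.
img = np.random.rand(32, 32, 3)
img[..., 0] *= 0.05
dc = dark_channel(img, patch_size=7)
```

With a fixed `patch_size`, bright or white scene regions violate the prior, which is exactly the failure mode the per-pixel patch map is designed to avoid.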
Photographs taken through a glass window are susceptible to disturbances due to reflection. Therefore, single image reflection removal is crucial to image quality enhancement. In this paper, a novel ...learning architecture that can address this ill-posed problem is proposed. First, a novel reflection removal pipeline was designed to reconstruct the missing information caused by the camera imaging process using the proposed missing recovery network. Second, to address the issues in existing reflection removal strategies, we revisit several auxiliary priors and integrate them by defining an energy function. To solve the energy function, a convolutional neural network-based optimization scheme was proposed. Finally, we investigated the dark channel responses of reflection and clean images and found an interesting way to distinguish between these two types of images. We prove this property mathematically and propose a novel loss function called dark channel loss to improve performance. Experiments show that the proposed method outperforms state-of-the-art reflection removal methods both quantitatively and qualitatively.
The motivation of this paper is to address the limitations of conventional keypoint-based disparity estimation methods. Conventionally, disparity estimation is based on the local information of keypoints. However, keypoints may be distributed sparsely in smooth regions, and keypoints with identical descriptors may appear in symmetric patterns. Therefore, conventional keypoint-based disparity estimation methods may have limited performance in smooth and symmetric regions. The proposed algorithm is superpixel-based: instead of performing only keypoint matching, it applies both keypoint and semi-global information to determine the disparity. Since the local information of keypoints and the semi-global information of the superpixel are both applied, the accuracy of disparity estimation can be improved, especially for smooth and symmetric regions. Moreover, to address the non-uniform distribution of keypoints, a disparity refining mechanism based on the similarity and distance of neighboring superpixels is applied to correct the disparity of superpixels with no or few keypoints. Experiments show that the disparity map generated by the proposed algorithm has a lower matching error rate than those generated by other methods.
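The refining mechanism described above can be sketched as a weighted average over neighboring superpixels. This is an illustrative reading of the idea, not the paper's exact formulation: the Gaussian-style weights, the `sigma` parameters, and all variable names are assumptions.

```python
import numpy as np

def refine_disparity(disp, colors, centers, neighbors, weak,
                     sigma_c=10.0, sigma_d=50.0):
    """Correct the disparity of superpixels flagged as having no or few
    keypoints (`weak`) using a weighted average of their neighbors'
    disparities; weights decay with color difference and spatial distance."""
    refined = disp.astype(float).copy()
    for s in range(len(disp)):
        if not weak[s]:
            continue
        weights, values = [], []
        for n in neighbors[s]:
            dc = np.linalg.norm(colors[s] - colors[n])    # color difference
            dd = np.linalg.norm(centers[s] - centers[n])  # spatial distance
            weights.append(np.exp(-dc / sigma_c) * np.exp(-dd / sigma_d))
            values.append(refined[n])
        if weights:
            refined[s] = np.average(values, weights=weights)
    return refined

# Toy example: superpixel 1 has few keypoints and an outlier disparity (50);
# it is corrected from its two similar neighbors (disparities 10 and 12).
disp = np.array([10.0, 50.0, 12.0])
colors = np.array([[100.0, 0.0, 0.0]] * 3)        # identical mean colors
centers = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.0]])
neighbors = [[1], [0, 2], [1]]
weak = [False, True, False]
refined = refine_disparity(disp, colors, centers, neighbors, weak)
```

In the toy example both neighbors are equally similar and equally distant, so the corrected disparity is simply their mean, 11.0.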
Context-based adaptive arithmetic coding (CAAC) has high coding efficiency and is adopted by the majority of advanced compression algorithms. In this paper, five new techniques are proposed to further improve the performance of CAAC. They make the frequency table (the table used to estimate the probability distribution of the data according to past input) of CAAC converge to the true probability distribution rapidly and hence improve coding efficiency. Instead of varying only one entry of the frequency table, the proposed range-adjusting scheme adjusts the entries near the current input value together. With the proposed mutual-learning scheme, the frequency tables of the contexts highly correlated with the current context are also adjusted. The proposed increasing-adjusting-step scheme applies a greater adjusting step for more recent data. The proposed adaptive initialization scheme uses a proper model to initialize the frequency table. Moreover, a local frequency table is generated according to local information. We performed several simulations on edge-directed prediction-based lossless image compression, coefficient encoding in JPEG, bit-plane coding in JPEG 2000, and motion vector residue coding in video compression. All simulations confirm that the proposed techniques can reduce the bit rate and are beneficial for data compression.
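The range-adjusting scheme can be illustrated as follows. This is a minimal sketch of the idea only: the step size, the neighborhood radius, and the linear distance decay are illustrative parameters, not the paper's exact update rule.

```python
import numpy as np

def range_adjust(freq, symbol, step=8, radius=2):
    """Range-adjusting sketch: besides the entry of the current input value,
    nearby entries are also raised, with increments that decay with distance
    from the input value (parameters here are illustrative)."""
    decay = step // (radius + 1)
    for d in range(-radius, radius + 1):
        k = symbol + d
        if 0 <= k < len(freq):
            # larger increment at the input value, smaller for its neighbors
            freq[k] += max(step - abs(d) * decay, 1)
    return freq

freq = np.ones(16, dtype=np.int64)   # uniform initial frequency table
freq = range_adjust(freq, symbol=7)  # observe input value 7
probs = freq / freq.sum()            # probability estimate used by the coder
```

Because smooth sources tend to emit values close to recent inputs, raising the whole neighborhood makes the estimated distribution converge toward the true one faster than updating a single entry.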