Melanoma is an uncommon but dangerous type of skin cancer. Dermoscopic imaging aids skilled dermatologists in detection, yet the subtle differences between melanoma and non-melanoma conditions complicate diagnosis. Early identification of melanoma is vital for successful treatment, but manual diagnosis is time-consuming and requires a trained dermatologist. To address this issue, this article proposes an Optimized Attention-Induced Multihead Convolutional Neural Network with EfficientNetV2-fostered melanoma classification using dermoscopic images (AIMCNN-ENetV2-MC). The input images are taken from a dermoscopic image dataset. An Adaptive Distorted Gaussian Matched Filter (ADGMF) is used to remove noise and maximize the quality of the skin dermoscopic images. The pre-processed images are fed to the AIMCNN-ENetV2, which classifies acral melanoma and benign nevus. The Boosted Chimp Optimization Algorithm (BCOA) optimizes the AIMCNN-ENetV2 classifier for accurate classification. The proposed AIMCNN-ENetV2-MC is implemented in Python. The proposed approach attains an outstanding overall accuracy of 98.75% and a lower computation time of 98 s compared with existing models.
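The ADGMF pre-processing step is specific to that paper; as a rough, hypothetical stand-in, a plain separable Gaussian smoothing pass sketches the kind of noise suppression applied to the dermoscopic images before classification (the function names and parameters below are illustrative, not the authors' implementation):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # 1-D Gaussian kernel, normalized to sum to 1.
    ax = np.arange(size) - size // 2
    k = np.exp(-0.5 * (ax / sigma) ** 2)
    return k / k.sum()

def denoise(image, size=5, sigma=1.0):
    # Smooth a 2-D grayscale image by convolving the Gaussian kernel
    # along rows and then columns (separable filtering), with edge
    # padding so the output keeps the input's shape.
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)
```

Because the kernel is normalized and the borders are edge-padded, flat regions pass through unchanged while pixel-level noise is averaged out.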
Lossy image compression has been gaining importance in recent years due to the enormous increase in the volume of image data employed for the Internet and other applications. In lossy compression, it is essential to ensure that the compression process does not affect the quality of the image adversely. The performance of a lossy compression algorithm is evaluated based on two conflicting parameters, namely, compression ratio and image quality, which is usually measured by PSNR values. In this paper, a new lossy compression method denoted as the PE-VQ method is proposed, which employs prediction error and vector quantization (VQ) concepts. An optimum codebook is generated using a combination of two algorithms, namely, the artificial bee colony and genetic algorithms. The performance of the proposed PE-VQ method is evaluated in terms of compression ratio (CR) and PSNR values using three different types of databases, namely, CLEF med 2009, Corel 1k and standard images (Lena, Barbara, etc.). Experiments are conducted for different codebook sizes and for different CR values. The results show that for a given CR, the proposed PE-VQ technique yields higher PSNR values than the existing algorithms. It is also shown that higher PSNR values can be obtained by applying VQ to prediction errors rather than to the original image pixels.
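The core idea of applying VQ to prediction errors can be sketched as follows. The left-neighbour predictor and the plain k-means codebook below are illustrative stand-ins only; the paper itself searches the codebook with artificial bee colony and genetic algorithms:

```python
import numpy as np

def prediction_errors(img):
    # Left-neighbour predictor: predict each pixel from the pixel to
    # its left; the first column is predicted as zero.
    pred = np.zeros_like(img, dtype=float)
    pred[:, 1:] = img[:, :-1]
    return img.astype(float) - pred

def kmeans_codebook(vectors, k=8, iters=20, seed=0):
    # Plain k-means as a stand-in for the paper's ABC/GA codebook search.
    rng = np.random.default_rng(seed)
    book = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(vectors[:, None] - book[None], axis=2)
        lab = d.argmin(axis=1)
        for j in range(k):
            if np.any(lab == j):
                book[j] = vectors[lab == j].mean(axis=0)
    return book

def vq_encode(err, book, block=4):
    # Split the error image into flat length-`block` vectors and map
    # each to the index of its nearest codeword.
    v = err.reshape(-1, block)
    d = np.linalg.norm(v[:, None] - book[None], axis=2)
    return d.argmin(axis=1)
```

Prediction errors cluster tightly around zero, so a small codebook covers them with less distortion than it would cover raw pixel blocks, which is the intuition behind the reported PSNR gain.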
The generation of a high volume of medical images in recent years has increased the demand for more efficient compression methods to cope with storage and transmission problems. In the case of medical images, it is important to ensure that the compression process does not affect the image quality adversely. In this paper, a predictive image coding method is proposed which preserves the quality of the medical image in the diagnostically important region (DIR) even after compression. In this method, the image is initially segmented into two portions, namely, the DIR and non-DIR portions, using a graph-based segmentation procedure. The prediction process is implemented using two identical feed-forward neural networks (FF-NNs) at the compression and decompression stages. Gravitational search and particle swarm algorithms are used for training the FF-NNs. Prediction is performed in both a lossless (LLP) and a near-lossless (NLLP) manner for evaluating the performances of the two FF-NN training algorithms. The prediction error sequence, which is the difference between the actual and predicted pixel values, is further compressed using Markov model-based arithmetic coding. The proposed method is tested using the CLEF med 2009 database. The experimental results demonstrate that the proposed method is capable of compressing medical images with minimum degradation in image quality. It is found that the gravitational search method achieves higher PSNR values than the particle swarm and backpropagation methods.
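The near-lossless (NLLP) mode can be sketched in miniature. A fixed left-neighbour predictor stands in for the paper's trained FF-NN; the key property shown is that uniform quantization of the prediction errors bounds the reconstruction error per pixel, because the decoder predicts from the same reconstructed values the encoder used:

```python
import numpy as np

def nllp_codec(img, tol=2):
    # Near-lossless predictive coding sketch. Errors are quantized
    # with step 2*tol+1, so the reconstruction never deviates from
    # the original by more than `tol` grey levels. Encoder and
    # decoder stay in lockstep by predicting from *reconstructed*
    # pixels, not original ones.
    h, w = img.shape
    rec = np.zeros((h, w), dtype=int)
    q = 2 * tol + 1
    for i in range(h):
        prev = 0                          # reconstructed left neighbour
        for j in range(w):
            pred = prev
            e = int(img[i, j]) - pred     # prediction error
            qe = int(np.round(e / q))     # quantized index (what is coded)
            rec[i, j] = pred + qe * q     # decoder-side reconstruction
            prev = rec[i, j]
    return rec
```

Setting `tol=0` degenerates to the lossless (LLP) case, where the quantized error equals the true error and reconstruction is exact.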
This investigation provides a methodology for surface quality measurement. In machine-vision-based inspection, an optical inspection is validated to identify defects on materials. In addition, a normalization approach is used to process homogeneous thickness, and flaws are identified and analyzed with compensation procedures. After defect identification, decision rules are defined for appropriate classification, which offers optimal performance and diminishes tuning complexity. The proposed approach is effective and fulfils the inspection requirements. Experimental outcomes validate the performance of the proposed approach in terms of recognition rate and inspection speed.
Deep learning is an effective technique used in various fields such as natural language processing, computer vision, image processing and machine vision. Deep fakes use deep learning techniques to synthesize and manipulate images of a person such that human beings cannot distinguish them from real ones. Deep fakes are generated using generative adversarial networks (GANs) and may threaten the public, so detecting deep fake image content plays a vital role. Many research works have been done on the detection of deep fakes in image manipulation, but the main issues with the existing techniques are inaccuracy and high computation time. In this work, we implement deep fake face image detection using a deep learning technique of fisherface with Local Binary Pattern Histogram (FF-LBPH). The fisherface algorithm is used to recognize the face by reducing the dimensionality of the face space using LBPH. A deep belief network (DBN) with a restricted Boltzmann machine (RBM) is then applied as the deep fake detection classifier. The public datasets used in this work are FFHQ, 100K-Faces, DFFD and CASIA-WebFace.
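The LBPH descriptor at the heart of FF-LBPH can be sketched directly. This is a minimal basic 8-neighbour LBP in plain NumPy (the paper's exact variant, neighbourhood radius, and binning are not specified here, so treat this as illustrative):

```python
import numpy as np

def lbp_histogram(img):
    # Basic 8-neighbour Local Binary Pattern: each interior pixel is
    # re-coded as an 8-bit word whose bits record whether each
    # neighbour is >= the centre pixel; the 256-bin histogram of
    # those codes is the texture descriptor.
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=int)
    for bit, (di, dj) in enumerate(offsets):
        nb = img[1 + di:img.shape[0] - 1 + di, 1 + dj:img.shape[1] - 1 + dj]
        code += (nb >= c).astype(int) << bit
    return np.bincount(code.ravel(), minlength=256)
```

Because the codes depend only on local intensity ordering, the histogram is robust to monotonic illumination changes, which is why LBPH features survive the dimensionality reduction in the fisherface step.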
In this paper, an automated system for grading the severity level of Diabetic Retinopathy (DR) disease based on fundus images is presented. Features are extracted using the fast discrete curvelet transform. These features are applied to a hierarchical support vector machine (SVM) classifier to obtain four grading levels, namely, normal, mild, moderate and severe. The grading levels are determined based on the number of anomalies, such as microaneurysms, hard exudates and haemorrhages, present in the fundus image. The performance of the proposed system is evaluated using fundus images from the Messidor database. Experimental results show that the proposed system achieves an accuracy rate of 86.23%.
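Since the grades are defined by anomaly counts, the mapping from detections to a grade can be illustrated with a simple rule; the thresholds below are hypothetical placeholders, not the cut-offs learned by the paper's hierarchical SVM:

```python
def dr_grade(microaneurysms, hard_exudates, haemorrhages):
    # Map anomaly counts to one of the four DR severity levels.
    # Threshold values (5, 15) are illustrative assumptions only.
    total = microaneurysms + hard_exudates + haemorrhages
    if total == 0:
        return "normal"
    if total <= 5:
        return "mild"
    if total <= 15:
        return "moderate"
    return "severe"
```

In the actual system, the hierarchical SVM replaces these hand-set thresholds with decision boundaries learned from curvelet features.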
Image enhancement (IE) is a process which improves the contrast of an image by sharpening the intensity of edge pixels. This technique has attracted much attention in the medical field, and several enhancement techniques have been proposed by researchers. In image processing, enhancement is regarded as a complex optimization problem. This work introduces an efficient model to solve this optimization problem using a modified optimization approach. Initially, the input medical images are denoised using a modified median filter (MMF). These denoised images are then enhanced for further processing, where the enhancement is carried out on the pixel intensities of the image. Parameters such as entropy, edge information and intensity are optimized by modified sunflower optimization (MSFO), which is used to increase the convergence speed. The overall evaluation is carried out on the Matlab platform. The image quality is analyzed using six performance metrics and compared with several approaches, providing better results. The experimentation is evaluated on five medical images; the mean square error (MSE) and peak signal-to-noise ratio (PSNR) achieved on medical image 1 are 0.02 and 43.7, respectively.
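The MSE and PSNR figures quoted above follow the standard definitions, which can be stated compactly (the `peak` default of 255 assumes 8-bit images, which the abstract does not state explicitly):

```python
import numpy as np

def mse(a, b):
    # Mean squared error between two same-shaped images.
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, peak=255.0):
    # Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE).
    # Identical images give infinite PSNR.
    m = mse(a, b)
    return np.inf if m == 0 else 10 * np.log10(peak ** 2 / m)
```

Lower MSE and higher PSNR both indicate that the enhanced image stays close to the reference, so the pair (0.02, 43.7 dB) reported for medical image 1 is internally consistent with these formulas.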
The topic of image compression and retrieval has become one of the most researched areas in recent years due to the acute demand for storage and transmission of the large volume of image data generated on the Internet and in other applications. When compressing an image, it is necessary to satisfy two conflicting requirements, namely, the compression ratio (CR) and the image quality, which is usually measured by the peak signal-to-noise ratio (PSNR). In this thesis, several lossless and lossy image compression techniques as well as an integrated image retrieval system are proposed using prediction- and wavelet-based techniques. Employing prediction errors instead of the actual image pixels for the compression and retrieval processes ensures data security. A lossless algorithm (LLA) is proposed which uses neural network predictors and entropy encoding. Classification is performed as a pre-processing step to improve the compression ratio. For this purpose, classification algorithm 1 (CL1) and classification algorithm 2 (CL2), which use wavelet-based contourlet transform coefficients and Fourier descriptors as features, are proposed. Two identical artificial neural networks (ANNs) are employed at the compression (sending) and decompression (receiving) sides to carry out the prediction. The prediction error, which is the difference between the original and predicted pixel values, is used instead of the actual image pixels. The prediction is performed in a lossless manner by rounding off the predicted values to the nearest integer at both sides.
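The lossless two-sided prediction scheme can be demonstrated end to end. A fixed average of causal neighbours stands in for the identical ANN predictors at the sending and receiving sides; what matters is that both sides apply the same rounded predictor, so transmitting only the integer errors reconstructs the image exactly:

```python
import numpy as np

def predict(left, up):
    # Stand-in for the paper's trained ANN predictor: a fixed,
    # rounded average of the causal (left and upper) neighbours,
    # identical at encoder and decoder.
    return int(round((left + up) / 2))

def ll_encode(img):
    # Emit integer prediction errors instead of pixel values.
    h, w = img.shape
    err = np.zeros((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            left = int(img[i, j - 1]) if j else 0
            up = int(img[i - 1, j]) if i else 0
            err[i, j] = int(img[i, j]) - predict(left, up)
    return err

def ll_decode(err):
    # Rebuild pixels in raster order; previously decoded pixels
    # feed the same predictor the encoder used.
    h, w = err.shape
    img = np.zeros((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            left = int(img[i, j - 1]) if j else 0
            up = int(img[i - 1, j]) if i else 0
            img[i, j] = predict(left, up) + err[i, j]
    return img
```

The error sequence carries the same information as the image but with a much narrower, zero-centred distribution, which is what makes the subsequent entropy encoding effective, and an eavesdropper without the shared predictor cannot reconstruct the pixels from the errors alone.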