Understanding developmental changes in children's use of specific visual information for recognizing object categories is essential for understanding how experience shapes recognition. Research on the development of face recognition has focused on children's use of low-level information (e.g., orientation sub-bands) or high-level information. In face categorization tasks, adults also exhibit sensitivity to intermediate-complexity features that are diagnostic of the presence of a face. Do children also use intermediate-complexity features for categorizing faces and objects, and, if so, how does their sensitivity to such features change during childhood? Intermediate-complexity features bridge the gap between low- and high-level processing: they have computational benefits for object detection and segmentation, and they are known to drive neural responses in the ventral visual system. Here, we investigated the developmental trajectory of children's sensitivity to diagnostic category information in intermediate-complexity features. We presented children (5-10 years old) and adults with image fragments of faces (Experiment 1) and cars (Experiment 2) varying in their mutual information, which quantifies a fragment's diagnosticity for a specific category. Our goal was to determine whether children are sensitive to the amount of mutual information in these fragments and whether their information usage differs from adults'. We found that despite adults' better overall categorization performance, children of all ages were sensitive to fragment diagnosticity in both categories, suggesting that intermediate representations of appearance are established early in childhood. Moreover, children's use of mutual information was not limited to face fragments, suggesting that the extraction of intermediate-complexity features is not specific to faces. We discuss the implications of our findings for developmental theories of face and object recognition.
The carpet industry is no longer a small-scale village business; it has carved out a distinct space, identity, and appreciation for itself in the cosmopolitan world. As computers become increasingly ubiquitous, most industries, including the carpet industry, use them to improve quality, enhance accuracy, increase speed, and reduce costs. Unlike traditional carpet maps, many modern maps include images of human faces for hand-woven carpet tableaux. These digital images comprise millions of colors and thousands of pixels, making it practically impossible to construct and weave the carpet in the same dimensions. Many weavers currently rely on manual, experience-based methods to reduce the size and number of hues when preparing a hand-woven carpet tableau map; the outcomes are therefore suboptimal and can be improved. Moreover, most color reduction methods do not target the hand-woven carpet tableau map. To address these gaps, this research proposes a new automatic method that reduces the size of color images without compromising facial nuances, lessens the number of colors used while protecting the important areas of the image, and transforms those images into carpet tableau maps. The proposed approach takes the original color image as input, detects the face and specifies important areas, and finally outputs a carpet tableau map proportional to the given dimensions and color count. The method was implemented and evaluated in MATLAB and compared with existing approaches in terms of face detection, size reduction, and color quantization. The results show that the approach improves face-detection speed by 39% and increases the precision of the size reduction and color quantization phases. The results also confirm that when images of human faces are reduced by the proposed method to form an image suitable for tableau maps, they are nearly always perceived as more attractive than faces reduced by traditional methods.
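The color-quantization step described in this abstract can be illustrated with a generic k-means quantizer. This is only a sketch of the quantization idea under stated assumptions, not the paper's method: the face-detection and important-area-protection stages are omitted, and the function name, parameters, and toy input are all illustrative.

```python
import numpy as np

def quantize_colors(image, n_colors=8, n_iter=10, seed=0):
    """Reduce an RGB image to at most n_colors via plain k-means
    on pixel values. A generic sketch, not the paper's algorithm."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 3).astype(np.float64)
    # Initialize centroids from randomly chosen distinct pixels.
    centers = pixels[rng.choice(len(pixels), n_colors, replace=False)]
    for _ in range(n_iter):
        # Assign each pixel to its nearest centroid.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids; keep the old centre if a cluster empties.
        for k in range(n_colors):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean(axis=0)
    quantized = centers[labels].reshape(image.shape)
    return quantized.astype(np.uint8)

# A random 32x32 "image" stands in for a scanned tableau photograph.
img = np.random.default_rng(1).integers(0, 256, (32, 32, 3), dtype=np.uint8)
out = quantize_colors(img, n_colors=8)
```

A weaver-oriented variant would weight pixels inside the detected face region more heavily before clustering, which is the part the paper's protection step addresses.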
In this paper, we describe a statistical method for 3D object detection. We represent the statistics of both object appearance and "non-object" appearance using a product of histograms. Each histogram represents the joint statistics of a subset of wavelet coefficients and their position on the object. Our approach is to use many such histograms representing a wide variety of visual attributes. Using this method, we have developed the first algorithm that can reliably detect human faces with out-of-plane rotation and the first algorithm that can reliably detect passenger cars over a wide range of viewpoints.
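The product-of-histograms decision rule sketched in this abstract amounts to a naive-Bayes likelihood-ratio test: a window is labelled "object" when the product of per-attribute ratios P(feature | object)/P(feature | non-object), each estimated from a histogram, exceeds a threshold. The toy sketch below assumes hand-made histograms rather than learned wavelet-coefficient statistics; all names are illustrative.

```python
import numpy as np

def log_likelihood_ratio(feature_bins, obj_hists, bg_hists, eps=1e-9):
    """Sum of log P(bin|object)/P(bin|background) over attributes.
    Working in log space turns the product of ratios into a sum."""
    score = 0.0
    for b, h_obj, h_bg in zip(feature_bins, obj_hists, bg_hists):
        p_obj = h_obj[b] / h_obj.sum()
        p_bg = h_bg[b] / h_bg.sum()
        score += np.log((p_obj + eps) / (p_bg + eps))
    return score

# Two visual attributes, each quantized into 4 bins (toy counts).
obj_hists = [np.array([10.0, 60, 20, 10]), np.array([5.0, 5, 80, 10])]
bg_hists = [np.array([40.0, 20, 20, 20]), np.array([30.0, 30, 20, 20])]

# A window whose attributes fall in object-typical bins scores high;
# one falling in background-typical bins scores low.
object_like = log_likelihood_ratio([1, 2], obj_hists, bg_hists)
background_like = log_likelihood_ratio([0, 0], obj_hists, bg_hists)
```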
Today, face recognition research is popular owing to its potential applications, especially where privacy and security are involved. Many deep learning methods can extract complex face features. The Convolutional Neural Network (CNN), a type of Artificial Neural Network (ANN) that applies convolution to extract features from the input data, is commonly used for face and image recognition. In this work, a Region-based Fully Convolutional Network (R-FCN) framework for face detection is proposed. The R-FCN is a fully convolutional architecture with a position-sensitive pooling layer that extracts a score for the prediction of each region. This speeds up the network and shares computation across Regions of Interest (RoIs), preventing the loss of feature-map information in RoI pooling. A hybrid Grammatical Evolution (GE) and Grey Wolf Optimizer (GWO) algorithm (GE-GWO) is proposed to optimize the R-FCN structure and enhance face detection. The WIDER FACE dataset together with the Face Detection Dataset and Benchmark (FDDB) was used for evaluation. The results show that the proposed technique outperforms existing methods in precision, recall, and the ROC curve by 1.5–4.2%.
This paper presents a simple and fast recognition system robust to various facial expressions, poses, and rotations. The proposed system operates in two phases. The first phase is face detection: frontal and profile faces are detected and the face area cropped from the image with the Viola-Jones algorithm, and right-side profile faces are detected by flipping the image horizontally. The recognition phase uses the principal component analysis (eigenfaces) algorithm, comparing the test face image against models built from the created database. For training and testing, two image sets from the FEI face database were used to identify the person. The experimental results show the effectiveness and robustness of the face-detection method, which achieves a high accuracy of 96% and improves recognition performance with low execution time. Furthermore, the recognition accuracy on 35 training images is 97.143% with an average execution time of 0.323657 s, and the accuracy on 15 test images is 93.315% with an average execution time of 0.3348 s, indicating a strong and accurate facial recognition method.
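The eigenfaces recognition phase described in this abstract can be sketched in a few lines of NumPy: project mean-centered face vectors onto the top principal components and match a probe by nearest neighbour in that subspace. The data below is synthetic (random vectors standing in for 35 flattened face crops), not the FEI database, and the component count is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(size=(35, 64 * 64))  # 35 flattened "face crops"
mean_face = train.mean(axis=0)
centered = train - mean_face

# Snapshot trick: eigenvectors of the small 35x35 Gram matrix give
# the eigenfaces without decomposing a 4096x4096 covariance matrix.
gram = centered @ centered.T
vals, vecs = np.linalg.eigh(gram)
order = np.argsort(vals)[::-1][:10]               # top 10 components
eigenfaces = centered.T @ vecs[:, order]
eigenfaces /= np.linalg.norm(eigenfaces, axis=0)  # unit-norm basis

def project(x):
    """Coefficients of face vector(s) x in the eigenface subspace."""
    return (x - mean_face) @ eigenfaces

train_coeffs = project(train)

def recognize(probe):
    """Index of the nearest training face in eigenface space."""
    d = np.linalg.norm(train_coeffs - project(probe), axis=1)
    return int(d.argmin())

# A slightly noisy copy of training face 7 should match face 7.
probe = train[7] + 0.1 * rng.normal(size=train.shape[1])
idx = recognize(probe)
```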
• The comprehensive recognition accuracy of facial expression recognition is 80.17%.
• The read, write, and update times for each frame of the virtual reality interaction system are 102.6 ms and 427 ms, respectively.
• The system memory usage is 920.5 MB, and the CPU and GPU usage rates are 25.2% and 4.4%, respectively.
• The system runs smoothly: it can operate continuously for 24 h, with strong operability and low difficulty in expanding system functions, achieving the design goals.
The development of technology has intensified people's interest in virtual reality, which is now widely used in fields such as teaching and healthcare. To improve the real-time performance and operability of virtual reality interaction systems, a motion capture and virtual reality interaction system for online experimental teaching is proposed. The active appearance model is used for facial feature point localization, and the joint capture method is improved by combining threshold segmentation and forward kinematics algorithms. Experimental data confirm that, compared to the Kinect method, the improved hand joint detection method has higher joint positioning accuracy, with a hand-joint loss rate of less than 40%. The average judgment accuracies of the six facial expression tests are 82%, 81%, 79%, 78%, 80%, and 81%, respectively, and the comprehensive recognition accuracy of facial expression recognition is 80.17%. The read, write, and update times for each frame of the virtual reality interaction system are 102.6 ms and 427 ms, respectively. The system memory usage is 920.5 MB, and the CPU and GPU usage rates are 25.2% and 4.4%, respectively. The system runs smoothly and can operate continuously for 24 h, with strong operability and low difficulty in expanding system functions, achieving the design goals.
Lip-reading technology captures what a speaker says by analyzing the characteristics of mouth movement. It has wide application prospects in daily life, security, and other fields. Training a lip-reading model relies on a large amount of data, and constructing the lip-reading dataset is the first step; the quality of the dataset greatly affects the whole lip-reading system. This paper therefore carries out research on the construction of a lip-reading dataset. First, frames are extracted from the original videos using Scikit-Video. Then face detection is performed with dlib, and lip images are obtained by cropping around the detected lip feature points. Finally, data augmentation is performed to enlarge the dataset. The resulting dataset has 33 speakers, each with 7,000 lip images.
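The lip-cropping step described above reduces to taking a padded bounding box around the mouth landmarks; in dlib's standard 68-point model these are points 48-67. The sketch below shows only that bounding-box arithmetic, with synthetic landmark coordinates standing in for the predictor's output; the function name, margin, and frame size are illustrative assumptions.

```python
def lip_bbox(landmarks, margin=10, width=640, height=480):
    """Padded bounding box (x0, y0, x1, y1) around mouth landmarks.

    `landmarks` is a list of 68 (x, y) points in the layout of dlib's
    68-point shape predictor, where indices 48-67 are the mouth.
    """
    xs = [x for x, _ in landmarks[48:68]]
    ys = [y for _, y in landmarks[48:68]]
    x0 = max(min(xs) - margin, 0)       # clamp padding to the frame
    y0 = max(min(ys) - margin, 0)
    x1 = min(max(xs) + margin, width)
    y1 = min(max(ys) + margin, height)
    return x0, y0, x1, y1

# 68 synthetic landmarks; the mouth points cluster around (300, 350).
pts = [(0, 0)] * 48 + [(300 + (i % 5) * 8, 350 + (i // 5) * 6)
                       for i in range(20)]
box = lip_bbox(pts)
```

In the real pipeline the resulting box indexes into the video frame (`frame[y0:y1, x0:x1]`) to produce the lip image saved to the dataset.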
With the advancement of deep learning technology, the importance of utilizing deep learning for livestock management is becoming increasingly evident. Goat face detection provides a foundation for goat recognition and management. In this study, we propose a novel neural network specifically designed for goat face detection, addressing challenges such as low image resolution, small goat face targets, and indistinct features. By incorporating contextual information and feature-fusion complementation, our approach was compared with existing object detection networks using F1-score (F1), precision (P), recall (R), and average precision (AP) as evaluation metrics. Our results show improvements of 8.07% in AP, 0.06 in P, and 6.8% in R. The findings confirm that the proposed network effectively mitigates the impact of small targets in goat face detection, providing a solid basis for the development of intelligent management systems for modern livestock farms.
There are many security devices, such as PIN codes, dual-control procedures, and ID cards. However, these can be lost, stolen, or duplicated, so authenticating the driver by face is a potential solution. In the proposed system, the person's face is detected in real time using a camera. The detected face is then processed and recognized by the system, and the recognition result is used as input to an Arduino connected to an automotive relay that activates the vehicle's engine starter. The Viola-Jones method is applied to detect and crop the face area from the image. The Canny edge method is applied to segment the detected face; it detects a wide range of edges in the face while also reducing image noise. The Fast Fourier Transform is applied as a feature extraction technique on the segmented face image, and the extracted features are fed to an Artificial Neural Network (ANN) to recognize the face of the authorized person. Experimental results show training and testing accuracies of 100% and 100%, respectively.
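The FFT feature-extraction stage of such a pipeline can be sketched as follows: take the 2-D FFT magnitude of the (segmented) grayscale face crop and keep a small low-frequency block as a compact feature vector for the ANN. Keeping an 8x8 corner is an illustrative choice, not the paper's exact parameterization, and the random array merely stands in for a real face crop.

```python
import numpy as np

def fft_features(face, keep=8):
    """Low-frequency 2-D FFT magnitudes of a grayscale face crop.

    Before fftshift, the low frequencies sit in the top-left corner
    of the spectrum, so slicing [:keep, :keep] retains them. The
    magnitude discards phase, giving a compact, shift-tolerant vector.
    """
    spectrum = np.abs(np.fft.fft2(face))
    return spectrum[:keep, :keep].flatten()

rng = np.random.default_rng(0)
face = rng.random((64, 64))      # stand-in for a segmented face crop
feat = fft_features(face)        # 64-dimensional feature vector
```

These vectors would then form the training inputs of the ANN classifier; the segmentation and Arduino stages are outside this sketch.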
"Unconstrained Human Face Tracking in Live Video" — B B, Ramakrishna; Kumari, M. Sharmila. International Journal of Recent Technology and Engineering, vol. 8, issue 3, September 2019. Journal article, open access.
In surveillance applications, visual face detection and tracking is an essential task. Many algorithms and technologies have been developed to automatically monitor pedestrians or other moving objects and to track detected faces. One main difficulty in face tracking, among many others, is choosing suitable features and models for detecting and tracking the target. Common features for face tracking include color, intensity, shape, and feature points. In this paper we discuss mean-shift face tracking based on color, optical-flow tracking based on intensity and motion, and SIFT face tracking based on scale-invariant local feature points; mean shift is then combined with local feature points. Initial experimental results show that the implemented method is able to track a target face under pose variation, rotation, partial occlusion, and deformation.
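The core mean-shift iteration behind the color-based tracker discussed above can be sketched directly: given a back-projection map (per-pixel likelihood that the pixel matches the target's color model), repeatedly move a fixed-size window to the weighted centroid of the weights it covers until it stops moving. The Gaussian blob below is a synthetic stand-in for a real back-projection, and the window size and iteration count are illustrative.

```python
import numpy as np

def mean_shift(weights, x, y, half=8, n_iter=20):
    """Shift a (2*half+1)-square window from (x, y) to the local mode
    of `weights` by iterating toward the weighted centroid."""
    h, w = weights.shape
    for _ in range(n_iter):
        x0, x1 = max(x - half, 0), min(x + half + 1, w)
        y0, y1 = max(y - half, 0), min(y + half + 1, h)
        win = weights[y0:y1, x0:x1]
        if win.sum() == 0:
            break
        wy, wx = np.mgrid[y0:y1, x0:x1]
        # Move the window center to the weighted centroid.
        nx = int(round((wx * win).sum() / win.sum()))
        ny = int(round((wy * win).sum() / win.sum()))
        if (nx, ny) == (x, y):   # converged
            break
        x, y = nx, ny
    return x, y

# Synthetic back-projection: a Gaussian blob ("face") at (40, 25).
ys, xs = np.mgrid[0:60, 0:80]
blob = np.exp(-((xs - 40) ** 2 + (ys - 25) ** 2) / 50.0)
cx, cy = mean_shift(blob, 20, 10)   # start the window off-target
```

The combined tracker in the paper would re-seed or correct this color-driven window using SIFT feature-point matches when color alone is ambiguous.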