The problem of domain generalization is to learn from multiple training domains and extract a domain-agnostic model that can then be applied to an unseen domain. Domain generalization (DG) has a clear motivation in contexts where there are target domains with distinct characteristics, yet sparse data for training. One example is recognition in sketch images, which are distinctly more abstract and rarer than photos. Nevertheless, DG methods have primarily been evaluated on photo-only benchmarks focusing on alleviating dataset bias, where both problems of domain distinctiveness and data sparsity can be minimal. We argue that these benchmarks are overly straightforward, and show that simple deep learning baselines perform surprisingly well on them. In this paper, we make two main contributions. First, we build upon the favorable domain-shift-robust properties of deep learning methods and develop a low-rank parameterized CNN model for end-to-end DG learning. Second, we develop a DG benchmark dataset covering photo, sketch, cartoon and painting domains, which is both more practically relevant and harder (bigger domain shift) than existing benchmarks. The results show that our method outperforms existing DG alternatives, and that our dataset provides a more significant DG challenge to drive future research.
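The low-rank parameterization idea can be sketched numerically as below. This is a minimal illustration only, not the paper's exact factorization: the shared-plus-low-rank additive form, all dimensions, and all variable names are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_domains, rank = 64, 32, 3, 4

# Shared (domain-agnostic) weight component, used alone at test time.
W_shared = rng.normal(size=(d_in, d_out)) * 0.1
# Low-rank domain-specific residuals: U[d] @ V[d] has rank <= `rank`,
# so each extra training domain adds only (d_in + d_out) * rank parameters.
U = rng.normal(size=(n_domains, d_in, rank)) * 0.1
V = rng.normal(size=(n_domains, rank, d_out)) * 0.1

def layer_weights(domain=None):
    """Weights for one layer: shared part plus a low-rank domain term."""
    if domain is None:              # unseen domain: fall back to shared part
        return W_shared
    return W_shared + U[domain] @ V[domain]

x = rng.normal(size=(5, d_in))
h_train = x @ layer_weights(domain=1)   # training-domain forward pass
h_test = x @ layer_weights()            # domain-agnostic forward pass
```

Because only `W_shared` is applied to an unseen domain, the domain-specific residuals can be dropped at deployment, which is the sense in which the extracted model is domain-agnostic.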
We present a conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only a few examples of each. Our method, called the Relation Network (RN), is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting. Once trained, an RN is able to classify images of new classes by computing relation scores between query images and the few examples of each new class, without further updating the network. Besides providing improved performance on few-shot learning, our framework is easily extended to zero-shot learning. Extensive experiments on five benchmarks demonstrate that our simple framework provides a unified and effective approach to both of these tasks.
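The relation-score classification step can be sketched as follows. This is a toy stand-in: in the actual RN both the embeddings and the relation module are learned convolutional networks, whereas here the embeddings and MLP weights are random placeholders chosen purely to show the data flow.

```python
import numpy as np

rng = np.random.default_rng(1)
emb_dim, n_classes, n_query = 16, 5, 3

# Placeholder embeddings; in the real RN these come from a learned CNN.
support = rng.normal(size=(n_classes, emb_dim))  # one example per class (1-shot)
queries = rng.normal(size=(n_query, emb_dim))

# Toy relation module: a tiny MLP on a concatenated (query, support) pair,
# with a sigmoid output giving a relation score in [0, 1].
W1 = rng.normal(size=(2 * emb_dim, 8)) * 0.1
W2 = rng.normal(size=(8, 1)) * 0.1

def relation_score(q, s):
    pair = np.concatenate([q, s])                     # concatenate the feature pair
    h = np.maximum(pair @ W1, 0.0)                    # ReLU hidden layer
    return float(1.0 / (1.0 + np.exp(-(h @ W2)[0])))  # sigmoid score

# Each query is assigned the class whose support example relates most strongly.
scores = np.array([[relation_score(q, s) for s in support] for q in queries])
preds = scores.argmax(axis=1)                         # shape (n_query,)
```

Note that classifying a new class needs only fresh support embeddings, not any parameter update, which is why a trained RN generalizes without fine-tuning.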
Deep convolutional neural networks have recently proven extremely effective for difficult face recognition problems in uncontrolled settings. To train such networks, very large training sets are needed, with millions of labeled images. For some applications, such as near-infrared (NIR) face recognition, such large training data sets are not publicly available and are difficult to collect. In this paper, we propose a method to generate very large training data sets of synthetic images by compositing real face images in a given data set. We show that this method makes it possible to learn models from as few as 10,000 training images that perform on par with models trained from 500,000 images. Using our approach, we also obtain state-of-the-art results on the CASIA NIR-VIS2.0 heterogeneous face recognition data set.
We propose to model complex visual scenes using a non-parametric Bayesian model learned from weakly labelled images abundant on media sharing sites such as Flickr. Given weak image-level annotations of objects and attributes without locations or associations between them, our model aims to learn the appearance of object and attribute classes as well as their association on each object instance. Once learned, given an image, our model can be deployed to tackle a number of vision problems in a joint and coherent manner, including recognising objects in the scene (automatic object annotation), describing objects using their attributes (attribute prediction and association), and localising and delineating the objects (object detection and semantic segmentation). This is achieved by developing a novel Weakly Supervised Markov Random Field Stacked Indian Buffet Process (WS-MRF-SIBP) that models objects and attributes as latent factors and explicitly captures their correlations within and across superpixels. Extensive experiments on benchmark datasets demonstrate that our weakly supervised model significantly outperforms weakly supervised alternatives and is often comparable with existing strongly supervised models on a variety of tasks including semantic segmentation, automatic image annotation and retrieval based on object-attribute associations.
A common strategy adopted by existing state-of-the-art unsupervised domain adaptation (UDA) methods is to employ two classifiers to identify misaligned local regions between the source and target domains. Following the 'wisdom of the crowd' principle, one has to ask: why stop at two? Indeed, we find that using more classifiers leads to better performance, but also introduces more model parameters, therefore risking overfitting. In this paper, we introduce a novel method called STochastic clAssifieRs (STAR) to address this problem. Instead of representing a classifier as a weight vector, STAR models it as a Gaussian distribution, with its variance representing the inter-classifier discrepancy. With STAR, we can sample an arbitrary number of classifiers from the distribution whilst keeping the model size the same as with two classifiers. Extensive experiments demonstrate that a variety of existing UDA methods can greatly benefit from STAR and achieve state-of-the-art performance on both image classification and semantic segmentation tasks.
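The stochastic-classifier idea can be sketched with the reparameterization trick. This is a schematic of the concept, not the paper's implementation: the diagonal Gaussian, the dimensions, and the variance-of-logits discrepancy used here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
feat_dim, n_classes = 32, 10

# Each classifier weight is a Gaussian: a mean plus a per-weight scale.
# Assumption: diagonal covariance, as a sketch of the STAR idea.
mu = rng.normal(size=(n_classes, feat_dim)) * 0.1
log_sigma = np.full((n_classes, feat_dim), -2.0)

def sample_classifiers(k):
    """Draw k classifier weight matrices via the reparameterization trick."""
    eps = rng.normal(size=(k, n_classes, feat_dim))
    return mu + np.exp(log_sigma) * eps      # shape (k, n_classes, feat_dim)

x = rng.normal(size=(4, feat_dim))           # a batch of features
W = sample_classifiers(k=5)                  # 5 classifiers, fixed param count
logits = np.einsum('bf,kcf->kbc', x, W)      # logits per sampled classifier
# One possible inter-classifier discrepancy: logit variance across samples.
discrepancy = logits.var(axis=0).mean()
```

The trainable parameters are just `mu` and `log_sigma` (twice the size of a single classifier), independent of how many classifiers `k` are sampled, which is the memory advantage over instantiating many explicit weight vectors.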
During the erection stage of a suspension bridge, its critical flutter wind speed varies continuously as the structural dynamic properties, including mass and stiffness, change. At the same time, the wind climate also varies greatly from season to season, or even month to month, over the course of the erection procedure. This work proposes an analytical framework to optimize the deck erection timeline under complex wind climate attacks. The Xihoumen Bridge, built on the southeastern coast of China, is taken as a calculation example. A full-bridge aeroelastic model wind tunnel test was conducted to examine the critical flutter speed at different erection stages. Extreme wind speeds are analyzed by tropical cyclone simulation for tropical cyclones and from meteorological records for synoptic winds. Results show that the optimal timeline for the Xihoumen Bridge starts in August if a one-month duration is required for each stage, whereas selecting the worst timeline increases the flutter risk by a factor of 40. When the construction schedule allows a flexible timeline, this research shows that timeline optimization is a more economical and safer approach than structural measures such as storm ropes.
•A framework to optimize the deck erection timeline under complex wind climate attacks.•The deck erection timeline plays a key role in flutter risk: a 40-fold difference in flutter risk exists between the best and worst timelines.•Construction timeline optimization is a more economical and safer approach to reducing flutter risk than structural measures such as storm ropes.
•UST treatment could improve MP solubility and reduce MP particle size.•UST treatment could effectively inhibit MP oxidation.•UST treatment could effectively reduce damage to MP structure.
The effects of air thawing (AT), water immersion thawing (WT), microwave thawing (MT) and ultrasound combined with slightly acidic electrolyzed water thawing (UST) on the myofibrillar protein (MP) properties (surface hydrophobicity, solubility, turbidity, particle size and zeta potential), protein oxidation (carbonyl content and sulfhydryl content) and structure (primary, secondary and tertiary) of frozen mutton were investigated in comparison with fresh mutton (FM). The solubility and turbidity results showed that the MP properties were significantly improved by the UST treatment. UST treatment could effectively reduce MP aggregation and enhance stability, yielding properties similar to those of FM. In addition, UST treatment could effectively inhibit protein oxidation during thawing as well. The primary structure of MP was not damaged by any of the thawing methods. UST treatment could reduce the damage to MP secondary and tertiary structure during the thawing process compared to the other thawing methods. Overall, the UST treatment had a positive effect on maintaining the MP properties by inhibiting protein oxidation and protecting protein structure.
Casein micelles (CM) play an important role in milk secretion, stability, and processing. The composition and content of milk proteins are affected by physiological factors, which have been widely investigated. However, the variation in CM proteins in goat milk throughout the lactation cycle has yet to be fully clarified. In the current study, milk samples were collected at d 1, 3, 30, 90, 150, and 240 of lactation from 15 dairy goats. The size of CM was determined using laser light scattering, and CM proteins were separated, digested, and identified using data-independent acquisition (DIA) and data-dependent acquisition (DDA)-based proteomics approaches. According to clustering and principal component analysis, protein profiles identified using DIA were similar to those identified using the DDA approach. Significant differences in the abundance of 115 proteins during the lactation cycle were identified using the DIA approach. Developmental changes in the CM proteome corresponding to lactation stages were revealed: levels of lecithin cholesterol acyltransferase, folate receptor α, and prominin 2 increased from 1 to 240 d, whereas levels of growth/differentiation factor 8, peptidoglycan-recognition protein, and 45 kDa calcium-binding protein decreased in the same period. In addition, lipoprotein lipase, glycoprotein IIIb, and α-lactalbumin levels increased from 1 to 90 d and then decreased to 240 d, which is consistent with the change in CM size. Protein–protein interaction analysis showed that fibronectin, albumin, and apolipoprotein E interacted more with other proteins at the central node. These findings indicate that changes in the CM proteome during lactation could be related to requirements of newborn development, as well as mammary gland development, and may thus contribute to elucidating the physical and chemical properties of CM.
Food nutrition, function, sensory quality and safety have become major concerns for the food industry. As a novel technology in the food industry, low-temperature plasma is commonly used for the sterilization of heat-sensitive materials and is now widely applied. This review provides a detailed study of the latest advancements and applications of plasma technology in the food industry, especially in the sterilization field; influencing factors and recent research progress are outlined and updated. It explores the parameters that influence the efficiency and effectiveness of the sterilization process. Further research trends include optimizing plasma parameters for different food types, investigating the effects on nutritional quality and sensory attributes, understanding microbial inactivation mechanisms, and developing efficient and scalable plasma-based sterilization systems. Additionally, there is growing interest in assessing the overall quality and safety of processed foods and evaluating the environmental sustainability of plasma technology. The present paper highlights recent developments and provides new perspectives for the application of low-temperature plasma in various areas, especially the sterilization field of the food industry. Low-temperature plasma holds great promise for the food industry's sterilization needs. Further research and technological advancements are required to fully harness its potential and ensure safe implementation across food sectors.