Embryo selection within in vitro fertilization (IVF) is the process of evaluating qualities of fertilized oocytes (embryos) and selecting the best embryo(s) available within a patient cohort for subsequent transfer or cryopreservation. In recent years, artificial intelligence (AI) has been used extensively to improve and automate the embryo ranking and selection procedure by extracting relevant information from embryo microscopy images. The AI models are evaluated based on their ability to identify the embryo(s) with the highest chance(s) of achieving a successful pregnancy. Whether such evaluations should be based on ranking performance or pregnancy prediction, however, seems to divide studies. As such, a variety of performance metrics are reported, and comparisons between studies are often made on different outcomes and data foundations. Moreover, superiority of AI methods over manual human evaluation is often claimed based on retrospective data, without any mentions of potential bias. In this paper, we provide a technical view on some of the major topics that divide how current AI models are trained, evaluated and compared. We explain and discuss the most common evaluation metrics and relate them to the two separate evaluation objectives, ranking and prediction. We also discuss when and how to compare AI models across studies and explain in detail how a selection bias is inevitable when comparing AI models against current embryo selection practice in retrospective cohort studies.
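The distinction between the two evaluation objectives can be made concrete with a minimal sketch. All scores and outcomes below are hypothetical, not from the paper: the same model outputs can be scored on pooled pregnancy prediction (AUC over all embryos) or on within-cohort ranking (did the top-scored embryo in each patient cohort achieve pregnancy), and the two metrics answer different questions.

```python
# Toy illustration of prediction vs. ranking evaluation (hypothetical data).

def auc(scores, labels):
    """Area under the ROC curve via pairwise comparisons (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def top1_hit_rate(cohorts):
    """Fraction of cohorts where the top-scored embryo achieved pregnancy."""
    hits = 0
    for scores, labels in cohorts:
        best = max(range(len(scores)), key=lambda i: scores[i])
        hits += labels[best]
    return hits / len(cohorts)

# Hypothetical cohorts: (model scores, pregnancy outcomes) per patient.
cohorts = [
    ([0.9, 0.4, 0.2], [1, 0, 0]),
    ([0.6, 0.5], [0, 1]),
    ([0.8, 0.7, 0.3], [1, 0, 0]),
]
all_scores = [s for c in cohorts for s in c[0]]
all_labels = [y for c in cohorts for y in c[1]]
print(round(auc(all_scores, all_labels), 3))    # prediction view
print(round(top1_hit_rate(cohorts), 3))         # ranking view
```

Here the pooled AUC is high even though one cohort's top-ranked embryo failed, which is exactly the kind of divergence between the two objectives the paper discusses.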
Information on which weed species are present within agricultural fields is important for site-specific weed management. This paper presents a method that is capable of recognising plant species in colour images by using a convolutional neural network. The network is built from scratch, trained, and tested on a total of 10,413 images containing 22 weed and crop species at early growth stages. These images originate from six different data sets, which vary with respect to lighting, resolution, and soil type. They include images taken under controlled conditions with regard to camera stabilisation and illumination, and images shot with hand-held mobile phones in fields with changing lighting conditions and different soil types. For these 22 species, the network achieves a classification accuracy of 86.2%.
•A convolutional neural network is designed to determine the species of seedlings.
•The system is trained and tested on images of 22 plant species.
•The images are taken under a variety of different lighting and soil conditions.
•In total, 86.2% of the plants were classified correctly.
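Because the images come from six data sets with different lighting, resolution, and soil types, an overall accuracy figure can usefully be broken down per source. A toy sketch with made-up predictions and data-set names (none of this is the paper's code or data):

```python
# Overall and per-source classification accuracy (hypothetical triples).

def accuracy(triples):
    return sum(p == t for p, t, *_ in triples) / len(triples)

# (predicted species, true species, source data set), all made up.
results = [("chamomile", "chamomile", "d1"), ("maize", "chamomile", "d1"),
           ("maize", "maize", "d2"), ("thistle", "thistle", "d2")]
overall = accuracy(results)
per_source = {d: accuracy([r for r in results if r[2] == d])
              for d in {r[2] for r in results}}
print(overall, per_source)
```

A per-source breakdown like this reveals whether a single reported accuracy hides large differences between controlled and hand-held field images.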
Classifying the state of the atmosphere into a finite number of large-scale circulation regimes is a popular way of investigating teleconnections, the predictability of severe weather events, and climate change. Here, we investigate a supervised machine learning approach based on deformable convolutional neural networks (deCNNs) and transfer learning to forecast the North Atlantic-European weather regimes during extended boreal winter for 1-15 days into the future. We apply state-of-the-art interpretation techniques from the machine learning literature to attribute particular regions of interest or potential teleconnections relevant for any given weather cluster prediction or regime transition. We demonstrate superior forecasting performance relative to several classical meteorological benchmarks, as well as logistic regression and random forests. Due to its wider field of view, we also observe deCNN achieving considerably better performance than regular convolutional neural networks at lead times beyond 5-6 days. Finally, we find transfer learning to be of paramount importance, similar to previous data-driven atmospheric forecasting studies.
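Persistence and climatology are two classical meteorological benchmarks commonly used for categorical regime forecasts (the paper does not list its exact benchmarks, so treating these two as representative is our assumption). A minimal sketch with invented regime names:

```python
# Two classical baselines for categorical regime forecasts (toy data).
from collections import Counter

def persistence_forecast(regime_at_init):
    # Persistence: predict the regime observed at initialization time.
    return regime_at_init

def climatology_forecast(training_regimes):
    # Climatology: predict the most frequent regime in the training record.
    return Counter(training_regimes).most_common(1)[0][0]

def accuracy(preds, truth):
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

# Hypothetical regime sequences (names are illustrative only).
train = ["NAO+", "NAO+", "NAO-", "blocking", "NAO+", "ridge"]
verif = ["NAO+", "blocking", "NAO+"]
init  = ["NAO+", "NAO+", "NAO-"]

pers = [persistence_forecast(r) for r in init]
clim = [climatology_forecast(train)] * len(verif)
print(accuracy(pers, verif), accuracy(clim, verif))
```

Any learned model, such as the deCNN, has to beat both of these trivial baselines to demonstrate genuine skill, which is why they anchor forecast evaluations.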
As pollinators, insects play a crucial role in ecosystem management and world food production. However, insect populations are declining, necessitating efficient insect monitoring methods. Existing methods analyze video or time-lapse images of insects in nature, but analysis is challenging as insects are small objects in complex and dynamic natural vegetation scenes. In this work, we provide a dataset of primarily honeybees visiting three different plant species during two months of the summer. The dataset consists of 107,387 annotated time-lapse images from multiple cameras, including 9423 annotated insects. We present a two-step method for detecting insects in time-lapse RGB images. Firstly, the images are preprocessed with a motion-informed enhancement technique that uses motion and color cues to make insects stand out. Secondly, the enhanced images are fed into a convolutional neural network (CNN) object detector. The method improves on the deep learning object detectors You Only Look Once (YOLO) and faster region-based CNN (Faster R-CNN). Using motion-informed enhancement, the YOLO detector improves its average micro F1-score from 0.49 to 0.71, and the Faster R-CNN detector improves its average micro F1-score from 0.32 to 0.56. Our dataset and proposed method provide a step forward for automating the time-lapse camera monitoring of flying insects.
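The core idea of motion-informed enhancement can be approximated with a very small sketch: use the change between consecutive time-lapse frames as a motion cue and amplify the changed pixels before detection. This is our simplified illustration on toy grayscale "images" (nested lists); the paper's actual technique also uses color cues and operates on RGB frames.

```python
# Simplified motion-informed enhancement: boost pixels that moved between
# consecutive time-lapse frames, pass static background through unchanged.

def enhance(prev, curr, gain=2.0, thresh=10):
    h, w = len(curr), len(curr[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            motion = abs(curr[y][x] - prev[y][x])
            # Amplify moving pixels (likely insects), clip to 8-bit range.
            v = curr[y][x] * gain if motion > thresh else curr[y][x]
            out[y][x] = min(255, int(v))
    return out

prev = [[50, 50, 50], [50, 50, 50]]
curr = [[50, 50, 50], [50, 120, 50]]  # an insect-sized bright blob appears
print(enhance(prev, curr))
```

The enhanced frame makes the small moving object stand out against the static vegetation, which is what lets the downstream CNN detector find it more reliably.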
Blastocyst morphology is a predictive marker for implantation success of in vitro fertilized human embryos. Morphology grading is therefore commonly used to select the embryo with the highest implantation potential. One of the challenges, however, is that morphology grading can be highly subjective when performed manually by embryologists. Grading systems generally discretize a continuous scale from low to high score, resulting in fuzzy, unclear boundaries between grading categories. Manual annotations therefore suffer from large inter- and intra-observer variance.
In this paper, we propose a method based on deep learning to automatically grade the morphological appearance of human blastocysts from time-lapse imaging. A convolutional neural network is trained to jointly predict inner cell mass (ICM) and trophectoderm (TE) grades from a single image frame, and a recurrent neural network is applied on top to incorporate temporal information of the expanding blastocysts from multiple frames.
Results showed that the method achieved above human-level accuracies when evaluated on majority votes from an independent test set labeled by multiple embryologists. Furthermore, when evaluating implantation rates for embryos grouped by morphology grades, human embryologists and our method had a similar correlation between predicted embryo quality and pregnancy outcome.
The proposed method has shown improved performance of predicting ICM and TE grades on human blastocysts when utilizing temporal information available with time-lapse imaging. The algorithm is considered at least on par with human embryologists on quality estimation, as it performed better than the average human embryologist at ICM and TE prediction and provided a slightly better correlation between predicted embryo quality and implantability than human embryologists.
•New approach to automating blastocyst morphology grading with time-lapse imaging.
•Better accuracy of automated morphology grading compared to human embryologists.
•New loss function outperforms common nominal and ordinal loss functions.
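The reference labels for the independent test set above are majority votes over multiple embryologists' grades. A minimal sketch of that labeling step (the tie-breaking rule is our assumption; here ties go to the alphabetically earlier, i.e. better, grade):

```python
# Majority-vote reference label from multiple annotators' grades.
from collections import Counter

def majority_vote(grades):
    counts = Counter(grades)
    top = max(counts.values())
    winners = [g for g, c in counts.items() if c == top]
    return min(winners)  # tie-break toward the better grade ('A' < 'B' < 'C')

print(majority_vote(["A", "B", "A"]))  # clear majority
print(majority_vote(["A", "B"]))       # tie -> 'A' under our assumed rule
```

Evaluating a model against majority votes rather than a single annotator reduces the influence of the inter-observer variance that motivates the automation in the first place.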
Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained to detect and classify a predefined set of object types. These algorithms have difficulties in detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles, such as people and animals, occur rarely and are distinct in appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogeneous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded and unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors, including "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45-90 m) than RCNN, while RCNN has similar performance at short range (0-30 m). However, DeepAnomaly has far fewer model parameters and a 7.28-times faster processing time per image (182 ms vs. 25 ms). Unlike most CNN-based methods, the high accuracy, low computation time and low memory footprint make it suitable for a real-time system running on an embedded GPU (graphics processing unit).
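The anomaly-detection half of such a system can be sketched independently of the CNN: fit simple statistics to feature vectors from obstacle-free field data, then flag inputs that deviate strongly. This is our own illustration of the general idea, not DeepAnomaly's actual model; the CNN feature extractor is omitted and replaced by made-up 2-D feature vectors.

```python
# Background-statistics anomaly scoring over feature vectors (illustrative).
import math

def fit_background(features):
    """Per-dimension mean and std from obstacle-free 'background' features."""
    n, d = len(features), len(features[0])
    mean = [sum(f[i] for f in features) / n for i in range(d)]
    std = [max(1e-6, math.sqrt(sum((f[i] - mean[i]) ** 2 for f in features) / n))
           for i in range(d)]
    return mean, std

def anomaly_score(f, mean, std):
    # Mean squared z-score across feature dimensions.
    return sum(((f[i] - mean[i]) / std[i]) ** 2 for i in range(len(f))) / len(f)

background = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1]]  # homogeneous field data
mean, std = fit_background(background)
print(anomaly_score([1.0, 2.0], mean, std) < 4.0)   # field-like: not flagged
print(anomaly_score([5.0, -3.0], mean, std) > 4.0)  # obstacle-like: flagged
```

Because the field is visually homogeneous, even such simple background statistics separate rare, distinct obstacles well, which is the property DeepAnomaly exploits with learned features.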
Forecasting the formation and development of clouds is a central element of modern weather forecasting systems. Incorrect cloud forecasts can lead to major uncertainty in the overall accuracy of weather forecasts due to clouds' intrinsic role in the Earth's climate system. Few studies have tackled this challenging problem from a machine learning point of view, due to a shortage of high-resolution datasets with many historical observations globally. In this article, we present a novel satellite-based dataset called "CloudCast." It consists of 70,080 images with 10 different cloud types for multiple layers of the atmosphere, annotated on a pixel level. The spatial resolution of the dataset is 928 × 1530 pixels (3 × 3 km per pixel), with 15-min intervals between frames for the period January 1, 2017 to December 31, 2018. All frames are centered and projected over Europe. To supplement the dataset, we conduct an evaluation study with current state-of-the-art video prediction methods such as convolutional long short-term memory networks, generative adversarial networks, and optical flow-based extrapolation methods. As the evaluation of video prediction is difficult in practice, we aim for a thorough evaluation in both the spatial and temporal domains. Our benchmark models show promising results but with ample room for improvement. To the best of the authors' knowledge, this is the first publicly available global-scale dataset with high-resolution cloud types at high temporal granularity.
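When predicted frames carry pixel-level class labels, plain accuracy can be misleading because the 10 cloud types are typically imbalanced. A per-class recall breakdown is one simple remedy; the sketch below is our illustration on a flattened toy label map, not the paper's benchmark code.

```python
# Per-class recall for pixel-level cloud-type predictions (toy data).

def per_class_recall(pred, truth, classes):
    recall = {}
    for c in classes:
        idx = [i for i, t in enumerate(truth) if t == c]
        if idx:  # skip classes absent from the ground truth
            recall[c] = sum(pred[i] == c for i in idx) / len(idx)
    return recall

truth = [0, 0, 0, 1, 1, 2]  # flattened toy ground-truth label map
pred  = [0, 0, 1, 1, 1, 0]  # flattened toy predicted label map
print(per_class_recall(pred, truth, classes=range(3)))
```

Computed per lead time, such class-wise scores give the temporal-and-spatial view of forecast quality that a single aggregate number cannot.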
Changes in regulations for livestock animals will in the near future call for loose-house pig breeding systems. These new systems will increase the workload for the farmers, as locating and identifying animals will require more time than before. This paper presents a real-time computer vision system for tracking pigs in loose-housed stables. The system eases the farmers' workload in identifying and locating individual animals. The system consists of a camera and a PC. The PC runs a tracking algorithm that estimates the positions and identities of the pigs. The tracking algorithm operates in two steps. The first step builds up support maps pointing to preliminary pig segments in each video frame. In the second step, the support map segments are used to build a 5D Gaussian model of each individual pig (i.e. position and shape). The system includes software correction for the fisheye distortion introduced by the camera lens; the fisheye lens allows the camera to monitor a much larger area of the stable. The algorithms are developed in MATLAB, implemented in C, and run in real time. Experiments in the lab and in the stable demonstrate the robustness of the system. In a realistic experiment, the system tracked at least 3 pigs over a longer time span (more than 8 min) without losing track of the identity of the individual pigs.
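A 5D Gaussian model of position and shape amounts to five parameters per pig: the mean x and y coordinates plus the three entries of the 2×2 covariance matrix. A minimal sketch of fitting those parameters to a segment's pixel coordinates (segment extraction from the support maps is omitted, and the exact parameterization in the paper may differ):

```python
# Fit a 5-parameter (position + shape) Gaussian to a pixel segment.

def fit_gaussian(pixels):
    n = len(pixels)
    mx = sum(x for x, _ in pixels) / n          # mean x (position)
    my = sum(y for _, y in pixels) / n          # mean y (position)
    sxx = sum((x - mx) ** 2 for x, _ in pixels) / n   # variance along x
    syy = sum((y - my) ** 2 for _, y in pixels) / n   # variance along y
    sxy = sum((x - mx) * (y - my) for x, y in pixels) / n  # covariance
    return mx, my, sxx, syy, sxy

# Elongated blob: variance along x exceeds variance along y,
# capturing the orientation/shape of a pig seen from above.
pixels = [(0, 0), (2, 0), (4, 0), (2, 1), (2, -1)]
print(fit_gaussian(pixels))
```

Tracking then reduces to matching each frame's fitted Gaussians to the previous frame's models, since both position and elongated shape carry identity information.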
In agricultural mowing operations, thousands of animals are injured or killed each year due to the increased working widths and speeds of agricultural machinery. Detection and recognition of wildlife within agricultural fields is important to reduce wildlife mortality and, thereby, promote wildlife-friendly farming. The work presented in this paper contributes to the automated detection and classification of animals in thermal imaging. The methods and results are based on top-view images taken manually from a lift to motivate work towards unmanned aerial vehicle-based detection and recognition. Hot objects are detected based on a threshold dynamically adjusted to each frame. For the classification of animals, we propose a novel thermal feature extraction algorithm. For each detected object, a thermal signature is calculated using morphological operations. The thermal signature describes the heat characteristics of objects and is partly invariant to translation, rotation, scale and posture. The discrete cosine transform (DCT) is used to parameterize the thermal signature and, thereby, calculate a feature vector, which is used for subsequent classification. Using a k-nearest-neighbor (kNN) classifier, animals are discriminated from non-animals with a balanced classification accuracy of 84.7% in an altitude range of 3-10 m and an accuracy of 75.2% in an altitude range of 10-20 m. To incorporate temporal information in the classification, a tracking algorithm is proposed. Using temporal information improves the balanced classification accuracy to 93.3% in the 3-10 m altitude range and 77.7% in the 10-20 m range.
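The DCT-plus-kNN pipeline can be sketched compactly: a 1-D DCT-II compresses the thermal signature into a few coefficients, and those coefficients feed a k-nearest-neighbor classifier. The signatures below are invented and the morphological extraction step is omitted; this is our illustration of the pipeline, not the paper's implementation.

```python
# DCT-II feature vector from a thermal signature + kNN classification.
import math

def dct2(signal, k):
    """First k coefficients of the (unnormalized) 1-D DCT-II."""
    n = len(signal)
    return [sum(s * math.cos(math.pi * (i + 0.5) * u / n)
                for i, s in enumerate(signal)) for u in range(k)]

def knn_predict(train, query, k=3):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda fl: dist(fl[0], query))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)

# Hypothetical signatures: animals warm and peaked, clutter flat and cool.
train = [(dct2([30, 34, 38, 34, 30], 3), "animal"),
         (dct2([29, 33, 37, 33, 29], 3), "animal"),
         (dct2([20, 20, 21, 20, 20], 3), "non-animal"),
         (dct2([21, 21, 21, 21, 21], 3), "non-animal")]
print(knn_predict(train, dct2([31, 35, 39, 35, 31], 3)))
```

Truncating to the leading DCT coefficients keeps the coarse heat profile while discarding fine detail, which is what gives the feature its partial invariance to posture and scale.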
In this paper, we present a multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 h of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360° camera, LiDAR and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles and vegetation. All obstacles have ground truth object labels and geographic coordinates.