The reemergence of Deep Neural Networks (DNNs) has led to high-performance supervised learning algorithms for classification and detection problems in the Electro-Optical (EO) domain. This success is because generating huge labeled datasets has become possible using modern crowdsourcing labeling platforms such as Amazon's Mechanical Turk, which recruit ordinary people to label data. Unlike the EO domain, labeling Synthetic Aperture Radar (SAR) data can be much more challenging, and for various reasons, using crowdsourcing platforms is not feasible for labeling SAR data. As a result, training deep networks using supervised learning is more challenging in the SAR domain. In this paper, we present a new framework to train a deep neural network for classifying SAR images that eliminates the need for a huge labeled dataset. Our idea is based on transferring knowledge from a related EO domain problem, where labeled data are easy to obtain. We transfer knowledge from the EO domain by learning a shared invariant cross-domain embedding space that is also discriminative for classification. To this end, we train two deep encoders that are coupled through their last layer to map data points from the EO and the SAR domains to the shared embedding space such that the distance between the distributions of the two domains is minimized in the latent embedding space. We use the Sliced Wasserstein Distance (SWD) to measure and minimize the distance between these two distributions and use a limited number of labeled SAR data points to match the distributions class-conditionally. As a result of this training procedure, a classifier trained from the embedding space to the label space using mostly the EO data generalizes well on the SAR domain.
We provide a theoretical analysis to demonstrate why our approach is effective and validate our algorithm on the problem of ship classification in the SAR domain by comparing against several other competing learning approaches.
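Since the central tool in this abstract is the Sliced Wasserstein Distance, a minimal NumPy sketch may help: SWD approximates the distance between two empirical distributions by averaging one-dimensional Wasserstein distances over random projection directions, and each 1-D distance reduces to comparing sorted samples. This is an illustrative estimator, not the paper's exact formulation; the equal sample counts, Gaussian-sampled directions, and squared-W2 form are simplifying assumptions.

```python
import numpy as np

def sliced_wasserstein_distance(X, Y, n_projections=100, seed=0):
    """Monte-Carlo estimate of the Sliced Wasserstein Distance between two
    empirical distributions X and Y, each of shape (n_samples, dim).

    Each random unit direction projects both point sets to 1-D, where the
    Wasserstein distance reduces to comparing sorted samples.
    """
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=dim)
        theta /= np.linalg.norm(theta)            # random unit direction
        x_proj = np.sort(X @ theta)               # sorted 1-D projections
        y_proj = np.sort(Y @ theta)
        total += np.mean((x_proj - y_proj) ** 2)  # squared 1-D Wasserstein-2
    return total / n_projections
```

Because each slice only requires sorting, the estimator is cheap and differentiable almost everywhere, which is what makes SWD attractive as a training loss for matching embedding distributions.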
The ultimate visual journey into the beautiful and complex world of wasps. Wasps are far more diverse than the familiar yellowjackets and hornets that harass picnickers and build nests under the eaves of our homes. These amazing, mostly solitary creatures thrive in nearly every habitat on Earth, and their influence on our lives is overwhelmingly beneficial. Wasps are agents of pest control in agriculture and gardens. They are subjects of study in medicine, engineering, and other important fields. Wasps pollinate flowers, engage in symbiotic relationships with other organisms, and create architectural masterpieces in the form of their nests. This richly illustrated book introduces you to some of the most spectacular members of the wasp realm, colorful in both appearance and lifestyle. From minute fairyflies to gargantuan tarantula hawks, wasps exploit almost every niche on the planet. So successful are they at survival that other organisms emulate their appearance and behavior. The sting is the least reason to respect wasps and, as you will see, no reason to loathe them, either. Written by a leading authority on these remarkable insects, Wasps reveals a world of staggering variety and endless fascination.
Packed with more than 150 incredible color photos
Includes a wealth of eye-popping infographics
Provides comprehensive treatments of most wasp families
Describes wasp species from all corners of the world
Covers wasp evolution, ecology, physiology, diversity, and
behavior
Highlights the positive relationships wasps share with humans
and the environment
Zero-shot learning (ZSL) is a framework to classify images that belong to unseen visual classes using semantic descriptions of those classes. We develop a new ZSL algorithm based on coupled dictionary learning. The core idea is to enforce that the visual features and the semantic attributes of an image share the same sparse representation in an intermediate embedding space, modeled as the shared input space of two sparsifying dictionaries. In the ZSL training stage, we use images from a number of seen classes, for which we have access to both the visual and the semantic attributes, to train two coupled dictionaries that can represent both the visual and the semantic feature vectors of an image using a single sparse vector. In the ZSL testing stage, in the absence of labeled data, images from unseen classes are mapped into the attribute space by finding their joint-sparse representations using solely the visual dictionary, via solving a LASSO problem. The image is then classified in the attribute space given the semantic descriptions of the unseen classes. We also provide attribute-aware and transductive formulations to tackle the "domain-shift" and the "hubness" challenges for ZSL, respectively. Experiments on four primary datasets using VGG19 and GoogleNet visual features are provided. Our performances using VGG19 features are 91.0%, 48.4%, and 89.3% on the SUN, the CUB, and the AwA1 datasets, respectively. Our performances on the SUN, the CUB, and the AwA2 datasets are 57.0%, 49.7%, and 71.7%, respectively, when GoogleNet features are used. Comparison with existing methods demonstrates that our method is effective and compares favorably against the state-of-the-art. In particular, our algorithm leads to decent performance on all four datasets. (Early partial results of this paper were presented at AAAI 2018: Kolouri, Rostami, Owechko, & Kim, 2018.)
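The testing-stage computation described above, recovering a joint-sparse code from visual features alone and decoding it into attribute space, can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the ISTA solver, the regularization weight `lam`, and nearest-neighbor classification in attribute space are assumptions.

```python
import numpy as np

def ista_lasso(D, x, lam=0.01, n_iter=500):
    """Solve the LASSO min_s 0.5*||x - D s||^2 + lam*||s||_1 with ISTA
    (proximal gradient); any off-the-shelf LASSO solver would do."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # 1 / Lipschitz constant
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        s -= step * (D.T @ (D @ s - x))           # gradient of the smooth term
        s = np.sign(s) * np.maximum(np.abs(s) - lam * step, 0.0)  # soft-threshold
    return s

def zsl_predict(x_visual, D_vis, D_att, class_attributes, lam=0.01):
    """Map a test image into attribute space through the shared sparse code,
    then classify by the nearest unseen-class attribute vector."""
    s = ista_lasso(D_vis, x_visual, lam)   # joint-sparse code from visual dict only
    a_hat = D_att @ s                      # decode into semantic attributes
    dists = np.linalg.norm(class_attributes - a_hat, axis=1)
    return int(np.argmin(dists))
```

The key property exploited here is that the two dictionaries were trained to share one sparse code, so a code fit on visual features alone still decodes into a meaningful attribute vector.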
Different subfields of AI (such as vision, learning, reasoning, and planning) are often studied in isolation, both in individual courses and in the research literature. This promulgates the idea that these different AI capabilities can easily be integrated later, whereas, in practice, developing integrated AI systems remains an open challenge for both research and industry. Interdisciplinary project-driven courses can fill this gap in AI education, providing challenging problems that require the integration of multiple AI methods. This article explores teaching integrated AI through two project-driven courses: a capstone-style graduate course in advanced robotics, and an undergraduate course on computational sustainability and assistive computing. In addition to studying the integration of AI techniques, these courses provide students with practical application experience and exposure to social issues of AI and computing. My hope is that other instructors find these courses useful examples for constructing their own project-driven courses to teach integrated AI.
Knowledge transfer between tasks can improve the performance of learned models, but requires an accurate estimate of inter-task relationships to identify the relevant knowledge to transfer. These inter-task relationships are typically estimated based on training data for each task, which is inefficient in lifelong learning settings where the goal is to learn each consecutive task rapidly from as little data as possible. To reduce this burden, we develop a lifelong learning method based on coupled dictionary learning that utilizes high-level task descriptions to model inter-task relationships. We show that using task descriptors improves the performance of the learned task policies, providing both theoretical justification for the benefit and empirical demonstration of the improvement across a variety of learning problems. Given only the descriptor for a new task, the lifelong learner is also able to accurately predict a model for the new task through zero-shot learning using the coupled dictionary, eliminating the need to gather training data before addressing the task.
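The zero-shot prediction step described above can be sketched as follows: given only a new task's descriptor, find its sparse code over the descriptor dictionary, then decode that code through the coupled policy dictionary. The names (`D_desc`, `L_policy`) and the ISTA-style LASSO solver are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sparse_code(D, phi, lam=0.01, n_iter=500):
    """ISTA sketch of the LASSO: sparse code of descriptor phi over dictionary D."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        s -= step * (D.T @ (D @ s - phi))                         # gradient step
        s = np.sign(s) * np.maximum(np.abs(s) - lam * step, 0.0)  # soft-threshold
    return s

def zero_shot_policy(phi_new, D_desc, L_policy, lam=0.01):
    """Predict a model for a brand-new task from its descriptor alone: the
    sparse code over the descriptor dictionary D_desc indexes the same latent
    basis as the coupled policy dictionary L_policy."""
    s = sparse_code(D_desc, phi_new, lam)
    return L_policy @ s        # predicted task model, no training data needed
```

Because both dictionaries were trained to share one latent code per task, no task-specific training data is needed at prediction time; only the descriptor.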
A method of fast multi-objective optimization and decision-making support for building retrofit planning is developed, and a lifecycle cost analysis method that accounts for future climate conditions is used to evaluate retrofit performance. To solve the optimization problem quickly with a non-dominated sorting differential evolution (NSDE) algorithm, the simplified hourly dynamic simulation tool SimBldPy is used as the simulator for objective function evaluation. Moreover, the generated non-dominated solutions are organized and rendered in a layered scheme using an agglomerative hierarchical clustering technique, making them more intuitive to interpret and better presented during the decision-making process.
The proposed optimization method is applied to the retrofit planning of a campus building at UPenn with various energy conservation measures (ECMs) and costs; more than one thousand Pareto-optimal solutions are obtained and analyzed according to the proposed decision-making framework. Twenty ECM combinations are eventually selected from all generated Pareto fronts. The results show that the developed decision-making support scheme is robust in dealing with the retrofit optimization problem and is able to support brainstorming and the enumeration of various possibilities during the decision-making process.
•Lifecycle cost analysis considering future climate change impacts on building retrofits.
•Parallelized NSDE with the best-performing crossover operator for fast optimization.
•A purpose-built energy simulator, SimBldPy, to accelerate the optimization.
•A decision-making support scheme based on hierarchical clustering.
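The layering of non-dominated solutions via agglomerative hierarchical clustering might look like the toy sketch below. Average linkage and Euclidean distance are assumptions; the abstract does not specify the authors' linkage criterion or feature scaling.

```python
import numpy as np

def agglomerative_cluster(points, n_clusters):
    """Minimal average-linkage agglomerative clustering (illustrative sketch,
    not the authors' implementation): repeatedly merge the two closest
    clusters until n_clusters groups of Pareto solutions remain."""
    points = np.asarray(points, dtype=float)
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # average pairwise distance between the two clusters
                d = np.mean([np.linalg.norm(points[i] - points[j])
                             for i in clusters[a] for j in clusters[b]])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] = clusters[a] + clusters[b]   # merge the closest pair
        del clusters[b]
    return clusters
```

Grouping hundreds of Pareto solutions into a handful of representative clusters is what makes the layered presentation tractable for a decision maker.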
Bird's Eye View (BEV) is a popular representation for processing 3D point clouds, and by its nature is fundamentally sparse. Motivated by the computational limitations of mobile robot platforms, we create a fast, high-performance BEV 3D object detector that maintains and exploits this input sparsity to decrease runtimes over non-sparse baselines and avoids the tradeoff between pseudoimage area and runtime. We present results on KITTI, a canonical 3D detection dataset, and Matterport-Chair, a novel Matterport3D-derived chair detection dataset from scenes in real furnished homes. We evaluate runtime characteristics using a desktop GPU, an embedded ML accelerator, and a robot CPU, demonstrating that our method results in significant detection speedups (2× or more) for embedded systems with only a modest decrease in detection quality. Our work represents a new approach for practitioners to optimize models for embedded systems by maintaining and exploiting input sparsity throughout their entire pipeline to reduce runtime and resource usage while preserving detection performance. All models, weights, experimental configurations, and datasets used are publicly available at https://vedder.io/sparse_point_pillars.
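The premise that BEV pseudoimages are fundamentally sparse can be illustrated by binning a point cloud into a BEV grid and measuring occupancy. The grid extents and cell size below are common KITTI-style values chosen for illustration, not necessarily the paper's configuration.

```python
import numpy as np

def bev_occupancy(points_xy, x_range=(0.0, 70.0), y_range=(-40.0, 40.0),
                  cell=0.16):
    """Bin point-cloud (x, y) coordinates into a BEV grid and return the
    fraction of occupied cells; a dense detector pays for every cell,
    while a sparse one pays only for the occupied fraction."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    ix = ((points_xy[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points_xy[:, 1] - y_range[0]) / cell).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)  # drop out-of-range points
    grid = np.zeros((nx, ny), dtype=bool)
    grid[ix[keep], iy[keep]] = True
    return grid.sum() / grid.size
```

Even a 100k-point LiDAR sweep typically touches only a few percent of such a grid, which is the headroom a sparsity-preserving pipeline exploits.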
Despite the impressive results of recent artificial intelligence applications to general ophthalmology, comparatively less progress has been made toward solving problems in pediatric ophthalmology using similar techniques. This article discusses the unique needs of pediatric patients and how artificial intelligence techniques can address these challenges, surveys recent applications to pediatric ophthalmology, and discusses future directions.
The most significant advances involve the automated detection of retinopathy of prematurity, yielding results that rival experts. Machine learning has also been applied to the classification of pediatric cataracts, prediction of postoperative complications following cataract surgery, detection of strabismus and refractive error, prediction of future high myopia, and diagnosis of reading disability. In addition, machine learning techniques have been used for the study of visual development, vessel segmentation in pediatric fundus images, and ophthalmic image synthesis.
Artificial intelligence applications could significantly benefit clinical care by optimizing disease detection and grading, broadening access to care, furthering scientific discovery, and improving clinical efficiency. These methods need to match or surpass physician performance in clinical trials before deployment with patients. Owing to the widespread use of closed-access data sets and software implementations, it is difficult to directly compare the performance of these approaches, and reproducibility is poor. Open-access data sets and software could alleviate these issues and encourage further applications to pediatric ophthalmology.