Full text
Available for:
FZAB, GIS, IJS, KILJ, NLZOH, NUK, OILJ, SBCE, SBMB, UL, UM, UPUK
2.
How does a lizard shed its tail? Ghatak, Animangsu
Science (American Association for the Advancement of Science),
02/2022, Volume: 375, Issue: 6582
Journal Article
Peer reviewed
Hierarchical microstructures help a lizard self-amputate its tail when needed.
Biomedicine, the predominant medical model that emerged during the twentieth century, is founded conceptually on mechanism and reductionism, especially in terms of portraying the patient as a machine reducible to its component parts. Systems medicine, in contrast, has emerged during the early part of the twenty-first century to address problems arising from biomedicine’s failure to cure diseases such as cancer. In this paper, a conceptual framework is provided for shifting from mechanistic biomedicine to organismal systems medicine. Specifically, organicism and holism provide the necessary foundation for viewing the patient not simply as a diseased or dysfunctional body part but as a whole person embedded within a biological, psychological, social, and environmental framework. Although biomedicine’s approach has identified many of the physiological and pathological components of health and disease, a shift to organismal systems medicine promises to deliver the principles and rules by which these components relate and interact with one another in a holistic rather than simply in a reductive mechanistic fashion.
Pedestrian re-identification is a difficult problem due to the large variations in a person's appearance caused by different poses and viewpoints, illumination changes, and occlusions. Spatial alignment is commonly used to address these issues by treating the appearance of different body parts independently. However, a body part can also appear differently during different phases of an action. In this paper we consider the temporal alignment problem, in addition to the spatial one, and propose a new approach that takes the video of a walking person as input and builds a spatio-temporal appearance representation for pedestrian re-identification. Particularly, given a video sequence we exploit the periodicity exhibited by a walking person to generate a spatio-temporal body-action model, which consists of a series of body-action units corresponding to certain action primitives of certain body parts. Fisher vectors are learned and extracted from individual body-action units and concatenated into the final representation of the walking person. Unlike previous spatio-temporal features that only take into account local dynamic appearance information, our representation aligns the spatio-temporal appearance of a pedestrian globally. Extensive experiments on public datasets show the effectiveness of our approach compared with the state of the art.
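The encoding step described in this abstract (one Fisher vector per body-action unit, concatenated into the final person descriptor) can be sketched roughly as follows. This is a simplified illustration, not the authors' implementation: it keeps only the mean-gradient terms of the Fisher vector, assumes a diagonal-covariance GMM whose parameters are given, and uses random arrays in place of real per-unit descriptors.

```python
import numpy as np

def fisher_vector_means(descriptors, weights, means, sigmas):
    """Simplified Fisher vector: gradients w.r.t. GMM means only
    (diagonal covariances), encoding the local descriptors of one
    body-action unit. Constant terms cancel in the responsibilities."""
    diff = descriptors[:, None, :] - means[None, :, :]          # (N, K, D)
    log_p = -0.5 * np.sum((diff / sigmas) ** 2, axis=2)         # (N, K)
    log_p += np.log(weights) - np.sum(np.log(sigmas), axis=1)
    log_p -= log_p.max(axis=1, keepdims=True)
    gamma = np.exp(log_p)
    gamma /= gamma.sum(axis=1, keepdims=True)                   # responsibilities
    n = descriptors.shape[0]
    fv = (gamma[:, :, None] * diff / sigmas).sum(axis=0)        # (K, D)
    fv /= n * np.sqrt(weights)[:, None]
    return fv.ravel()

# One Fisher vector per body-action unit, concatenated into the
# final representation of the walking person (toy data throughout).
rng = np.random.default_rng(0)
units = [rng.normal(size=(50, 8)) for _ in range(4)]            # 4 units, D=8
weights = np.full(3, 1 / 3)                                     # K=3 GMM
means = rng.normal(size=(3, 8))
sigmas = np.ones((3, 8))
person_repr = np.concatenate(
    [fisher_vector_means(u, weights, means, sigmas) for u in units])
print(person_repr.shape)                                        # (96,) = 4*3*8
```

A full Fisher vector would also include weight- and variance-gradient terms and typically applies power and L2 normalization before matching.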
Recognizing actions from still images has been widely studied in recent years. In this paper, we model an action class as a flexible number of spatial configurations of body parts by proposing a new spatial sum-product network (SPN). First, we discover a set of parts in image collections via unsupervised learning. Then, our new spatial SPN is applied to model the spatial relationship and also the high-order correlations of parts. To learn robust networks, we further develop a hierarchical spatial SPN method, which models pairwise spatial relationships between parts inside subimages and models the correlation of subimages via extra layers of SPN. Our method is shown to be effective on two benchmark data sets.
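The core SPN idea the abstract relies on can be illustrated with a minimal two-layer network: product nodes combine per-part scores within a part configuration, and the root sum node mixes configurations with learned weights. This is a toy sketch, not the paper's hierarchical model; the part names, configurations, and weights are invented for illustration.

```python
import numpy as np

def spn_root(part_log_scores, config_parts, config_weights):
    """Evaluate a tiny SPN in log space.
    part_log_scores: dict part name -> log score for this image
    config_parts:    list of part tuples, one product node each
    config_weights:  mixture weights of the root sum node
    """
    # product node = sum of the log scores of its child parts
    prod = np.array([sum(part_log_scores[p] for p in parts)
                     for parts in config_parts])
    # sum node = log of the weighted sum (log-sum-exp for stability)
    m = prod.max()
    return m + np.log(np.sum(config_weights * np.exp(prod - m)))

# Hypothetical action class with two spatial configurations of parts.
scores = {"head": -0.1, "torso": -0.3, "arm": -0.7, "leg": -0.5}
configs = [("head", "torso", "arm"), ("torso", "arm", "leg")]
weights = np.array([0.6, 0.4])
print(round(spn_root(scores, configs, weights), 3))
```

Classification then compares the root value across the SPNs of different action classes; the paper's hierarchical variant adds extra sum/product layers over subimages.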
Tissue clearing techniques enable visualization of opaque organs and tissues in 3 dimensions (3-D) by turning tissue transparent. Current tissue clearing methods are restricted to the limited types of tissues that can be cleared with each individual protocol, which inevitably leads to blind spots within whole-body or body-part imaging. Hard tissues, including bones and teeth, are still the most difficult organs to clear. In addition, loss of endogenous fluorescence remains a major concern for solvent-based clearing methods. Here, we developed a polyethylene glycol (PEG)-associated solvent system (PEGASOS), which rendered nearly all types of tissues transparent and preserved endogenous fluorescence. Bones and teeth could be turned nearly invisible after clearing. The PEGASOS method turned the whole adult mouse body transparent, and we were able to image an adult mouse head composed of bones, teeth, brain, muscles, and other tissues with no blind areas. Hard tissue transparency enabled us to reconstruct an intact mandible, teeth, femur, or knee joint in 3-D. In addition, we managed to image an intact mouse brain at sub-cellular resolution and to trace individual neurons and axons over long distances. We also visualized dorsal root ganglions directly through the vertebrae. Finally, we revealed the 3-D distribution pattern of the neural network within the marrow space of long bone. These results suggest that the PEGASOS method is a useful tool for general biomedical research.
Full text
Available for:
EMUNI, FIS, FZAB, GEOZS, GIS, IJS, IMTLJ, KILJ, KISLJ, MFDPS, NLZOH, NUK, OILJ, PNG, SAZU, SBCE, SBJE, SBMB, SBNM, UKNU, UL, UM, UPUK, VKSCE, ZAGLJ
Quantifying behavior is crucial for many applications in neuroscience. Videography provides easy methods for the observation and recording of animal behavior in diverse settings, yet extracting particular aspects of a behavior for further analysis can be highly time consuming. In motor control studies, humans or other animals are often marked with reflective markers to assist with computer-based tracking, but markers are intrusive, and the number and location of the markers must be determined a priori. Here we present an efficient method for markerless pose estimation based on transfer learning with deep neural networks that achieves excellent results with minimal training data. We demonstrate the versatility of this framework by tracking various body parts in multiple species across a broad collection of behaviors. Remarkably, even when only a small number of frames are labeled (~200), the algorithm achieves excellent tracking performance on test frames that is comparable to human accuracy.
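A standard read-out step in markerless pose-estimation pipelines of this kind is converting the network's per-part score maps into keypoint coordinates. The sketch below shows that post-processing step only (not the transfer-learning training); the array shapes and part labels are illustrative assumptions.

```python
import numpy as np

def extract_keypoints(scoremaps):
    """Read out one (x, y, confidence) triple per body part from
    per-pixel score maps via the argmax of each map.
    scoremaps: array of shape (n_parts, H, W)."""
    n_parts, h, w = scoremaps.shape
    flat = scoremaps.reshape(n_parts, -1)
    idx = flat.argmax(axis=1)                     # flat peak index per part
    ys, xs = np.unravel_index(idx, (h, w))        # back to 2-D coordinates
    conf = flat[np.arange(n_parts), idx]          # peak score = confidence
    return np.stack([xs, ys, conf], axis=1)       # (n_parts, 3)

# Toy maps for two hypothetical body parts on a 4x5 frame.
maps = np.zeros((2, 4, 5))
maps[0, 1, 3] = 0.9   # part 0 peaks at x=3, y=1
maps[1, 2, 0] = 0.8   # part 1 peaks at x=0, y=2
print(extract_keypoints(maps))
```

Real pipelines typically refine the argmax with a sub-pixel offset and threshold the confidence to reject occluded parts.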
Learning discriminative representations for unseen person images is critical for person re-identification (ReID). Most current approaches learn deep representations in classification tasks, which essentially minimize the empirical classification risk on the training set. As shown in our experiments, such representations easily become over-fitted to a single discriminative human body part on the training set. To gain discriminative power on unseen person images, we propose a deep representation learning procedure named the part loss network, which minimizes both the empirical classification risk on training person images and the representation learning risk on unseen person images. The representation learning risk is evaluated by the proposed part loss, which automatically detects human body parts and computes the person classification loss on each part separately. Compared with the traditional global classification loss, simultaneously considering the part loss enforces the deep network to learn representations for different body parts and gain discriminative power on unseen persons. Experimental results on three person ReID datasets, i.e., Market1501, CUHK03, and VIPeR, show that our representation outperforms existing deep representations.
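The objective described here (global classification loss plus a per-part classification loss computed separately on each body part) can be sketched as below. This is a toy numpy version under stated assumptions: the part detection is taken as given, the function names and the weighting factor `lam` are invented, and logits stand in for real network outputs.

```python
import numpy as np

def cross_entropy(logits, label):
    # numerically stable softmax cross-entropy for one sample
    z = logits - logits.max()
    return -(z[label] - np.log(np.exp(z).sum()))

def part_loss_objective(global_logits, part_logits_list, label, lam=1.0):
    """Toy objective: global classification loss plus the average of
    per-part classification losses, so the network cannot rely on a
    single discriminative body part. `lam` balances the two terms."""
    l_global = cross_entropy(global_logits, label)
    l_parts = np.mean([cross_entropy(p, label) for p in part_logits_list])
    return l_global + lam * l_parts

rng = np.random.default_rng(1)
g = rng.normal(size=5)                            # logits over 5 identities
parts = [rng.normal(size=5) for _ in range(3)]    # logits from 3 body parts
loss = part_loss_objective(g, parts, label=2)
print(loss > 0)
```

Averaging rather than summing the part terms keeps the objective's scale independent of the number of detected parts.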
Sign Language Recognition: A Deep Survey Rastgoo, Razieh; Kiani, Kourosh; Escalera, Sergio
Expert systems with applications,
February 2021, Volume:
164
Journal Article
Peer reviewed
Sign language, as a distinct form of communication, is important to large groups of people in society. The signs in each sign language vary in hand shape, motion profile, and the position of the hand, face, and body parts contributing to each sign, which makes visual sign language recognition a complex research area in computer vision. Many models have been proposed by different researchers, with significant improvements from deep learning approaches in recent years. In this survey, we review vision-based models of sign language recognition that use deep learning approaches from the last five years. While the overall trend of the proposed models indicates a significant improvement in recognition accuracy, some challenges remain to be solved. We present a taxonomy to categorize the proposed models for isolated and continuous sign language recognition, discussing applications, datasets, hybrid models, complexity, and future lines of research in the field.
• We perform a comprehensive review of recent works on sign language recognition.
• We define a taxonomy to group existing works and discuss their pros and cons.
• We discuss features, modalities, evaluation metrics, applications, and datasets.
• Different challenges and future lines of research in the field are presented.
Full text
Available for:
GEOZS, IJS, IMTLJ, KILJ, KISLJ, NLZOH, NUK, OILJ, PNG, SAZU, SBCE, SBJE, UILJ, UL, UM, UPCLJ, UPUK, ZAGLJ, ZRSKP