Although communication capabilities are displayed by many vertebrate groups, some repertoires are poorly known, as in the case of xenarthrans, particularly armadillos, for which vocalization as a source of communication remains poorly understood and relies on isolated reports of sounds. Here we provide the first description of a behavioral response associated with sound emission in two subjects of Dasypus novemcinctus. Audio and video recordings were made for subsequent analysis of the expressed behaviors and emitted calls, which yielded 76 vocalizations from a total of eight video recordings randomly collected from 2017 to 2019. The sound is acoustically characterized by both inhale and exhale phases composed of two vocal units, and no harmonic structure was observed. Because the subjects always produced these vocalizations while cornered and exhibiting defensive behavior in response to another subject or human disturbance, these vocalizations were termed distress calls. Subjects produced a hiss-purr-like sound while trying to avoid contact with another, bowing or lowering their bodies, humping, or even moving elsewhere when sound production ceased. This shows that the sound repertoire of armadillos remains to be unveiled and seems to be much more complex than previously thought.
Autonomous sound recording techniques have gained considerable traction in the last decade, but the question remains whether they can replace human observation surveys to sample sonant animals. For birds in particular, survey methods have been tested extensively using point counts and sound recording surveys. Here, we review the latest evidence for this taxon within the frame of a systematic map. We compare the sampling effectiveness of these two survey methods, the output variables they produce, and their practicality. When assessed against the standard of point counts, autonomous sound recording proves to be a powerful tool that samples at least as many species. This technology can monitor birds in an exhaustive, standardized, and verifiable way. Moreover, sound recorders give access to entire soundscapes from which new data types can be derived (vocal activity, acoustic indices). Variables such as abundance, density, occupancy, or species richness can be obtained to yield data sets that are comparable to and compatible with point counts. Finally, autonomous sound recorders allow investigations at high temporal and spatial resolution and coverage, which are more cost effective and cannot be achieved by human observations alone, even though small-scale studies might be more cost effective when carried out with point counts. Sound recorders can be deployed in many places and are more scalable and reliable, making them the better choice for bird surveys in an increasingly data-driven time. We provide an overview of currently available recorders and discuss their specifications to guide future study designs.
The aim of the present study was to evaluate the acoustic activity of Litopenaeus vannamei of different size classes during feeding in captivity, as well as to describe the sound generation mechanism and the main associated acoustic variables. The structure responsible for sound emission was identified based on simultaneous audio and video recordings during the consumption of feed pellets. Eighteen animals divided into three size classes (small: 13.03 ± 1.87 g; medium: 22.09 ± 2.20 g; large: 35.31 ± 3.20 g) were used for the acoustic characterization of feeding activity. Each animal was fed three pellets (48 ± 4 mg), offered in sequence. The recording of each pellet offered lasted 10 min, beginning at the point at which the animal took the pellet. The number of sound pulses ("clicking" sounds) per pellet ingested was counted and related to food intake. L. vannamei emits sound during the feeding process, which is associated with the closing of the mandibles during the shredding of the food. The average values for the acoustic variables were a minimum frequency of 3.47 ± 0.32 kHz, a maximum frequency of 37.75 ± 2.44 kHz, a peak frequency of 11.1 ± 3.39 kHz, a maximum energy of 83.55 ± 3.39 dB, and a sound duration of 4.7 ± 0.2 ms. No statistically significant differences in the acoustic variables were found among the size classes or across the sequence of pellets offered. The number of clicks per pellet ranged from 121 to 154 across all size classes. However, the number of clicks generated by the large class was significantly higher during the first minute after capture of the pellets, dropping significantly after five minutes in comparison with the other size classes. The findings demonstrate that L. vannamei is acoustically active and that the sounds generated can be used as an indicator of feeding activity in captivity.
The click rate per pellet or per period of time, combined with the maximum energy generated at a specific frequency (peak frequency), can be used as an indicator of the quantity of feed consumed by the animals.
•The Pacific white shrimp L. vannamei is acoustically active and emits sound (clicks) during the process of feeding.
•The click rate per pellet or per period of time, and the energy at a specific frequency, can be related to the feeding activity of L. vannamei.
•The acoustic parameters evaluated are useful for applying Passive Acoustic Monitoring (PAM) in the feeding management of L. vannamei.
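As an illustration of how such feeding clicks might be counted automatically from a recording, here is a minimal sketch of pulse counting by envelope thresholding in plain NumPy. This is not the authors' analysis pipeline; the threshold, minimum gap, and the synthetic test signal are assumptions chosen for the example.

```python
import numpy as np

def count_clicks(signal, fs, threshold_db=-30.0, min_gap_ms=2.0):
    """Count short broadband pulses ("clicks") in a mono recording.

    signal: 1-D array of samples; fs: sample rate in Hz.
    A click is registered at each rising threshold crossing of the
    envelope; crossings closer than min_gap_ms are merged into one event.
    """
    envelope = np.abs(signal)
    peak = envelope.max()
    if peak == 0:
        return 0
    # Threshold relative to the loudest sample, in dB
    thresh = peak * 10 ** (threshold_db / 20.0)
    above = envelope >= thresh
    # Rising edges: sample above threshold, previous sample below
    edges = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    if edges.size == 0:
        return 0
    # Merge edges separated by less than the minimum gap
    min_gap = int(fs * min_gap_ms / 1000.0)
    keep = np.diff(edges, prepend=-min_gap - 1) > min_gap
    return int(keep.sum())

# Synthetic example: three 5 ms, 11 kHz pulses in one second of faint noise
fs = 100_000
t = np.arange(fs) / fs
sig = 0.001 * np.random.default_rng(0).standard_normal(fs)
for start in (0.1, 0.4, 0.7):
    i = int(start * fs)
    sig[i:i + 500] += 0.5 * np.sin(2 * np.pi * 11_000 * t[:500])
print(count_clicks(sig, fs))  # → 3
```

A fixed minimum gap of 2 ms merges the many within-pulse threshold crossings of an oscillating waveform into one event, which matters here because the reported click duration (4.7 ± 0.2 ms) spans dozens of carrier cycles.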
We present a high-resolution, densely sampled data set of wild bird songs collected over multiple years from a single population of great tits, Parus major, in the U.K. The data set includes over 1,100,000 individual acoustic units from 109,963 richly annotated songs, sung by more than 400 individual birds, and provides unprecedented detail on the vocal behaviour of wild birds. Here, we describe the data collection and processing procedures and provide a summary of the data. We also discuss potential research questions that can be addressed using this data set, including behavioural repeatability and stability, links between vocal performance and reproductive success, the timing of song production, syntactic organization of song production and song learning in the wild. We have made the data set and associated software tools publicly available with the aim that other researchers can benefit from this resource and use it to further our understanding of bird vocal behaviour in the wild.
•We present a large data set of songs from a wild population of great tits.
•It contains 1.1M acoustic units from 109K songs recorded over several years.
•It includes extensive metadata and annotations.
•The data set and associated software tools are publicly available.
The source-filter theory suggests that animal traits, such as body size, are reliably encoded in vocalizations. These vocal signals, with a likely precopulatory function, are thought to be costly; given energetic constraints, they are expected to be in a trade-off with postcopulatory traits, such as testicular volume. Although this trade-off has been generally tested through comparative studies across species, it remains understudied whether it holds within a single species. Using parallel-laser photogrammetry, we conducted a 9-month study at Palenque National Park, Mexico, to investigate whether fundamental frequency (F0) or formant dispersion (ΔF) of roars and barks from 14 male black howler monkeys encode cues of body size, and whether they are in a trade-off with testicular volume. We found that only roar ΔF was associated with body size, with larger males producing roars with lower ΔF, suggesting a likely use of roars in male–male competition or female mate choice in black howlers. In contrast, after accounting for the positive effect of body size on testicular volume, no association was found between these vocal features and testicular volume. Our results show the presence of acoustic allometry within roars of male black howlers and suggest the absence of a trade-off within a single species, despite its presence at the genus level.
•Roar formant dispersion was an honest signal of body size in adult male black howlers.
•Formant dispersion, not fundamental frequency, correlated negatively with body size.
•Barks did not contain information about adult male body size.
•Testicular volume was not correlated with acoustic features of roars or barks.
•Vocal–testicular trade-off reported across howler species was absent in black howlers.
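Formant dispersion (ΔF), as used in this line of work, is the mean spacing between adjacent formant frequencies; under the source-filter theory, a uniform tube closed at one end has formants spaced by c / (2L), so ΔF gives an apparent vocal tract length of L = c / (2ΔF), which is why ΔF scales with body size. A minimal sketch of the calculation, with purely hypothetical formant values that are not from the study:

```python
import numpy as np

SPEED_OF_SOUND = 350.0  # m/s in warm, humid air (approximate assumption)

def formant_dispersion(formants_hz):
    """Mean spacing between adjacent formant frequencies (ΔF)."""
    f = np.sort(np.asarray(formants_hz, dtype=float))
    return float(np.mean(np.diff(f)))

def apparent_vocal_tract_length(delta_f_hz, c=SPEED_OF_SOUND):
    """Vocal tract length implied by ΔF for a uniform tube
    closed at one end: adjacent formants are spaced c / (2L),
    so L = c / (2 * ΔF)."""
    return c / (2.0 * delta_f_hz)

# Hypothetical formant estimates (Hz) from one call -- illustrative only
formants = [400.0, 1150.0, 1900.0, 2650.0]
dF = formant_dispersion(formants)       # 750.0 Hz
print(apparent_vocal_tract_length(dF))  # ≈ 0.233 m
```

Because L appears in the denominator, larger males (longer vocal tracts) produce lower ΔF, matching the negative correlation reported above.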
Based on field surveys undertaken in two conservation areas, we report new distribution data of Hyalinobatrachium taylori (Goin, 1968) and H. tricolor Castroviejo-Fisher, Vilà, Ayarzagüena, Blanc & Ernst, 2011 from the state of Amapá, northern Brazil. We provide acoustic data from these new populations. These are the first records of H. taylori and H. tricolor from Amapá, extending the geographic distributions of these species by 317 km from Mitaraka and 320 km from Saut Grand Machicou, respectively, both in French Guiana.
► We address methodological problems in Cardoso & Atwell (2011, Animal Behaviour, 82, 831–836).
► We illustrate common problems with taking acoustic measurements from spectrograms.
► Reliable amplitude measurements require calibrated and controlled recordings.
► We explain the interrelationship of frequency and amplitude in animal vocalizations.
► Bioacoustic studies should consider basic principles of acoustics and proper analysis tools.
•Apply machine learning techniques to classify dolphin vocalizations.
•Study the impact of the model parameters on classification accuracy.
•General framework applicable to the analysis of other marine mammals.
Bioacoustics allows researchers to study animals through their vocalizations with non-invasive methods. The analysis of the recordings is a difficult task that is best handled by machine learning methods. Hidden Markov Models (HMMs), machine learning methods widely used in human speech processing, were developed and implemented for the discrimination of 11 genera and 43 species of the New World warblers (family Parulidae). Based on the CLO-43SD database, the fundamental goal of the experiments was to determine the classification accuracy for the specific genus and species of birds. Using Mel-Frequency Cepstral Coefficients (MFCCs), along with log energy and time-derivative features extracted from the vocalizations, HMMs containing two states with single underlying Gaussian Mixture Models (GMMs) achieved classification accuracies of 91.55% across 11 genera of birds and 63.92% across 43 species of birds. These results suggest that the framework could be applied to the analysis of other birds for both classification and detection of vocalizations.
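As a rough illustration of the front end of such a pipeline, the sketch below computes textbook MFCC-style features in plain NumPy (framing, windowed FFT, triangular mel filterbank, log, DCT). It is not the exact feature extraction used with CLO-43SD (which also included log energy and time derivatives), the HMM/GMM classifier stage is omitted for brevity, and all parameter values are illustrative.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, fs, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    """Toy MFCC extraction: frame -> |FFT|^2 -> mel filterbank -> log -> DCT."""
    # Frame the signal with a Hann window
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([signal[i*hop : i*hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2

    # Triangular mel filterbank spanning 0 Hz .. fs/2
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)
    log_energy = np.log(power @ fbank.T + 1e-10)

    # DCT-II to decorrelate; keep the first n_ceps coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return log_energy @ dct.T

# One second of a 1 kHz tone yields one 13-dim feature vector per frame
fs = 16_000
tone = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)
feats = mfcc(tone, fs)
print(feats.shape)
```

In a full system, each class (genus or species) would get its own HMM trained on these frame-level feature sequences, and a test recording would be assigned to the class whose model gives it the highest likelihood.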
Stressed plants show altered phenotypes, including changes in color, smell, and shape. Yet, airborne sounds emitted by stressed plants have not been investigated before. Here we show that stressed plants emit airborne sounds that can be recorded from a distance and classified. We recorded ultrasonic sounds emitted by tomato and tobacco plants inside an acoustic chamber, and in a greenhouse, while monitoring the plant’s physiological parameters. We developed machine learning models that succeeded in identifying the condition of the plants, including dehydration level and injury, based solely on the emitted sounds. These informative sounds may also be detectable by other organisms. This work opens avenues for understanding plants and their interactions with the environment and may have significant impact on agriculture.
•Plants emit ultrasonic airborne sounds when stressed.
•The emitted sounds reveal plant type and condition.
•Plant sounds can be detected and interpreted in a greenhouse setting.
Plants emit species- and stress-specific airborne sounds that can be detected in acoustic chambers and greenhouses.
Animal vocalisations and natural soundscapes are fascinating objects of study, and contain valuable evidence about animal behaviours, populations and ecosystems. They are studied in bioacoustics and ecoacoustics, with signal processing and analysis as an important component. Computational bioacoustics has accelerated in recent decades due to the growth of affordable digital sound recording devices, and to huge progress in informatics such as big data, signal processing and machine learning. Methods are inherited from the wider field of deep learning, including speech and image processing. However, the tasks, demands and data characteristics are often different from those addressed in speech or music analysis. There remain unsolved problems, and tasks for which evidence is surely present in many acoustic signals, but not yet realised. In this paper I review the state of the art in deep learning for computational bioacoustics, aiming to clarify key concepts and to identify and analyse knowledge gaps. Based on this, I offer a subjective but principled roadmap for computational bioacoustics with deep learning: topics that the community should aim to address, in order to make the most of future developments in AI and informatics, and to use audio data in answering zoological and ecological questions.