Automatic analysis, recognition and prediction of the behaviour of large-scale crowds in video-surveillance data is a research field of paramount importance for the security of modern societies: it serves to predict, and help prevent, disasters in public places where crowds of people gather. This paper proposes a novel method for generating meta-tracklets and recognizing dominant motion patterns as a basis for automatic crowd behaviour analysis at the macroscopic level, where the crowd is treated as a single entity. The defining characteristic of macroscopic crowd scenes is that it is impossible to detect and track individuals in the scene. The idea of the method proposed in this paper is to recognize dominant crowd motion patterns while avoiding time-consuming and error-prone crowd segmentation, crowd tracking and detection of regions of interest, which accelerates the process of determining dominant motion patterns and recognizing crowd behaviour. The method is inspired by quantum mechanics: it treats a set of advected points in a video clip as quantum-mechanical particles and combines their tracklets with the interaction of wave functions spread out from the particle positions. A wave function is expressed in the form of an asymmetric potential function, and peaks of the resulting wave field define the most probable particle flow, i.e. a meta-tracklet. Dominant motion patterns are then recognized by applying functions of fuzzy predicates, which encode a combination of common-sense and human expert knowledge about crowd motions, to the meta-tracklets. Experimental results on a subset of the UCF dataset and on AGORASET crowd simulation videos show promising performance in dominant motion pattern recognition.
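The wave-field idea can be illustrated with a minimal one-dimensional sketch: each particle position contributes a potential to a shared field, and the field's peak marks the most probable particle flow, which anchors a meta-tracklet. A symmetric Gaussian stands in here for the paper's asymmetric potential, and all names and values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def wave_field(particle_xs, grid, sigma=1.0):
    """Superpose a potential from each particle position onto a 1-D grid.

    A symmetric Gaussian stands in for the paper's asymmetric potential;
    the peak of the accumulated field marks the most probable flow location.
    """
    field = np.zeros_like(grid)
    for x in particle_xs:
        field += np.exp(-((grid - x) ** 2) / (2 * sigma ** 2))
    return field

grid = np.linspace(0.0, 10.0, 101)
particles = [4.8, 5.0, 5.2, 9.0]  # three clustered particles and one outlier
peak_x = grid[np.argmax(wave_field(particles, grid))]  # peak near the cluster
```

The clustered particles reinforce each other's potentials, so the field peaks near 5.0 rather than at the isolated particle, which is the intuition behind letting interacting wave functions, rather than individual trajectories, define the dominant flow.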
People enjoy spending time in the wilderness for numerous reasons. However, they occasionally get lost or injured, and their survival depends on being found and rescued in the shortest possible time. A search and rescue (SAR) operation is launched after an accident is reported, and all available resources are activated. The inclusion of drones in SAR operations has enabled the use of computer vision methods to automatically detect persons in aerial imagery. When searching by drone, preference is given to oblique photographs, which cover a larger area within a single image and thus reduce the search time. Unlike vertical photographs, oblique photographs involve a significant scale change, making it challenging to locate a person in the real world and determine their distance from the drone. To solve this problem, encouraged by our previous successful simulations, we explored applying the raycast method to person geolocalization and distance determination in real-world scenarios. In this paper, we propose a system able to precisely geolocate persons automatically detected in offline-processed images recorded during a SAR mission. After a series of experiments on terrains of different configurations and complexity, using a custom-made 3D terrain generator and raycaster along with a deep neural network-based person detector trained on our custom dataset, we defined a raycast-based method for geolocating detected persons that allows the use of low-cost commercial drones with a monocular camera and no Real-Time Kinematic module, while enabling laser rangefinder emulation during offline image analysis. Our person geolocation method overcomes the problems faced by previous methods and, using a single flight sequence with only 4 consecutive detections, significantly outperforms the previous best results, with a reliability of 42.85% (geolocation error of 0.7 m on a recording from a 30 m height).
Moreover, a processing time of only 247 s for data recorded during a 21-minute drone flight covering an area of approximately 10 ha proves that the proposed method can be effectively used in actual SAR missions. We also propose a new evaluation metric (ErrDist) for person geolocalization and provide recommendations for using the proposed system for person detection and geolocation in real-world scenarios.
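The core raycast step can be sketched as follows: starting from the camera position, the ray toward a detected person is marched over a terrain heightmap until it drops below the surface, and the crossing point is taken as the person's ground location, from which distance to the drone follows directly. This is a minimal illustration under assumed names (`raycast_ground`, `height_fn`) and a flat-terrain example; the authors' raycaster and 3D terrain generator are not reproduced here.

```python
import math

def raycast_ground(cam_pos, ray_dir, height_fn, step=0.5, max_dist=500.0):
    """March a ray from the camera until it crosses the terrain surface.

    cam_pos:   (x, y, z) camera position in metres.
    ray_dir:   unit direction vector of the ray toward the detection.
    height_fn: terrain elevation lookup, z = height_fn(x, y) (a heightmap).
    Returns the estimated ground hit point, or None if nothing is hit.
    """
    t = 0.0
    while t < max_dist:
        x = cam_pos[0] + t * ray_dir[0]
        y = cam_pos[1] + t * ray_dir[1]
        z = cam_pos[2] + t * ray_dir[2]
        if z <= height_fn(x, y):  # ray has passed below the terrain
            return (x, y, height_fn(x, y))
        t += step
    return None

# Example: drone 30 m above flat ground, ray pitched 45 degrees downward.
hit = raycast_ground((0.0, 0.0, 30.0),
                     (math.sqrt(0.5), 0.0, -math.sqrt(0.5)),
                     lambda x, y: 0.0)
```

With a 45-degree pitch from 30 m, the ray reaches the ground roughly 30 m ahead of the drone; over real terrain the same march against the heightmap emulates a laser rangefinder during offline analysis.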
Teratomas are tumors derived from germ cells, most frequently arising in the gonads. The aim of this study was to determine the number of ovarian teratomas diagnosed in the routine biopsy material at the Ljudevit Jurak Clinical Department of Pathology, Sestre milosrdnice University Hospital Center during a 5-year period, as well as their clinical, gross and microscopic characteristics. Teratomas accounted for 48.6% (n=166) of primary ovarian tumors. The mean patient age was 34.74±12.37 years. The difference in teratoma incidence between the left and right ovary was not significant; bilateral teratomas were found in 13 patients. Teratomas were detected by ultrasonography in 115 (69.27%) cases, and the rest were found during surgery performed for other indications. Most teratomas (n=161; 96.9%) were mature and cystic (dermoid cysts). Mature solid teratomas were diagnosed in 5 (3.01%) cases, ovarian struma in 2 (1.8%) cases and strumal carcinoid in 1 (1.2%) case. Mature cystic teratomas contained sebaceous material in 123 (76.8%) cases, and a total of 16 teeth were found; 157 (94.5%) teratomas measured <10 cm in largest diameter. Microscopically, mature cystic teratomas most frequently contained ectodermal (skin with appendages, mature glia and nerve ganglia) and mesodermal (fibrous and fat tissue, cartilage and bone) tissues. The most frequently found tissues of endodermal origin were respiratory and intestinal epithelia. Small foci of thyroid tissue were found in 20 (12%) teratomas. A chronic granulomatous foreign body reaction in the wall of mature cystic teratomas was found in 11 (6.8%) tumors.
Human Detection in Thermal Imaging Using YOLO Ivašić-Kos, Marina; Krišto, Mate; Pobar, Miran
Proceedings of the 2019 5th International Conference on Computer and Technology Applications,
04/2019
Conference Proceeding
In this paper, we consider the problem of automatic detection of humans in thermal videos and images. The thermal videos were recorded on a meadow next to a small forest, with up to three persons present in the scene at different positions and ranges from the camera. To simulate realistic conditions that can occur during surveillance and monitoring of protected areas, all videos were recorded at night but in different weather conditions: clear weather, rain, and fog. We present the results of human detection on a custom dataset of thermal videos using the out-of-the-box YOLO convolutional neural network and a YOLO network trained on a subset of our dataset. YOLO is an object detector pretrained on the COCO dataset of RGB images of various object classes. Experimental results on the test set show significantly improved human detection performance in thermal imaging, in terms of average precision, for the trained YOLO model over the original model.
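Average precision, the metric used here to compare the pretrained and fine-tuned detectors, can be computed from score-ranked detections as the area under the precision-recall curve. A minimal sketch, assuming detections have already been matched to ground truth (the paper's exact matching protocol is not reproduced):

```python
def average_precision(scored, num_gt):
    """Area under the precision-recall curve for score-ranked detections.

    scored: list of (confidence, is_true_positive) pairs.
    num_gt: number of ground-truth persons in the evaluation set.
    """
    scored = sorted(scored, key=lambda p: p[0], reverse=True)
    tp = 0
    ap = 0.0
    for rank, (_, is_tp) in enumerate(scored, start=1):
        if is_tp:
            tp += 1
            ap += (tp / rank) / num_gt  # precision at each recall increment
    return ap

# Two ground-truth persons; three detections, one of them a false positive.
ap = average_precision([(0.9, True), (0.8, False), (0.7, True)], num_gt=2)
```

Here the first hit contributes precision 1/1 and the second 2/3, so AP is (1 + 2/3)/2 ≈ 0.83; a detector fine-tuned on thermal data raises this figure by pushing true positives above false positives in the ranking.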
We present a patient with trisomy 18 syndrome and bilateral Wilms' tumor (WT), the second such case reported in the literature. Physicians should remain alert to the possibility of WT in patients with trisomy 18 who may survive beyond infancy; in such cases, periodic abdominal ultrasound for screening purposes may be essential. A critical review of the literature is presented.
Automatic image annotation involves automatically assigning useful keywords to an unlabelled image. The major goal is to bridge the so-called semantic gap between the available image features and the keywords that people might use to annotate images. Although different people will most likely use different words to annotate the same image, most people can use object or scene labels when searching for images.
We propose a two-tier annotation model where the first tier corresponds to object-level and the second tier to scene-level annotation. In the first tier, images are annotated with labels of the objects present in them, using multi-label classification methods on low-level features extracted from the images. Scene-level annotation is performed in the second tier, using originally developed inference-based algorithms for annotation refinement and for scene recognition. These algorithms use a fuzzy knowledge representation scheme based on Fuzzy Petri Nets (KRFPN), defined to enable reasoning with concepts useful for image annotation. To define the elements of the KRFPN scheme, novel data-driven algorithms for the acquisition of fuzzy knowledge are proposed.
The proposed image annotation model is evaluated separately on the first and second tiers using a dataset of outdoor images. The results outperform the published results obtained on the same image collection, both for object-level and for scene-level annotation. Different subsets of features composed of dominant colours, image moments, and GIST descriptors, as well as different classification methods (RAKEL, ML-kNN and Naïve Bayes), were tested in the first tier. The results of scene-level annotation in the second tier are also compared with a common classification method (Naïve Bayes) and show superior performance. The proposed model enables image annotation to be expanded with new concepts regardless of their level of abstraction.
• Multi-label classification and knowledge-based approach to image annotation.
• The definition of the fuzzy knowledge representation scheme based on FPN.
• Novel data-driven algorithms for automatic acquisition of fuzzy knowledge.
• Novel inference-based algorithms for annotation refinement and scene recognition.
• A comparison of inference-based scene classification with an ordinary approach.
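The second-tier idea of inferring a scene label from first-tier object labels can be sketched with a toy fuzzy rule evaluation: each object label carries a membership degree, a rule fires with the minimum membership of its objects (fuzzy AND), and a scene's score is the maximum over its rules (fuzzy OR). The rules and labels below are hypothetical examples, not the paper's KRFPN scheme or its learned knowledge.

```python
def fuzzy_scene(object_conf, rules):
    """Score scene labels from object labels via simple fuzzy rules.

    object_conf: {object_label: membership degree in [0, 1]}.
    rules: {scene: list of object-label sets}; a scene's score is the
           max over its rules of the min membership of the rule's objects.
    """
    scores = {}
    for scene, clauses in rules.items():
        scores[scene] = max(
            (min(object_conf.get(o, 0.0) for o in clause) for clause in clauses),
            default=0.0,
        )
    return scores

# Hypothetical rules: "beach" needs sand AND sea; "mountain" needs rock AND sky.
rules = {"beach": [{"sand", "sea"}], "mountain": [{"rock", "sky"}]}
scores = fuzzy_scene({"sand": 0.9, "sea": 0.7, "sky": 0.8}, rules)
# "beach" scores min(0.9, 0.7) = 0.7; "mountain" scores 0 (no "rock" detected)
```

Keeping the min membership makes the inference degrade gracefully with uncertain object detections, which is one reason a fuzzy scheme suits annotation refinement better than crisp rules.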
The possible origins of twin pregnancy are presented, along with the ways in which the human placenta develops in the two forms of twin pregnancy: dizygotic and monozygotic. Special emphasis is placed on complications of twin pregnancies that arise from the mode of placentation. In twin pregnancies in which two chorionic plates and two amniotic cavities develop (separate or fused), the complications are largely the same as in singleton pregnancies, with an increased incidence of velamentous insertion of one or both umbilical cords, as well as of a single umbilical artery, usually in only one twin. In monozygotic twins with a single chorionic plate and one shared amniotic cavity, the most severe complication is intrauterine death of the twins, which most often occurs in the second trimester of pregnancy due to complications related to entanglement of the umbilical cords. Monozygotic twins whose placenta consists of a shared chorionic plate and two amniotic cavities are at particular risk of developing twin-to-twin transfusion syndrome (TTTS) due to the presence of vascular anastomoses. The procedure for gross pathological and histopathological analysis of twin placentas is described, along with the method of demonstrating vascular anastomoses and the autopsy findings in cases of intrauterine death of one or both twins. Also described is the twin pregnancy complication known as twin reversed arterial perfusion (TRAP), its presumed mechanism of origin, and the morphological findings in the twin in whom this pathological change developed.
The large amount of data created every day can be used to develop artificial intelligence algorithms in the domain of computer vision that solve tasks such as image classification, person detection and action recognition. These datasets are most often created from videos and images downloaded from television channels or the YouTube social network, and are collected and prepared for the appropriate task. We were interested in the task of detecting swimmers, so that the model could be used to recognize and improve swimming techniques. Although huge open image databases such as COCO and ImageNet, prepared for supervised machine learning, exist today, as well as sports scene databases such as the Olympic Sports Dataset, UCF Action Sport dataset or Sports-1M that include images of more popular (more watched) sports, none of them includes images that could be used to build our swimmer detection model. Therefore, this paper describes the process of recording and collecting video material and preparing the UNIRI-SWM image set for swimmer detection. The set includes shots of swimmers in real, situational training and competition conditions, filmed by action cameras from different shooting angles. The paper presents the results of swimmer detection using the deep convolutional neural networks Mask R-CNN and YOLOv3, trained on a set of general images, before and after fine-tuning on the UNIRI-SWM set. The results show that after adapting the model on an appropriate set of images from the swimming domain, very good swimmer detection results can be achieved.
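Detection results like these are typically scored by matching predicted boxes to ground-truth boxes with the standard intersection-over-union (IoU) criterion, a prediction usually counting as correct when IoU exceeds a threshold such as 0.5. A minimal sketch of the criterion itself (the paper's exact evaluation protocol is assumed, not quoted):

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

# Two 2x2 boxes overlapping in a 1x1 corner: intersection 1, union 7.
overlap = iou((0, 0, 2, 2), (1, 1, 3, 3))  # 1/7
```

Under a 0.5 threshold this pair would not match, which illustrates why partially occluded swimmers at the water surface are hard cases before domain-specific fine-tuning.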