Many existing Monte Carlo methods rely on multiple importance sampling (MIS) to achieve robustness and versatility. Typically, the balance or power heuristics are used, mostly thanks to their seemingly strong guarantees on variance. We show that these MIS heuristics are oblivious to the effect of certain variance reduction techniques like stratification. This shortcoming is particularly pronounced when unstratified and stratified techniques are combined (e.g., in a bidirectional path tracer). We propose to enhance the balance heuristic by injecting variance estimates of individual techniques, to reduce the variance of the combined estimator in such cases. Our method is simple to implement and introduces little overhead.
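The idea of injecting variance estimates into the balance heuristic can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-technique variance estimates `var_estimates` and the specific rescaling (dividing each technique's score by its estimated variance) are assumptions made for the sketch.

```python
import numpy as np

def balance_weights(pdfs, counts):
    """Classic balance heuristic: w_t(x) is proportional to n_t * p_t(x)."""
    scores = counts * pdfs
    return scores / scores.sum()

def variance_scaled_weights(pdfs, counts, var_estimates):
    """Hypothetical variance-injected variant: each technique's score is
    divided by an estimate of its per-sample variance, so a stratified
    (low-variance) technique receives more weight."""
    scores = counts * pdfs / var_estimates
    return scores / scores.sum()

# Two techniques evaluated at a single sample point x (invented numbers):
pdfs = np.array([0.8, 0.5])       # p_1(x), p_2(x)
counts = np.array([4, 4])         # samples drawn per technique
variances = np.array([1.0, 0.1])  # technique 2 is stratified -> lower variance

w_bal = balance_weights(pdfs, counts)
w_var = variance_scaled_weights(pdfs, counts, variances)
```

With these numbers, the stratified technique's weight rises well above its balance-heuristic value, which is the qualitative behavior the abstract describes.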
"How can we animate 3D characters from a movie script or move robots by simply telling them what we would like them to do?" "How unstructured and complex can we make a sentence and still generate plausible movements from it?" These are questions that need to be answered in the long run, as the field is still in its infancy. Inspired by these problems, we present a new technique for generating compositional actions, which handles complex input sentences. Our output is a 3D pose sequence depicting the actions in the input sentence. We propose a hierarchical two-stream sequential model to explore a finer joint-level mapping between natural language sentences and 3D pose sequences corresponding to the given motion. We learn two manifold representations of the motion, one each for the upper body and the lower body movements. Our model can generate plausible pose sequences for short sentences describing single actions as well as long complex sentences describing multiple sequential and compositional actions. We evaluate our proposed model on the publicly available KIT Motion-Language Dataset containing 3D pose data with human-annotated sentences. Experimental results show that our model advances the state of the art on text-based motion synthesis in objective evaluations by a margin of 50%. Qualitative evaluations based on a user study indicate that our synthesized motions are perceived to be the closest to the ground-truth motion captures for both short and compositional sentences.
Multiple Importance Sampling (MIS) is a key technique for achieving robustness of Monte Carlo estimators in computer graphics and other fields. We derive optimal weighting functions for MIS that provably minimize the variance of an MIS estimator, given a set of sampling techniques. We show that the resulting variance reduction over the balance heuristic can be higher than predicted by the variance bounds derived by Veach and Guibas, who assumed only non-negative weights in their proof. We theoretically analyze the variance of the optimal MIS weights and show the relation to the variance of the balance heuristic. Furthermore, we establish a connection between the new weighting functions and control variates as previously applied to mixture sampling. We apply the new optimal weights to integration problems in light transport and show that they allow for new design considerations when choosing the appropriate sampling techniques for a given integration problem.
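For reference, the baseline the paper improves on is the balance heuristic combination of several sampling techniques. The following is a self-contained sketch of a standard two-technique MIS estimator on a toy integral; the integrand and the two densities are invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return x * x  # toy integrand on [0, 1]; exact integral is 1/3

# Two sampling techniques:
#   technique 1: uniform, p1(x) = 1
#   technique 2: linear,  p2(x) = 2x, sampled by inverse CDF x = sqrt(u)
n = 20000
x1 = rng.random(n)
x2 = np.sqrt(rng.random(n))

def p1(x):
    return np.ones_like(x)

def p2(x):
    return 2.0 * x

# Balance heuristic weights: w_t(x) = n_t p_t(x) / sum_s n_s p_s(x)
w1 = n * p1(x1) / (n * p1(x1) + n * p2(x1))
w2 = n * p2(x2) / (n * p1(x2) + n * p2(x2))

# MIS estimator: sum over techniques of the weighted per-technique averages
estimate = np.mean(w1 * f(x1) / p1(x1)) + np.mean(w2 * f(x2) / p2(x2))
```

The weighted contributions from both techniques sum to an unbiased estimate of the integral; the paper's optimal weights replace `w1`/`w2` with (possibly negative) functions that minimize the estimator's variance.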
Monitoring of gait patterns by insoles is popular to study behavior and activity in the daily life of people and throughout the rehabilitation process of patients. Live data analyses may improve personalized prevention and treatment regimens, as well as rehabilitation. The M-shaped plantar pressure curve during the stance phase is mainly defined by the loading and unloading slope, 2 maxima, 1 minimum, as well as the force during defined periods. When monitoring gait continuously, walking uphill or downhill could affect this curve in characteristic ways.
We hypothesized typical changes in the stance phase curve measured by insoles when walking on a slope.
In total, 40 healthy participants of both sexes were fitted with individually calibrated insoles with 16 pressure sensors each and a recording frequency of 100 Hz. Participants walked on a treadmill at 4 km/h for 1 minute in each of the following slopes: -20%, -15%, -10%, -5%, 0%, 5%, 10%, 15%, and 20%. Raw data were exported for analyses. A custom-developed data platform was used for data processing and parameter calculation, including step detection, data transformation, and normalization for time by natural cubic spline interpolation and force (proportion of body weight). To identify the time-axis positions of the desired maxima and minimum among the available extremum candidates in each step, a Gaussian filter was applied (σ=3, kernel size 7). Inconclusive extremum candidates were further processed by screening for time plausibility, maximum or minimum pool filtering, and monotony. Several parameters that describe the curve trajectory were computed for each step. The normal distribution of data was tested by the Kolmogorov-Smirnov and Shapiro-Wilk tests.
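The extremum-detection step described above can be sketched as follows. This is a minimal illustration with a synthetic M-shaped curve, assuming a Gaussian kernel with σ=3 and kernel size 7 as stated, and simple neighbor comparison for extremum candidates; the actual pipeline's plausibility screening and pool filtering are not reproduced.

```python
import numpy as np

# Synthetic M-shaped stance-phase curve (force vs. normalized time, 101 samples)
t = np.linspace(0.0, 1.0, 101)
curve = np.exp(-((t - 0.25) / 0.12) ** 2) + np.exp(-((t - 0.75) / 0.12) ** 2)
curve += 0.005 * np.sin(40 * np.pi * t)  # small high-frequency measurement noise

# Gaussian filter as described (sigma=3, kernel size 7), applied by convolution
k = np.arange(7) - 3
kernel = np.exp(-k ** 2 / (2 * 3.0 ** 2))
kernel /= kernel.sum()
smooth = np.convolve(curve, kernel, mode="same")

# Local extremum candidates on the smoothed curve (interior points only)
is_max = (smooth[1:-1] > smooth[:-2]) & (smooth[1:-1] > smooth[2:])
is_min = (smooth[1:-1] < smooth[:-2]) & (smooth[1:-1] < smooth[2:])
maxima = np.where(is_max)[0] + 1
minima = np.where(is_min)[0] + 1
```

On this synthetic curve the filter recovers the two force maxima and the midstance minimum despite the added noise.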
Data were normally distributed. An analysis of variance with the gait parameters as dependent variables and slope as the independent variable revealed significant slope-related changes for the following parameters of the stance phase curve: the mean force during loading and unloading, the 2 maxima and the minimum, as well as the loading and unloading slope (all P<.001). A simultaneous increase in the loading slope, the first maximum, and the mean loading force, combined with a decrease in the mean unloading force, the second maximum, and the unloading slope, is characteristic of downhill walking. The opposite pattern represents uphill walking. The minimum was largest during horizontal walking, and its value dropped during uphill and downhill walking alike. It is therefore not a suitable parameter to distinguish between uphill and downhill walking.
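The analysis-of-variance step can be illustrated with a one-way F statistic computed directly. The three slope groups and their values below are invented for the sketch and do not reproduce the study's data.

```python
import numpy as np

def one_way_anova_F(groups):
    """F statistic of a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    all_data = np.concatenate(groups)
    grand_mean = all_data.mean()
    k = len(groups)
    n = all_data.size
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = np.random.default_rng(1)
# Hypothetical loading-slope values for three treadmill slopes:
downhill = rng.normal(1.4, 0.1, 40)  # -20%
level = rng.normal(1.0, 0.1, 40)     #   0%
uphill = rng.normal(0.7, 0.1, 40)    # +20%
F = one_way_anova_F([downhill, level, uphill])
```

A large F value, as produced here, corresponds to the significant slope effects (P<.001) reported in the abstract.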
While patient-related factors, such as anthropometrics, injury, or disease shape the stance phase curve on a longer-term scale, walking on slopes leads to temporary and characteristic short-term changes in the curve trajectory.
In this study, we present a novel strategy for the finite element method (FEM) applied to linear elastic problems of very high resolution on graphics processing units (GPUs). The approach exploits regularities in the system matrix that occur in regular hexahedral grids to achieve cache-friendly, matrix-free FEM. The node-by-node method lies in the class of block-iterative Gauss-Seidel multigrid solvers. Our method significantly improves convergence times in cases where an ordered distribution of distinct materials is present in the dataset. The method was evaluated on three real-world datasets: an aluminum-silicon (AlSi) alloy and a dual-phase steel material sample, both captured by scanning electron tomography, and a clinical computed tomography (CT) scan of a tibia. The caching scheme leads to a speed-up factor of 2×-4× compared to the same code without the caching scheme. Additionally, it facilitates the computation of high-resolution problems that could not otherwise be computed due to memory consumption.
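The core idea of a matrix-free, node-by-node Gauss-Seidel solver can be shown on a much simpler model problem. The sketch below solves a 1D Poisson equation on a regular grid by applying the stencil node by node without ever assembling a system matrix; it is an analogy only, not the paper's GPU implementation or its elasticity stencil.

```python
import numpy as np

def gauss_seidel_matrix_free(f, h, iters):
    """Matrix-free Gauss-Seidel for -u'' = f on [0, 1] with u(0) = u(1) = 0.
    The 3-point stencil (-1, 2, -1)/h^2 is evaluated on the fly at each
    node; no system matrix is stored."""
    u = np.zeros_like(f)
    for _ in range(iters):
        for i in range(1, len(u) - 1):  # node-by-node sweep, in-place update
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

n = 33
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)  # exact solution: u(x) = sin(pi * x)
u = gauss_seidel_matrix_free(f, h, iters=2000)
```

Because the grid is regular, the "matrix row" for every node is the same stencil, which is exactly the regularity the paper exploits for cache-friendly access on hexahedral grids.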
The analysis of microscopy images has always been an important yet time-consuming process in materials science. Convolutional Neural Networks (CNNs) have been used very successfully for a number of tasks, such as image segmentation. However, training a CNN requires a large amount of hand-annotated data, which can be a problem for materials science data. We present a procedure to generate synthetic data based on ad hoc parametric data modelling for enhancing the generalization of trained neural network models. Especially in situations where it is not possible to gather a lot of data, such an approach is beneficial and may make it possible to train a neural network reasonably well. Furthermore, we show that targeted data generation by adaptively sampling the parameter space of the generative models gives superior results compared to generating random data points.
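The contrast between random and targeted generation can be sketched with a toy parametric generator. Everything here is hypothetical: `model_error` stands in for the validation error of a trained network as a function of the generation parameter, and the targeted strategy simply resamples parameters in proportion to that error.

```python
import numpy as np

rng = np.random.default_rng(2)

def model_error(theta):
    """Stand-in for the trained model's error on data generated with
    parameter theta (here the model is assumed weakest near theta = 0.8)."""
    return np.exp(-((theta - 0.8) ** 2) / 0.01)

# Random data generation: parameters drawn uniformly from the parameter space
random_thetas = rng.random(200)

# Targeted generation: importance-resample candidate parameters in
# proportion to the current model error, so new synthetic data is
# concentrated where the model performs worst
candidates = rng.random(1000)
weights = model_error(candidates)
weights /= weights.sum()
targeted_thetas = rng.choice(candidates, size=200, p=weights)
```

The targeted parameters cluster around the high-error region, so retraining on data generated from them addresses the model's weakness more directly than uniform sampling.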
The analysis of gait patterns and plantar pressure distributions by insoles is increasingly used to monitor patients and treatment progress, such as recovery after surgeries. Despite the popularity of pedography, also known as baropodography, characteristic effects of anthropometric and other individual parameters on the trajectory of the stance phase curve of the gait cycle have not been previously reported. We hypothesized characteristic effects of age, body height, body weight, body mass index, and handgrip strength on the plantar pressure curve trajectory during gait in healthy participants. Thirty-seven healthy women and men with an average age of 43.65 ± 17.59 years were fitted with Moticon OpenGO insoles equipped with 16 pressure sensors each. Data were recorded at a frequency of 100 Hz during walking at 4 km/h on a level treadmill for 1 minute. Data were processed by a custom-made step detection algorithm. The loading and unloading slopes as well as force extrema-based parameters were computed, and characteristic correlations with the targeted parameters were identified by multiple linear regression analysis. Age showed a negative correlation with the mean loading slope. Body height correlated with Fmean and the loading slope. Body weight and the body mass index correlated with all analyzed parameters except the loading slope. In addition, handgrip strength correlated with changes in the second half of the stance phase and did not affect the first half, which is likely due to a stronger kick-off. However, only up to 46% of the variability can be explained by age, body weight, body height, body mass index, and handgrip strength. Thus, further factors that were not considered in the present analysis must affect the trajectory of the gait cycle curve. In conclusion, all analyzed measures affect the trajectory of the stance phase curve. When analyzing insole data, it might be useful to correct for the identified factors by using the regression coefficients presented in this paper.
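The multiple linear regression step can be sketched with ordinary least squares. The predictors, coefficients, and outcome below are synthetic stand-ins chosen for illustration; they are not the study's data or its reported regression coefficients.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 37  # number of participants, as in the study

# Hypothetical predictors: age [y], height [cm], weight [kg], grip strength [kg]
age = rng.normal(44, 18, n)
height = rng.normal(172, 9, n)
weight = rng.normal(75, 12, n)
grip = rng.normal(38, 10, n)

# Synthetic gait parameter built from invented coefficients plus noise
true_coefs = np.array([-0.02, 0.01, 0.03, 0.015])
X = np.column_stack([age, height, weight, grip])
y = 1.5 + X @ true_coefs + rng.normal(0.0, 0.05, n)

# Multiple linear regression via least squares (intercept in first column)
A = np.column_stack([np.ones(n), X])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
r_squared = 1 - ((y - A @ coefs) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

Correcting insole data for such factors amounts to subtracting the fitted contribution `A @ coefs` (minus the intercept) from a measured parameter before comparison across individuals.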
Realistic rendering requires computing the global illumination in the scene, and Monte Carlo integration is the best-known method for doing that. The key to good performance is to carefully select the costly integration samples, which is usually achieved via importance sampling. Unfortunately, visibility is difficult to factor into the importance distribution, which can greatly increase variance in highly occluded scenes with complex illumination. In this paper, we present importance caching – a novel approach that selects those samples with a distribution that includes visibility, while maintaining efficiency by exploiting illumination smoothness. At a sparse set of locations in the scene, we construct and cache several types of probability distributions with respect to a set of virtual point lights (VPLs), which notably include visibility. Each distribution type is optimized for a specific lighting condition. For every shading point, we then borrow the distributions from nearby cached locations and use them for VPL sampling, avoiding additional bias. A novel multiple importance sampling framework finally combines the many estimators. In highly occluded scenes, where visibility is a major source of variance in the incident radiance, our approach can reduce variance by more than an order of magnitude. Even in such complex scenes we can obtain accurate and low noise previews with full global illumination in a couple of seconds on a single mid-range CPU.
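The benefit of a visibility-aware cached distribution can be sketched for discrete VPL sampling. The VPL contributions and the visibility mask below are invented for illustration, and the inverse-CDF sampler is a generic building block, not the paper's caching machinery.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical scene: 8 VPLs with unoccluded contributions at a cache
# location, and a visibility mask (0 = fully occluded from that location)
contribution = np.array([5.0, 1.0, 0.5, 3.0, 0.2, 4.0, 0.1, 2.0])
visible = np.array([1, 1, 0, 1, 1, 0, 1, 1], dtype=float)

# Cached distribution that includes visibility...
p_cached = contribution * visible
p_cached /= p_cached.sum()

# ...versus a naive importance distribution that ignores it
p_naive = contribution / contribution.sum()

def sample_vpl(pdf, n):
    """Inverse-CDF sampling of a discrete VPL index."""
    cdf = np.cumsum(pdf)
    return np.searchsorted(cdf, rng.random(n))

idx = sample_vpl(p_cached, 1000)
counts = np.bincount(idx, minlength=8)
```

The visibility-aware distribution never wastes samples on occluded VPLs, which is the main source of the variance reduction the abstract reports for highly occluded scenes.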
Vehicle-to-Everything (V2X) communication, essential for enhancing road safety, driving efficiency, and traffic management, must be robust against cybersecurity threats for successful deployment and acceptance. This survey comprehensively explores V2X security challenges, focusing on prevalent cybersecurity threats such as jamming, spoofing, Distributed Denial of Service (DDoS), and eavesdropping attacks. These threats were selected due to their prevalence and ability to compromise the integrity and reliability of V2X systems. Jamming can disrupt communications, spoofing can lead to data and identity manipulation, DDoS attacks can saturate system resources, and eavesdropping can compromise user privacy and information confidentiality. Addressing these major threats ensures that V2X systems are robust and secure for successful deployment and widespread acceptance. This work makes significant contributions to the field of V2X cybersecurity, starting with a thorough review and categorization of existing survey papers, providing a clear map of the current research landscape, and identifying areas needing further study. An extensive review uncovered a global landscape of V2X cybersecurity research. We highlight contributions from the leading countries in scientific publications and patent innovations, with notable advancements from leading corporations. This work educates and informs on the current state of V2X cybersecurity and identifies emerging trends and future research directions based on a year-by-year analysis of the literature and patents. The findings underscore the evolving cybersecurity landscape in V2X systems and the importance of continued innovation and research in this critical field. The survey navigates the complexities of securing V2X communications, emphasizing the necessity for advanced security protocols and technologies, and highlights innovative approaches within the global scientific and patent research context.
By providing a panoramic view of the field, this survey sets the stage for future advancements in V2X cybersecurity.
Monte Carlo rendering makes heavy use of mixture sampling and multiple importance sampling (MIS). Previous work has shown that control variates can be used to make such mixtures more efficient and more robust. However, the existing approaches failed to yield practical applications, chiefly because their underlying theory is based on the unrealistic assumption that a single mixture is optimized for a single integral. This is in stark contrast with rendering reality, where millions of integrals are computed, one per pixel, and each is infinitely recursive. We adapt and extend the theory introduced by previous work to tackle the challenges of real-world rendering applications. We achieve robust mixture sampling and (approximately) optimal MIS weighting for common applications such as light selection, BSDF sampling, and path guiding.
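The basic combination of mixture sampling with a control variate can be sketched on a toy integral. The integrand, the two mixture components, and the choice of control variate (a mixture component, whose integral is known to be 1) are illustrative assumptions; the paper's contribution concerns making this machinery practical across millions of recursive pixel integrals.

```python
import numpy as np

rng = np.random.default_rng(5)

def f(x):
    return x * x  # toy integrand on [0, 1]; exact integral is 1/3

# Mixture sampling: with probability 1/2 sample uniformly, otherwise
# from the linear pdf p2(x) = 2x (inverse CDF: x = sqrt(u))
n = 4000
x = np.where(rng.random(n) < 0.5, rng.random(n), np.sqrt(rng.random(n)))
p_mix = 0.5 * 1.0 + 0.5 * 2.0 * x  # mixture density at x

# Plain mixture estimator samples
plain = f(x) / p_mix

# Control variate: the component pdf p2 integrates to 1, so
# g(x) = p2(x)/p_mix(x) - 1 has zero expectation under the mixture
g = 2.0 * x / p_mix - 1.0
C = np.cov(plain, g)
beta = C[0, 1] / C[1, 1]  # empirically near-optimal coefficient
cv = plain - beta * g

estimate = cv.mean()
```

Because `g` has zero mean, subtracting `beta * g` leaves the estimate unbiased while cancelling much of the correlated fluctuation of `plain`.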