Abstract
The dual-medium geometric model is widely used for soil and rock mass environments, but conventional formulations only reflect the material exchange between fractures and the matrix, ignoring the hydraulic relationship between the pores and fractures. This paper develops a new method to construct a three-dimensional dual-medium geometric model. The method, written entirely in the Matlab scripting language, can describe the real spatial distribution of fractures in bedrock. Three-dimensional random fracture network geometric models with different parameters and three-dimensional porous media geometric models with Perlin-noise characteristics are constructed separately, and the dual-medium geometric model is then obtained by three-dimensional direct superposition. The paper describes the construction of the three-dimensional random fracture network geometric model, the three-dimensional porous media geometric model, and the dual-medium geometric model obtained by direct superposition. This research is of significance for the construction of dual-medium models and for numerical simulation in related fields.
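The direct-superposition idea can be sketched in a few lines. The following is a minimal illustration (in Python rather than the authors' Matlab), in which a smoothed random field stands in for true Perlin noise; the grid size, porosity threshold, fracture count and slab thickness are assumed values, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# 3-D porous-matrix field: smooth a random field to mimic Perlin-noise-like
# spatial correlation, then threshold it into pore (True) / solid (False).
n = 32
field = rng.random((n, n, n))
for axis in range(3):
    # crude smoothing by averaging with a shifted copy (stand-in for Perlin noise)
    field = 0.5 * (field + np.roll(field, 1, axis=axis))
pores = field > 0.55  # porosity is controlled by the threshold (assumed value)

# Random fracture network: each fracture is a planar slab |n.x - d| < h with
# random orientation and position.
x, y, z = np.meshgrid(np.arange(n), np.arange(n), np.arange(n), indexing="ij")
fractures = np.zeros((n, n, n), dtype=bool)
for _ in range(5):
    normal = rng.normal(size=3)
    normal /= np.linalg.norm(normal)
    d = rng.uniform(0, n)
    dist = np.abs(normal[0] * x + normal[1] * y + normal[2] * z - d)
    fractures |= dist < 0.8  # slab half-thickness (assumed value)

# Direct superposition: a voxel is void if it is a pore OR lies in a fracture.
dual = pores | fractures
print(dual.mean())  # void fraction of the dual-medium model
```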
Reformulating linear physics using Fredholm equations of the second kind is standard practice. One straightforward consequence is that the resulting integrals can be expanded (when the Neumann expansion converges) and probabilized, leading to path statistics and Monte Carlo estimations. An essential feature of these algorithms is that they also allow the estimation of propagators for all types of sources, including initial conditions. The resulting practice is a single Monte Carlo run, for one given set of sources, producing propagators that can later be used with any other set of sources for fast simulations, typically as parts of optimization, inversion, sensitivity analysis and command-control algorithms. The present paper illustrates how this practice can be extended to problems involving several interacting physics, provided that their coupling occurs only at the boundary of the system or at interfaces between sub-parts, and may itself be given the form of a Fredholm equation of the second kind. A full practical implementation is described as part of the Stardis code, with the example of heat transfer via the coupling of radiation, reaction-diffusion and convection, as typically expected in the multidisciplinary context of urban climate modeling. In addition, we show how recent advances in computer graphics indicate that these algorithms can be made numerically extremely efficient when facing large CAD geometries: computing the propagator becomes strictly independent of the geometry refinement, i.e. it is identical whatever the number of triangles and tetrahedra used to discretize the surface and volume descriptions. To the best of our knowledge, this is the first report of propagator computations that remain practical for coupled physics in large CAD geometries.
Program Title: Stardis 0.7.2 (built on stardis-solver 0.12.3)
CPC Library link to program files: https://doi.org/10.17632/k76zrx4n6b.1
Developer's repository link: https://www.meso-star.com/projects/stardis/stardis.html
Licensing provisions: GPLv3
Programming language: ANSI C and Python
Nature of problem: Estimating temperatures in coupled heat transfer systems involving large CAD and/or large numbers of spatially distributed sources.
Solution method: Stardis uses the Monte Carlo method. Alongside each temperature estimate, it constructs a propagator for each of the energy sources within the system: initial temperature, temperature boundary conditions, volume powers and surface fluxes. The propagator can be stored for later use outside the Monte Carlo code.
Additional comments including restrictions and unusual features: The Stardis estimates of propagators are only reliable when radiative transfer can be linearized around a reference temperature. Stardis can deal with the nonlinearity of radiation when computing temperatures, but then nothing can be interpreted as a meaningful propagator for external use.
Python scripts were used to execute the different configurations, post-process the results and generate the different graphs.
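The reusable-propagator idea can be illustrated with a deliberately minimal example that is not Stardis' algorithm: a one-dimensional steady-conduction random walk on a grid, whose boundary-hit frequencies form the propagator. Once stored, any set of boundary temperatures can be applied without re-running the Monte Carlo:

```python
import random

random.seed(1)

# Toy 1-D steady conduction on nodes 0..N with fixed-temperature boundaries.
# A walker started at the probe node moves left/right with equal probability
# until it reaches a boundary; the hit frequencies are the propagator weights.
N = 10
probe = 3
samples = 20000
hits = {0: 0, N: 0}
for _ in range(samples):
    pos = probe
    while 0 < pos < N:
        pos += random.choice((-1, 1))
    hits[pos] += 1

# The propagator (weights) is independent of the boundary temperatures...
w_left = hits[0] / samples
w_right = hits[N] / samples

# ...so one Monte Carlo run serves any set of sources afterwards:
def temperature(T_left, T_right):
    return w_left * T_left + w_right * T_right

print(temperature(0.0, 100.0))  # ≈ 100 * probe / N = 30 (gambler's ruin)
```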
The hybrid intelligent reflecting surface (IRS) architecture is a novel technology that leverages the advantages of both passive and active IRS: the passive IRS offers a large aperture, while the active IRS provides additional power amplification. Prior studies have shown that the optimal performance of IRS-assisted wireless networks is achieved when the passive IRS is deployed near the transceivers and the active IRS near the receiver, assuming transceivers of limited height. However, most prior works on hybrid IRS blindly adopted this assumption in the IRS association policy, which essentially becomes a partial selection strategy that offers analytical simplicity at the cost of sub-optimal performance. This limitation motivated us to find the globally optimal deployment strategy for all types of IRS. To this end, we first employ geometric models for the integrated path-loss distance (known as the Cassini oval and the ellipse for product- and sum-distance path-loss laws, respectively) and use them to determine the optimal locations of the hybrid IRS. Then, we design a novel opportunistic association policy for the hybrid IRS based on the integrated path-loss model. Finally, we validate the proposed methods through simulations and show that they significantly outperform the conventional nearest-association policy, especially for hybrid and active IRS.
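The distinction between the two path-loss geometries can be sketched on a single axis. In this illustrative one-dimensional example (path-loss exponents and amplification gains omitted, all values assumed), the product-distance metric relevant to passive IRS is minimized at the endpoints, i.e. next to the transceivers, while the sum-distance metric is constant along the Tx-Rx segment:

```python
import numpy as np

# Tx at x = 0, Rx at x = D on a line; candidate IRS positions in between.
# Illustrative values; path-loss exponents and active amplification omitted.
D = 100.0
xs = np.linspace(1.0, D - 1.0, 99)  # candidate positions (1 m grid, assumed)
d1, d2 = xs, D - xs

product = d1 * d2  # passive IRS: cascaded (product-distance) law; in 2-D its
                   # iso-loss contours are Cassini ovals with Tx/Rx as foci
total = d1 + d2    # active IRS: sum-distance law; in 2-D its iso-loss
                   # contours are ellipses with Tx/Rx as foci

best_passive = xs[np.argmin(product)]
print(best_passive)  # an endpoint: the passive IRS is best near a transceiver
```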
Fate decisions in developing tissues involve cells transitioning between discrete cell states, each defined by a distinct gene expression profile. The Waddington landscape, in which the development of a cell is viewed as a ball rolling through a valley-filled terrain, is an appealing way to describe differentiation. To construct and validate accurate landscapes, quantitative methods based on experimental data are necessary. We combined principled statistical methods with a framework based on catastrophe theory and approximate Bayesian computation to formulate a quantitative dynamical landscape that accurately predicts the cell fate outcomes of pluripotent stem cells exposed to different combinations of signaling factors. Analysis of the landscape revealed two distinct ways in which cells make a binary choice between two fates. We suggest that these represent archetypal designs for developmental decisions. The approach is broadly applicable to the quantitative analysis of differentiation and to determining the logic of developmental decisions.
•Quantified effect of signaling on fate decisions in an in vitro differentiation system
•Constructed a Waddingtonian-like dynamical landscape model from the quantitative data
•Identified two fundamentally distinct types of binary cell fate decisions
•Landscape recapitulated experimental data and predicted new experimental outcomes
Fate decisions in developing tissues involve cells transitioning between discrete cell states. We developed an approach to construct a dynamical landscape from quantitative gene expression data, in which the development of a cell is represented by a trajectory through the landscape. Applying it to pluripotent stem cells exposed to different combinations of signaling factors accurately predicted cell fate outcomes. This revealed two distinct architectures by which cells make a binary choice between two fates.
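As a schematic of such a landscape (not the fitted model of the paper), the cusp normal form from catastrophe theory already exhibits a binary decision: for one sign of the control parameter, the potential has two valleys, i.e. two available fates. Parameter values below are illustrative:

```python
import numpy as np

# Cusp normal form V(x) = x^4/4 + a*x^2/2 + b*x, a standard catastrophe-theory
# potential for a one-dimensional binary decision; (a, b) play the role of
# signaling parameters that tilt or merge the two valleys.
def V(x, a, b):
    return x**4 / 4.0 + a * x**2 / 2.0 + b * x

x = np.linspace(-2.0, 2.0, 2001)
v = V(x, -1.0, 0.0)  # a < 0, no tilt: two symmetric valleys (two fates)

# locate discrete local minima of the sampled potential
is_min = np.r_[True, v[1:] < v[:-1]] & np.r_[v[:-1] < v[1:], True]
is_min[0] = is_min[-1] = False  # ignore the domain endpoints
valleys = x[is_min]
print(valleys)  # two minima, near x = -1 and x = +1
```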
•Three generating principles are proposed for the hub tooth surface of crown gear coupling.
•The three corresponding finite element mesh models are built.
•Loaded tooth contact analysis for crown gear coupling with misalignment is developed.
•The geometries of these models are compared with each other.
•The effect of misalignment on contact analysis is discussed for each model.
Three different generating principles for the hub, an important component of the crown gear coupling, are proposed in this paper to generate the corresponding models. Models 1 and 2 use prescribed geometries: in model 1, the profile in every plane containing the center of the displacement circle is the same as that in the middle section; in model 2, the surface along the rotation axis is generated by a positive continuous modification. Without such a prescribed geometry, model 3 is obtained by simulating the actual machining process with a form grinding wheel, based on meshing theory. Finite element models of these three variants are then built to investigate the load distributions along the hub tooth surface. Finally, the geometry, tooth contact analysis results and load distributions of these models are compared with an example. The results show that the surface generated by model 1 is the same as that obtained by model 3, but different from that obtained by model 2.
Geometric model fitting is a typical chicken-and-egg problem: data points should be clustered based on geometric proximity to models whose unknown parameters must be estimated at the same time. Most existing methods, including generalizations of RANSAC, greedily search for models with the most inliers (within a threshold), ignoring the overall classification of points. We formulate geometric multi-model fitting as an optimal labeling problem with a global energy function balancing geometric errors and regularity of inlier clusters. Regularization based on spatial coherence (on some near-neighbor graph) and/or label costs is NP-hard. Standard combinatorial algorithms with guaranteed approximation bounds (e.g. α-expansion) can minimize such regularization energies over a finite set of labels, but they are not directly applicable to a continuum of labels, e.g. in line fitting. Our proposed approach (PEaRL) combines model sampling from data points, as in RANSAC, with iterative re-estimation of inliers and model parameters based on a global regularization functional. This technique efficiently explores the continuum of labels in the context of energy minimization. In practice, PEaRL converges to a good-quality local minimum of the energy, automatically selecting a small number of models that best explain the whole data set. Our tests demonstrate that our energy-based approach significantly improves the current state of the art in geometric model fitting, currently dominated by various greedy generalizations of RANSAC.
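A much-simplified sketch of the sample-then-re-estimate loop is given below; it omits the spatial-coherence term and the α-expansion optimization that PEaRL actually relies on, replacing them with a crude support-based label cost and a merge of near-duplicate re-fits, so it should be read as a schematic of the idea, not the algorithm. All data and thresholds are assumed:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: 30 points near y = x and 30 near y = 10 - x.
x = rng.uniform(0.0, 10.0, 60)
P = np.vstack([
    np.column_stack([x[:30], x[:30] + rng.normal(0, 0.05, 30)]),
    np.column_stack([x[30:], 10.0 - x[30:] + rng.normal(0, 0.05, 30)]),
])

def fit_line(Q):
    # least-squares fit y = a*x + b
    a, b = np.polyfit(Q[:, 0], Q[:, 1], 1)
    return a, b

def residuals(model, Q):
    a, b = model
    return np.abs(Q[:, 1] - (a * Q[:, 0] + b))

# "Propose": sample candidate lines from random point pairs, as in RANSAC.
models = []
while len(models) < 30:
    i, j = rng.choice(len(P), 2, replace=False)
    if abs(P[i, 0] - P[j, 0]) > 1e-6:
        a = (P[i, 1] - P[j, 1]) / (P[i, 0] - P[j, 0])
        models.append((a, P[i, 1] - a * P[i, 0]))

# "Re-estimate": alternate point labeling and model re-fitting; models whose
# support cannot pay a label cost die, and near-duplicate re-fits are merged.
for _ in range(5):
    R = np.array([residuals(m, P) for m in models])
    labels = R.argmin(axis=0)
    refit = {}
    for k in set(labels.tolist()):
        Q = P[labels == k]
        if len(Q) >= 3:  # crude support-based label cost (assumed value)
            a, b = fit_line(Q)
            refit[(round(a, 1), round(b, 1))] = (a, b)  # merge near-duplicates
    models = list(refit.values()) or models

print(len(models), "surviving models")
```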
Polyimide (PI) nanofibrous aerogels (NFAs) have garnered significant attention for their exceptional mechanical and thermal properties, making them promising materials for heat insulation applications. However, there is a lack of comprehensive research on modeling and predicting heat transfer in PI NFAs. Therefore, three modified unit cells with the features of the PI NFA backbone were constructed for finite element simulation, and the results were compared with experimental data from the available literature. The cubic unit cell fitted the experimental values well when the density was lower than 50 kg/m³, while the Weaire–Phelan unit cell demonstrated good agreement when the density was higher than 50 kg/m³. A series of parametric studies indicated that the thermal conductivity is proportional to the nanofiber diameter, whereas it is inversely proportional to the pore size and porosity. Our research provides novel insights into the selection of parameters for the industrial manufacturing of thermal insulation materials in aerospace, energy, protection, etc.
In this paper, a novel extended space-alternating generalized expectation-maximization (SAGE) algorithm is proposed, providing joint estimation of propagation-channel multipath components (MPCs) and scatterer localization under a spherical-wavefront multipath model. Two geometry-based models are employed to estimate the first- and last-hop scatterers under different bouncing orders. The performance of the proposed algorithm, called geometry-aided SAGE (GA-SAGE), is illustrated by means of the Cramér-Rao lower bound derived for the parameter estimates and the root-mean-square estimation errors (RMSEEs) obtained through Monte Carlo simulations, which show its applicability to both near- and far-field estimation. Finally, GA-SAGE is applied to processing experimental data obtained from measurements in an indoor office environment using a single-input multiple-output (SIMO) configuration. The results show that the proposed method outperforms the traditional SAGE algorithm in terms of MPC estimation accuracy and convergence rate, with the extra capability of localizing the scatterers involved in propagation paths of different bouncing orders. This makes GA-SAGE useful in environment-sensing applications and an efficient and effective tool for the development of geometry-based stochastic channel models capable of reproducing channel realizations with spatial consistency or spatial non-stationarity, which are the basis for the design of transmission technologies using extremely large antenna arrays (ELAA) for 5G and beyond.
Abstract
This article takes a cognitive approach to natural concepts. The aim is to introduce criteria that are evaluated with respect to how they support the cognitive economy of humans when using concepts in reasoning and communication. I first present the theory of conceptual spaces as a tool for expressing the criteria. Then I introduce the central idea that natural concepts correspond to convex regions of a conceptual space. I argue that this criterion has far-reaching consequences for natural concepts. Partly following earlier work, I present some other criteria that further delimit the class of natural concepts. One of these is coherence, which does not seem to have been discussed previously. Finally, I show that convexity and the other criteria make it possible to ensure that people mean the same thing when they communicate using concepts. Apart from its philosophical interest, the analysis presented in the article is relevant for conceptual engineering in artificial systems that work with concepts.
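The convexity criterion can be made concrete with a toy membership test: sample the segment between two instances of a concept and check that every intermediate point is also an instance. Both concept regions below are hypothetical illustrations in an assumed two-dimensional conceptual space:

```python
import numpy as np

def in_concept_round(p):
    # hypothetical concept region: a disc, which is convex
    return np.linalg.norm(p - np.array([1.0, 1.0])) <= 1.0

def in_concept_gerrymandered(p):
    # hypothetical region: union of two separated discs, which is not convex
    return (np.linalg.norm(p) <= 0.4) or \
           (np.linalg.norm(p - np.array([3.0, 0.0])) <= 0.4)

def violates_convexity(member_fn, a, b, steps=50):
    # sample points between two members; any non-member between them
    # witnesses a violation of the convexity criterion
    a, b = np.asarray(a, float), np.asarray(b, float)
    assert member_fn(a) and member_fn(b)
    for t in np.linspace(0.0, 1.0, steps):
        if not member_fn((1 - t) * a + t * b):
            return True
    return False

print(violates_convexity(in_concept_round, [0.5, 1.0], [1.5, 1.0]))          # False
print(violates_convexity(in_concept_gerrymandered, [0.0, 0.0], [3.0, 0.0]))  # True
```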
Abstract
Background: Pelvic bleeding can lead to a circulatory (C) problem. The widely used whole-body CT trauma scan performed during resuscitation-room treatment can indicate the bleeding source (arterial vs. venous/osseous), but determining the volume of an intrapelvic hematoma by planimetry is laborious and cannot serve as a rapid estimate of blood loss. Simplified measurement procedures based on geometric models are intended to estimate the extent of a bleeding complication.
Objective: To examine whether simplified geometric models allow the volume of an intrapelvic hematoma in Tile B and C fractures to be determined quickly and reliably during resuscitation-room diagnostics.
Materials and methods: 42 intrapelvic hemorrhages after Tile B and C pelvic fractures (n = 8 B, 34 C) were retrospectively selected at 2 trauma centers in Germany (66% male, 33% female; mean age 42 ± 20 years), and the CT examinations performed as part of the initial whole-body CT were analyzed in detail. Spiral CT datasets with 1–5 mm slice thickness were available for evaluation. The hematoma volume was calculated by marking the bleeding areas (ROI) in the individual slice images. For comparison, the volumes were calculated with simplified geometric figures (cuboid, ellipsoid, ellipsoid modification according to Kothari). A correction factor was determined from the deviation of the geometric-model volumes from the true hematoma size determined by planimetry.
Results and discussion: The median planimetric bleeding volume in the total collective was 1710 ml (10–7152 ml). Relevant pelvic hemorrhages with a total volume > 100 ml were present in 25 patients. In 42.86% of cases the cuboid model overestimated the volume, and in 13 cases (30.95%) it clearly underestimated the planimetrically measured volume; this model was therefore excluded. For the ellipsoid model and the Kothari measurement method, an approximation to the planimetrically determined volume could be achieved with a correction factor calculated by multiple linear regression. Time-saving quantification of the hematoma volume by the modified ellipsoid calculation according to Kothari allows the extent of pelvic bleeding after trauma to be assessed when there are signs of a C problem. As a simple, reproducible metric tool, this measurement procedure could be embedded in resuscitation-room diagnostics in the future.
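The three geometric estimators compared above (cuboid, ellipsoid, and the modified ellipsoid calculation according to Kothari, i.e. the ABC/2 rule) reduce to one-line formulas over the three greatest hematoma extents. The values below are illustrative, not patient data, and the regression-derived correction factor from the study is not reproduced:

```python
import math

# Greatest hematoma extents (cm) read from CT in three orthogonal directions.
# Illustrative values only.
A, B, C = 12.0, 9.0, 8.0

v_cuboid = A * B * C                   # cuboid model, tends to overestimate
v_ellipsoid = math.pi / 6 * A * B * C  # exact ellipsoid with axes A, B, C
v_kothari = A * B * C / 2              # Kothari's ABC/2 rule (pi/6 ~ 1/2)

print(round(v_cuboid), round(v_ellipsoid), round(v_kothari))
```

Note how the Kothari formula is simply the ellipsoid volume with π/6 ≈ 0.524 rounded to 1/2, which is what makes it fast enough for bedside use.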