Our mind seems to be formatted to imagine a "motionless" material particle (or, more accurately, a region of spacetime at rest relative to that material particle) fixed in space as an absolute reference, and to consider space and time as independent universal entities, as we perceive them when we measure them with a ruler and a clock, respectively. Instead, let us first hypothesize that singularities are the absolute reference for the universe, that the speed of light in vacuum (c = 299 792 458 m/s) is universal and absolute, and that deceleration/acceleration is the "driving force" that defines spacetime. Frames of reference anchored on material particles/objects may be grouped as having zero acceleration (inertial frames), nonzero constant acceleration (pseudoinertial frames), or nonzero variable acceleration (noninertial proper frames), so that a noninertial proper frame may be considered as composed of an infinite number of pseudoinertial frames. The universality of c is ensured by its absolute constancy when both the observer and a moving beam of photons are in frames of reference differing neither in speed nor in acceleration, as is the case within inertial or pseudoinertial frames, respectively. However, its value (like that of any other speed) is measured as different when the observer is in one inertial or pseudoinertial frame and the beam of photons is moving in another, the two frames thus differing in Lorentz factor (γ) or in acceleration, respectively. Speed, velocity, and acceleration (including the effect of gravity on a material particle/object) are all defined in terms of space and time. However, space and time do not exist as independent universal entities; they are integral components of one and the same inextricable physical entity, spacetime. Assuming spacetime forms a continuum, anything affecting one is expected to affect the other.
This has been widely recognized for gravity, but not for speed, something which becomes clearer if we assume that it should in fact be acceleration/deceleration that moves a material particle from one frame of reference into another. Several conditions, grouped here as gravity-dependent and non-gravity-dependent, act individually or in combination to subject a matter particle to acceleration. Therefore, when perceived from the Earth (or any other place at v < 299 792 458 m/s), any departure from c caused by deceleration should expand space and contract time. Any further deceleration or subsequent acceleration (for as long as v < c) will alter spacetime. Hence, in astrophysics, deceleration rather than acceleration should be the main driving, or direct, physical quantity. It is proposed that any deceleration/acceleration distorts spacetime. A method to combine the effects of the several conditions that define the acceleration status of a material particle in spacetime is proposed. The higher v/c and/or the gravity-dependent acceleration are, the greater time dilation and space contraction will be. At the singularity, time dilation and space contraction are maximal. The actual "position" of a material particle in spacetime may therefore be defined by the sum of all changes in deceleration/acceleration it has undergone (due to changes in gravity-dependent and non-gravity-dependent accelerations) since the singularity. The impossibility of exceeding the speed of light in vacuum is discussed and tentatively demonstrated. In line with the present hypothesis, both time and space are relative, but only partially so, due to the limits imposed by c, opening the possibility for the concept of spacetime partial relativity. According to our proposal, spacetime is defined by a special form of deceleration, which we have termed relativistic or Lorentz deceleration: a deceleration-dependent elastic property of any matter particle moving at v < c.
It expands/contracts precisely because of the effect of deceleration/acceleration on matter particles moving at v < c. According to these views, defining a singularity should depend on the perspective. When perceived from the Earth or any other place at v < c, a singularity is a spaceless point of infinite density, associated with the speed of light, where a second lasts forever. However, when "perceived" from within a singularity, space is there and time flows. Based on this work, a singularity may be defined in greater detail as the embryonic state of the universe (or of a part of it), associated with two universal absolute constants: the speed of photons (c = 299 792 458 m/s) and the quantity of spacetime. It also corresponds to a standard spacetime condition, in which time is dilated and space contracted to maximum values. The Big Bang is interpreted not as an explosion or bang, but rather as an infinitesimal deceleration from the singularity, triggering the initial exponential space expansion and time contraction and leading to matter formation. Neither space nor time is created or destroyed; both have been there since the singularity but vary widely in contraction/expansion magnitude with each precise spacetime condition, as deceleration-dependent fluctuations take place in spacetime. Gravity, an inherent property of matter, and entropy are key players in the subsequent evolution of the universe. Uncountable successive and cumulative changes in deceleration undergone by material particles determine the precise conditions they occupy at each moment in spacetime, allowing the build-up of a gigantic and highly dynamic noninertial frame of reference, i.e., our universe. Individual observers at single points of the universe should all see light photons moving in vacuum at 299 792 458 m/s (and any material particle/object moving at v < 299 792 458 m/s) and perceive space and time identically, each within their own spacetime condition.
Differences arise only when one observer "looks" at other observers in distinct spacetime conditions. An attempt is made to interpret spacetime and light when we "look" at frames of reference distinct from our own, since this is precisely what we "see" when we observe the universe from the Earth. Cosmological models are typically based on several assumptions. The hypothesis formulated in the present article is no different, with a singularity as its major reference. We thus conceptualize a cosmological model that challenges some currently accepted views of the universe.
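The abstract's claims about time dilation growing with v/c, and a second "lasting forever" in the limit, rest on the standard Lorentz factor of special relativity. As a minimal sketch of that standard formula (not of the hypothesis itself), with function names of our own choosing:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def lorentz_factor(v: float) -> float:
    """Standard Lorentz factor gamma = 1 / sqrt(1 - v^2 / c^2), defined for |v| < c."""
    if abs(v) >= C:
        raise ValueError("Lorentz factor is defined only for |v| < c")
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def dilated_time(proper_time_s: float, v: float) -> float:
    """Coordinate time elapsed while a clock moving at speed v ticks off proper_time_s."""
    return proper_time_s * lorentz_factor(v)

# gamma = 1 at rest and grows without bound as v approaches c
print(lorentz_factor(0.0))               # 1.0
print(round(lorentz_factor(0.8 * C), 4)) # 1.6667
```

The divergence of `lorentz_factor` as v approaches c is the quantitative content behind the phrase "a second lasts forever" at the light-speed limit.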
FOB vs modified Irwin: What are we doing? Gauvin, David V.; Zimmermann, Zachary J.
Journal of pharmacological and toxicological methods,
05/2019, Volume:
97
Journal Article
Peer-reviewed
There is a general sentiment in the nonclinical safety assessment literature, and among the proponents of the International Council for Harmonization of Technical Requirements for Pharmaceuticals for Human Use (ICH), that the “Modified Irwin” and the Functional Observation Battery are distinct and unique assays for the nonclinical assessment of the central nervous system (CNS). We identify and defend the position that the Irwin screen was developed as an FOB, and that both terms refer to a single, unitary functional assay. Giving credit to one prominent contributor within any significant discipline of science for a specific assay, orientation, or theory may have an exclusionary influence on the merits of other prominent contributors within the same research arena. Scientific organizations, as well as journal and textbook editors, have attempted to unify the nomenclature used within scientific disciplines so that they conform to non-attributional, surname-free naming. For example, the Salk-Sabin immunization is simply referred to as the polio vaccine. The “Skinner box” is now the “operant chamber”, and “Pavlovian conditioning” is now “respondent conditioning”. In 1968, Samuel Irwin established an operational method of analysis for measuring drug effects in purpose-bred laboratory animals. We present and defend the view that the behavioral screening assay developed by Irwin is, for all intents and purposes, a functional observational battery (FOB). We take the position that, in standardizing nomenclature without “surnames”, the FOB is simply the contemporary name for the data collection system in use under the harmonized safety pharmacology guidelines.
Financial technology (FinTech) is widely recognised as important in addressing financial inclusion. However, limited research theorises how new entrants and incumbents work together in FinTech ecosystems to shape financial inclusion. We undertake a theory-generating case study with multilevel interacting organisations in Ghana, where, like many other African countries, the growth in FinTech has led to new opportunities for financial inclusion. We conceptualise three practices, as building blocks at the ecosystem level, through which incumbents and new entrants shape financial inclusion: (1) innovative and collaborative practices, (2) protectionist and equitable practices, and (3) legitimising and sustaining practices. We articulate a theoretical model that explains how the practices shape financial inclusion and propose three theoretical propositions of how financial inclusion in developing countries is being scaled and shaped in terms of actors, relationships, and practices.
A CFD study on the Irwin probe flows Brito, Pedro M.; Ferreira, Almerindo D.; Sousa, Antonio C.M.
Journal of wind engineering and industrial aerodynamics,
December 2021, Volume:
219
Journal Article
Peer-reviewed
Irwin probes are surface sensors commonly used to determine the shear stress in near-wall regions, by calibration of related pressure signals. These devices have been widely used in wind tunnel studies dealing with pedestrian-level wind conditions. In this work, three-dimensional Computational Fluid Dynamics (CFD) simulations are performed to characterize the complex flow field and pressure distribution developed around and within the Irwin probe, at three distinct low-Reynolds-number regimes. The emergence of coherent flow structures past the sensor is categorised, and the preventive spacing intervals necessary to mitigate mutual interference are numerically evaluated. The computations corroborate the original spacing recommendations by Irwin. For typical operating conditions, long streamwise tip vortices are observed in the wake of a single sensor. In addition, the predictions suggest the existence of a low-Reynolds-number threshold below which such structures are suppressed, resulting in diminished interference effects. Numerical results also indicate a nearly uniform axial pressure distribution and highlight noteworthy flow-field characteristics.
•Irwin probes are used to measure wind speed and shear stress close to surfaces.
•CFD methods are used to examine flow features inside and in the wake of the sensor.
•Mutual interference effects are quantified and tolerable spacing intervals reviewed.
•Inner-sensor pressure distribution is characterized in detail.
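The abstract notes that Irwin probes recover near-wall flow quantities "by calibration of related pressure signals". A form commonly used with Irwin-type sensors relates wind speed to the square root of the probe pressure difference, U = α + β·√Δp. The sketch below fits that relation to hypothetical calibration data (the numbers, and the linear-in-√Δp form as applied here, are illustrative assumptions, not values from this study):

```python
import numpy as np

# Hypothetical wind-tunnel calibration data: probe pressure differences (Pa)
# and reference near-surface wind speeds (m/s).
delta_p = np.array([2.0, 8.0, 18.0, 32.0, 50.0])
u_ref = np.array([2.1, 3.9, 5.8, 7.7, 9.6])

# Least-squares fit of U = alpha + beta * sqrt(dp), i.e. a straight line
# in the transformed variable sqrt(dp).
beta, alpha = np.polyfit(np.sqrt(delta_p), u_ref, 1)

def wind_speed(dp: float) -> float:
    """Estimate near-surface wind speed (m/s) from a probe pressure signal (Pa)."""
    return alpha + beta * np.sqrt(dp)
```

Once α and β are fixed by calibration, `wind_speed` converts each measured pressure signal directly into a pedestrian-level wind-speed estimate, which is how such probes are typically deployed in wind-tunnel arrays.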
In this work, we propose an extension of the Irwin-Hall distribution to obtain the distribution of the sum X + Y when the random variables X and Y are connected by a copula. Assuming a given copula for the random vector (X, Y), we then use the proposed distribution to study the perturbation of that copula. Theoretical results are illustrated by several examples.
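For orientation, the classical Irwin-Hall distribution is the distribution of the sum of n independent U(0,1) variables; the abstract's contribution is to relax that independence via a copula. The sketch below covers only the classical independent case, using the well-known closed-form CDF:

```python
import math
import random

def irwin_hall_sample(n: int, rng: random.Random) -> float:
    """One draw from the Irwin-Hall distribution: the sum of n iid U(0,1) variables."""
    return sum(rng.random() for _ in range(n))

def irwin_hall_cdf(x: float, n: int) -> float:
    """Closed-form Irwin-Hall CDF: (1/n!) * sum_{k=0}^{floor(x)} (-1)^k C(n,k) (x-k)^n."""
    if x <= 0.0:
        return 0.0
    if x >= n:
        return 1.0
    s = sum((-1) ** k * math.comb(n, k) * (x - k) ** n for k in range(int(x) + 1))
    return s / math.factorial(n)

# Monte Carlo check for n = 2 (triangular density on [0, 2]): P(X + Y <= 1) = 1/2
rng = random.Random(0)
n = 2
draws = [irwin_hall_sample(n, rng) for _ in range(100_000)]
emp = sum(d <= 1.0 for d in draws) / len(draws)
```

Replacing the independent `rng.random()` draws with draws coupled through a chosen copula is precisely the step the abstract's extension formalises.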
Summary
One of the fundamental but unsolved problems in the fracture mechanics of solid materials is how to describe the continuous and discontinuous deformation of a continuum in a unified manner. This paper aims to develop a novel nonlocal geometric fracture theory attempting to address this essential issue. Using the concepts of nonlocal calculus, a new nonlocal equation for the deformation of a continuum is obtained by minimizing a nonlocal Lagrangian. This ensures that the proposed nonlocal model can describe the continuous and discontinuous deformation of a continuum in a unified geometric manner, while remaining asymptotically compatible with the classical continuum theory. Inspired by phase-field fracture theory, a nonlocal geometric crack surface functional is proposed to approximate the Griffith fracture energy in a continuum. A distinguished feature of the proposed functional, however, is that it enables strong discontinuities to appear in the crack phase fields, which is difficult for phase-field fracture theory. After obtaining the nonlocal fracture energy, a nonlocal elastic energy is phenomenologically decomposed into volumetric and deviatoric parts, and a system of nonlocal geometric fracture equations is then derived to describe damage nucleation and crack initiation and propagation, covering the whole physical process of fracture in solid materials. A staggered discontinuous Galerkin method is developed to reach an accurate and stable numerical solution. Numerical examples demonstrate that the proposed theory captures the transition from continuous to discontinuous deformation of the continuum and accurately describes the topological evolution of cracks in a unified framework.
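The abstract does not state the paper's functional explicitly, but the phase-field construction it is inspired by approximates the Griffith fracture energy with a standard regularized crack surface functional (AT2-type form shown here for orientation; the paper's nonlocal functional differs in its details):

```latex
% Crack phase field d \in [0,1], regularization length \ell,
% critical energy release rate G_c, domain \Omega:
E_f(d) \;=\; G_c\,\Gamma_\ell(d)
       \;\approx\; G_c \int_\Omega \left( \frac{d^2}{2\ell}
       \;+\; \frac{\ell}{2}\,\lvert \nabla d \rvert^2 \right) \mathrm{d}V
```

As \(\ell \to 0\), \(\Gamma_\ell(d)\) converges to the sharp crack surface measure, which is why such functionals recover Griffith's energy in the limit; the paper's claimed advantage is permitting strong discontinuities in d itself, which the gradient term above ordinarily prevents.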