This paper explores what a Roman imperial coin hoard can reveal about, on the one hand, the constant human tendency to make mistakes at work and, on the other, the fact that any new hoard may yield new coin types or variants. The Potaissa III hoard (Roman Dacia; present-day Turda, Romania) contains several peculiar coins, discussed here in detail, which reveal tricky monetary policy, engraving mistakes, engravers' skill, new coin types, new coin-type variants, and post-minting human handling of the coins.
This paper publishes a find that is rare for the northern Black Sea region: half of an imitation of a Roman Republican denarius serratus, recovered at Olbia Pontica in 2003 from the cultural strata during excavations of the «L-1» area, the central part of the Citadel on the Upper City plateau. The item is presumed to be a barbarian imitation of a Roman denarius serratus of 81 BC of Geto-Dacian minting. The find at Olbia is most likely connected with the historical events described by Dio Chrysostom: the Getae attack on the polis that took place around the mid-1st century BC.
Over time, many minted coins were withdrawn from circulation and replaced with new ones. The returned, obsolete metal coins were melted down so that the alloy could be reused for other purposes. Between withdrawal and melting, some of the coins were canceled by destroying their original shape and dimensions with suitable tools. The first part of this paper presents insights into the canceling method used on old Romanian nickel coins, together with several examples. The introduction also includes a literature review on coin manufacturing, covering metal behavior under striking load, aspects of 3D modeling and FEM analysis, and the explanation of some striking errors. The main purpose of the paper is to study the particularities of the canceling methods applied to coins, carried out on relatively valuable collection pieces. In the second part of the paper, a 3D model is built for the canceling dies and the coin. The assembled models are then introduced, one for each canceling case, consisting of the obverse and reverse canceling dies with the coin between them. For each model, a finite element analysis is carried out under different initial conditions. The final part of the paper presents the analysis results, the discussion, and the conclusions.
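As background for the FEM analysis mentioned above, the sketch below shows the core finite element workflow in its simplest form: assemble a stiffness matrix from elements, apply a load, and solve for displacements. It is a generic 1D linear-elastic example with made-up material values, not the paper's 3D die-coin contact model.

```python
# Minimal, generic FEM sketch (illustrative only, not the paper's model):
# a 1D elastic bar under an axial load, solved with 2-node bar elements.
import numpy as np

E = 200e9      # Young's modulus (steel), Pa -- illustrative value
A = 1e-4       # cross-sectional area, m^2
L = 0.1        # bar length, m
n_el = 10      # number of elements
le = L / n_el  # element length

# Assemble the global stiffness matrix from identical element matrices.
K = np.zeros((n_el + 1, n_el + 1))
ke = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])
for e in range(n_el):
    K[e:e + 2, e:e + 2] += ke

# Boundary conditions: node 0 fixed, axial force applied at the free end.
F = np.zeros(n_el + 1)
F[-1] = 50e3  # 50 kN "striking" load -- illustrative only

u = np.zeros(n_el + 1)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])  # solve the reduced system
print("tip displacement [mm]:", u[-1] * 1e3)
```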
We study entropy as a measure of randomness in a sequential clinical trial. For any randomization design, we consider the well-known sequence of conditional treatment assignment probabilities, Pn, which complements the random allocation proportion. We then use Pn to formulate and prove a statement about the conditions under which the asymptotic mean entropy of a randomization design achieves its optimal value. We compare response-adaptive and non-response-adaptive randomization designs with respect to the asymptotic mean entropy, connecting the variance of Pn to the asymptotic mean entropy under normality. Finally, we explore whether both Pn and the allocation proportion may have zero asymptotic variance.
•Entropy is proposed as a measure of the uncertainty investigators have about the next treatment in a sequential clinical trial.
•The sequence Pn is used to prove two results about the asymptotic mean entropy of sequential clinical trial allocation schemes.
•The first concerns the optimal entropy, and the second connects entropy to the variance of Pn under normality.
•The two results are used to evaluate existing sequential allocation schemes with regard to entropy.
•Recommendations for optimizing entropy as a barrier to selection bias in sequential clinical trials are discussed.
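As a quick illustration of the entropy measure discussed above (our own sketch, not taken from the paper), the per-assignment entropy in a two-arm trial is the binary entropy of the conditional probability Pn: it is maximal (1 bit) when Pn = 1/2 and zero when the next assignment is deterministic.

```python
# Hedged illustration (not from the paper): binary entropy of the conditional
# assignment probability P_n for a two-arm trial.
import numpy as np

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p); maximal (1 bit) at p = 1/2."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))

print(binary_entropy(0.5))   # complete randomization: 1 bit per assignment
print(binary_entropy(2/3))   # a biased step, e.g. under a biased-coin design: ~0.918 bits
print(binary_entropy(1.0))   # a forced (deterministic) assignment: ~0 bits
```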
Inference Under Covariate-Adaptive Randomization
Bugni, Federico A.; Canay, Ivan A.; Shaikh, Azeem M.
Journal of the American Statistical Association, 01/2018, Volume 113, Issue 524. Journal article, peer-reviewed, open access.
This article studies inference for the average treatment effect in randomized controlled trials with covariate-adaptive randomization. Here, by covariate-adaptive randomization, we mean randomization schemes that first stratify according to baseline covariates and then assign treatment status so as to achieve "balance" within each stratum. Our main requirement is that the randomization scheme assigns treatment status within each stratum so that the fraction of units being assigned to treatment within each stratum has a well-behaved distribution centered around a proportion π as the sample size tends to infinity. Such schemes include, for example, Efron's biased-coin design and stratified block randomization. When testing the null hypothesis that the average treatment effect equals a prespecified value in such settings, we first show the usual two-sample t-test is conservative in the sense that it has limiting rejection probability under the null hypothesis no greater than, and typically strictly less than, the nominal level. We show, however, that a simple adjustment to the usual standard error of the two-sample t-test leads to a test that is exact in the sense that its limiting rejection probability under the null hypothesis equals the nominal level. Next, we consider the usual t-test (on the coefficient on treatment assignment) in a linear regression of outcomes on treatment assignment and indicators for each of the strata. We show that this test is exact for the important special case of randomization schemes with π = 1/2, but is otherwise conservative. We again provide a simple adjustment to the standard errors that yields an exact test more generally. Finally, we study the behavior of a modified version of a permutation test, which we refer to as the covariate-adaptive permutation test, that only permutes treatment status for units within the same stratum. When applied to the usual two-sample t-statistic, we show that this test is exact for randomization schemes with π = 1/2 that additionally achieve what we refer to as "strong balance." For randomization schemes with π ≠ 1/2, this test may have limiting rejection probability under the null hypothesis strictly greater than the nominal level. When applied to a suitably adjusted version of the two-sample t-statistic, however, we show that this test is exact for all randomization schemes that achieve "strong balance," including those with π ≠ 1/2. A simulation study confirms the practical relevance of our theoretical results. We conclude with recommendations for empirical practice and an empirical illustration. Supplementary materials for this article are available online.
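For concreteness, below is a minimal sketch (our own, with an assumed bias parameter p = 2/3, Efron's classic choice) of how Efron's biased-coin design, one of the schemes named in the abstract, assigns treatment within a single stratum.

```python
# Minimal sketch (our illustration) of Efron's biased-coin design applied
# within one stratum of a covariate-adaptive randomization scheme.
import random

def efron_assign(n_treated, n_control, p=2/3):
    """Return 1 (treatment) or 0 (control) given current counts in the stratum."""
    if n_treated == n_control:     # balanced so far: toss a fair coin
        prob_treat = 0.5
    elif n_treated < n_control:    # under-represented arm favoured with probability p
        prob_treat = p
    else:
        prob_treat = 1 - p
    return 1 if random.random() < prob_treat else 0

# Example: sequentially assign 20 units within one stratum.
counts = [0, 0]  # [controls, treated]
for _ in range(20):
    a = efron_assign(counts[1], counts[0])
    counts[a] += 1
print("controls, treated:", counts)
```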
In this paper, we discuss coin-weighing problems that use a 5-way scale, which has five possible outcomes: MUCH LESS, LESS, EQUAL, MORE, and MUCH MORE. The 5-way scale provides more information than the regular 3-way scale. We study the problem of finding two fake coins in a pile of identical-looking coins in a minimal number of weighings using a 5-way scale. We discuss similarities and differences between the 5-way and 3-way scales. We introduce a strategy for a 5-way scale that can find both counterfeit coins among 2^k coins in k + 1 weighings, which is better than any strategy for a 3-way scale.
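A simple counting argument (a standard information-theoretic bound, not the paper's strategy) already suggests why a 5-way scale can do better: each weighing has 5 rather than 3 possible outcomes, so fewer weighings are needed to separate the C(n, 2) possible positions of the two fake coins.

```python
# Standard counting lower bound (illustrative, not the paper's strategy):
# w weighings on an s-way scale give at most s**w outcome sequences, which
# must cover all C(n, 2) possible pairs of fake coins.
from math import comb, ceil, log

def lower_bound(n, outcomes):
    return ceil(log(comb(n, 2), outcomes))

for n in (16, 64, 256):
    print(n, "coins -> 3-way scale needs >=", lower_bound(n, 3),
          "weighings; 5-way scale needs >=", lower_bound(n, 5))
```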
The quantum walk (QW) is the quantum analog of the random walk. QWs are an integral part of the development of numerous quantum algorithms, so an in-depth understanding of QWs helps us grasp those algorithms. We revisit the one-dimensional discrete-time QW and discuss its basic steps in detail, incorporating the most general coin operator, constant in both space and time, and a localized initial state, using numerical modeling. We investigate the impact of each parameter of the general coin operator on the probability distribution of the quantum walker and show that by tuning the parameters of the general coin one can regulate that distribution. We provide an algorithm for the one-dimensional quantum walk driven by the general coin operator. The study of general coin operators also includes the popular coins: Hadamard, Grover, and Fourier.
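As a rough illustration of the kind of numerical model described above (our own sketch; the coin parametrization is one common choice, not necessarily the paper's), a one-dimensional discrete-time quantum walk alternates a coin operation on the internal state with a state-dependent shift:

```python
# Hedged sketch (not the paper's code) of a 1D discrete-time quantum walk with
# a parametrized unitary coin and a localized, symmetric initial state.
import numpy as np

steps = 100
n_pos = 2 * steps + 1                      # positions -steps .. +steps
psi = np.zeros((n_pos, 2), dtype=complex)  # amplitude[position, coin_state]
psi[steps] = [1 / np.sqrt(2), 1j / np.sqrt(2)]   # walker localized at the origin

# Coin U(theta, xi, zeta); xi = zeta = 0 and theta = pi/4 recovers the Hadamard
# coin, while other choices skew or narrow the walker's distribution.
theta, xi, zeta = np.pi / 4, 0.0, 0.0
coin = np.array([[np.exp(1j * xi) * np.cos(theta),    np.exp(1j * zeta) * np.sin(theta)],
                 [np.exp(-1j * zeta) * np.sin(theta), -np.exp(-1j * xi) * np.cos(theta)]])

for _ in range(steps):
    psi = psi @ coin.T                     # apply the coin at every position
    shifted = np.zeros_like(psi)
    shifted[1:, 0] = psi[:-1, 0]           # coin state |0> shifts right
    shifted[:-1, 1] = psi[1:, 1]           # coin state |1> shifts left
    psi = shifted

prob = np.sum(np.abs(psi) ** 2, axis=1)
print("total probability:", round(prob.sum(), 6))      # sanity check, ~1.0
print("most probable position:", int(np.argmax(prob)) - steps)
```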
In an initial coin offering (ICO), new ventures raise capital by selling tokens to a crowd of investors. Often, this token is a cryptocurrency, a digital medium of value exchange based on distributed ledger technology. Both the number of ICOs and the amount of capital raised have exploded since 2017. Despite attracting significant attention from ventures, investors, and policy makers, little is known about the dynamics of ICOs. This initial study therefore assesses the determinants of the amount raised in 423 ICOs. Drawing on signaling theory, the study explores the role of signaling ventures' technological capabilities in ICOs. The results show that technical white papers and high-quality source code increase the amount raised, while patents are not associated with increased funding. Exploring further determinants of the amount raised, the results indicate that some of the underlying mechanisms in ICOs resemble those found in prior research on entrepreneurial finance, while others are unique to the ICO context. The study's implications are manifold and are discussed in detail. Importantly, the results enable investors to better understand crucial determinants of the amount raised (e.g., technical white papers, source code quality, token supply, the Ethereum standard). This reduces the considerable uncertainty investors face when investing in ICOs and enables more informed decision-making.
•Empirical study of 423 ICOs carried out between 2016 and 2018.
•Draws on signaling theory to explore the role of technological capabilities in ICOs.
•Technical white papers and high-quality source code increase the amount raised.
•Patents do not seem to have a significant influence on the amount raised.
•Identifies various determinants of the amount raised unique to the context of ICOs.