The United States has been a space power since its founding,
Gordon Fraser writes. The white stars on its flag reveal the dream
of continental elites that the former colonies might constitute a
"new ...constellation" in the firmament of nations. The streets and
avenues of its capital city were mapped in reference to celestial
observations. And as the nineteenth century unfolded, all efforts
to colonize the North American continent depended upon the science
of surveying, or mapping with reference to celestial movement.
Through its built environment, cultural mythology, and exercise of
military power, the United States has always treated the cosmos as
a territory available for exploitation.
In Star Territory, Fraser explores how, from its
beginning, agents of the state, including President John Quincy Adams,
Admiral Charles Henry Davis, and astronomer Maria Mitchell,
participated in large-scale efforts to map the nation onto cosmic
space. Through almanacs, maps, and star charts, practical
information and exceptionalist mythologies were transmitted to the
nation's soldiers, scientists, and citizens.
This is, however, only one part of the story Fraser tells. From
the country's first Black surveyors, seamen, and publishers to the
elected officials of the Cherokee Nation and Hawaiian resistance
leaders, other actors established alternative cosmic communities.
These Black and Indigenous astronomers, prophets, and printers
offered ways of understanding the heavens that broke from the work
of the U.S. officials for whom the universe was merely measurable
and exploitable.
Today, NASA administrators advocate public-private partnerships
for the development of space commerce while the military seeks to
control strategic regions above the atmosphere. If observers
imagine that these developments are the direct offshoots of a
mid-twentieth-century space race, Fraser brilliantly demonstrates
otherwise. The United States' efforts to exploit the cosmos, as
well as the resistance to these efforts, have a history that starts
nearly two centuries before the Gemini and Apollo missions of the
1960s.
Evaluating and Improving Fault Localization. Pearson, Spencer; Campos, Jose; Just, Rene ...
2017 IEEE/ACM 39th International Conference on Software Engineering (ICSE), 05/2017.
Conference Proceeding.
Most fault localization techniques take as input a faulty program and produce as output a ranked list of suspicious code locations at which the program may be defective. When researchers propose a new fault localization technique, they typically evaluate it on programs with known faults. The technique is scored based on where in its output list the defective code appears. This enables the comparison of multiple fault localization techniques to determine which one is better. Previous research has evaluated fault localization techniques using artificial faults, generated either by mutation tools or manually. In other words, previous research has determined which fault localization techniques are best at finding artificial faults. However, it is not known which fault localization techniques are best at finding real faults. It is not obvious that the answer is the same, given previous work showing that artificial faults have both similarities to and differences from real faults. We performed a replication study to evaluate 10 claims in the literature that compared fault localization techniques (from the spectrum-based and mutation-based families). We used 2995 artificial faults in 6 real-world programs. Our results support 7 of the previous claims as statistically significant, but only 3 as having non-negligible effect sizes. Then, we evaluated the same 10 claims, using 310 real faults from the 6 programs. Every previous result was refuted or was statistically and practically insignificant. Our experiments show that artificial faults are not useful for predicting which fault localization techniques perform best on real faults. In light of these results, we identified a design space that includes many previously studied fault localization techniques as well as hundreds of new techniques. We experimentally determined which factors in the design space are most important, using an overall set of 395 real faults. Then, we extended this design space with new techniques. Several of our novel techniques outperform all existing techniques, notably in terms of ranking defective code in the top-5 or top-10 reports.
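The spectrum-based family mentioned in this abstract scores each code element from per-test coverage and pass/fail outcomes. As a minimal sketch of the idea, here is the Ochiai formula, one of the classic spectrum-based techniques, applied to an invented coverage matrix (the tests, line numbers, and fault location are hypothetical, not taken from the paper's subjects):

```python
import math

# Per-test coverage: which lines each test executes, and whether it fails.
# This coverage matrix is invented purely for illustration.
coverage = {
    "test_add": ({1, 2, 3},    False),
    "test_sub": ({1, 2, 4, 5}, True),
    "test_mul": ({1, 4, 5},    True),
    "test_div": ({1, 2, 3},    False),
}

total_failed = sum(1 for _, failed in coverage.values() if failed)

def ochiai(line):
    """Suspiciousness of a line: ef / sqrt(total_failed * (ef + ep))."""
    ef = sum(1 for lines, failed in coverage.values() if failed and line in lines)
    ep = sum(1 for lines, failed in coverage.values() if not failed and line in lines)
    if ef == 0:
        return 0.0
    return ef / math.sqrt(total_failed * (ef + ep))

all_lines = set().union(*(lines for lines, _ in coverage.values()))
ranking = sorted(all_lines, key=ochiai, reverse=True)
for line in ranking:
    print(f"line {line}: suspiciousness {ochiai(line):.3f}")
# Lines 4 and 5, executed only by failing tests, rank first: this ranked
# list is the "list of suspicious code locations" the abstract describes.
```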
When the Nazis came to power in 1933, they immediately expelled Jewish academics, unwittingly changing the power balance of world science. When war came, these scientific refugees raced to engineer the atomic bomb, to prevent Nazi Germany from getting there first.
Many software engineering problems have been addressed with search algorithms. Search algorithms usually depend on several parameters (e.g., population size and crossover rate in genetic algorithms), and the choice of these parameters can have an impact on the performance of the algorithm. It has been formally proven in the No Free Lunch theorem that it is impossible to tune a search algorithm such that it will have optimal settings for all possible problems. So how should the parameters of a search algorithm be set for a given software engineering problem? In this paper, we carry out the largest empirical analysis so far on parameter tuning in search-based software engineering. More than one million experiments were carried out and statistically analyzed in the context of test data generation for object-oriented software using the EvoSuite tool. Results show that tuning does indeed have an impact on the performance of a search algorithm. But, at least in the context of test data generation, it does not seem easy to find good settings that significantly outperform the “default” values suggested in the literature. This has very practical value for both researchers (e.g., when different techniques are compared) and practitioners. Using “default” values is a reasonable and justified choice, whereas parameter tuning is a long and expensive process that might or might not pay off in the end.
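To make the parameters concrete, here is a minimal sketch of a genetic algorithm whose signature exposes the parameters the abstract names. The defaults (population size 50, crossover rate 0.75, mutation rate 0.01) are illustrative stand-ins for literature-suggested values, not EvoSuite's actual settings:

```python
import random

def genetic_search(fitness, genome_len, generations=100,
                   population_size=50, crossover_rate=0.75,
                   mutation_rate=0.01):
    """Maximize `fitness` over bit strings; the keyword defaults stand in
    for the kind of literature-suggested values the study evaluated."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(population_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:population_size // 2]      # truncation selection
        children = []
        while len(children) < population_size:
            a, b = random.sample(parents, 2)
            if random.random() < crossover_rate:  # single-point crossover
                cut = random.randrange(1, genome_len)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            # Flip each bit with probability mutation_rate.
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# Toy objective: count of 1-bits ("one-max"), standing in for a coverage score.
best = genetic_search(sum, genome_len=32)
print(sum(best), "of 32 bits set")
```

Tuning, in the paper's sense, means searching over values of these keyword arguments for a given problem; the finding is that this rarely beats simply keeping the defaults.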
The research community has long recognized a complex interrelationship between fault detection, test adequacy criteria, and test set size. However, there is substantial confusion about whether and how to experimentally control for test set size when assessing how well an adequacy criterion is correlated with fault detection and when comparing test adequacy criteria. Resolving the confusion, this paper makes the following contributions: (1) A review of contradictory analyses of the relationships between fault detection, test adequacy criteria, and test set size. Specifically, this paper addresses the supposed contradiction of prior work and explains why test set size is neither a confounding variable, as previously suggested, nor an independent variable that should be experimentally manipulated. (2) An explication and discussion of the experimental designs of prior work, together with a discussion of conceptual and statistical problems, as well as specific guidelines for future work. (3) A methodology for comparing test adequacy criteria on an equal basis, which accounts for test set size without directly manipulating it through unrealistic stratification. (4) An empirical evaluation that compares the effectiveness of coverage-based testing, mutation-based testing, and random testing. Additionally, this paper proposes probabilistic coupling, a methodology for assessing the representativeness of a set of test goals for a given fault and for approximating the fault-detection probability of adequate test sets.
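One way to read contribution (3) in code: instead of forcing all test sets to a fixed size, sample criterion-adequate sets from a test pool and estimate how often such a set detects the fault, letting set size fall out of the sampling. The pool, coverage goals, and detection outcomes below are invented for illustration; this is a sketch of the general idea, not the paper's exact methodology:

```python
import random

# Hypothetical pool: each test covers some goals; some tests detect the fault.
pool = {
    "t1": ({"g1", "g2"}, False),
    "t2": ({"g3"},       True),
    "t3": ({"g2", "g3"}, False),
    "t4": ({"g1", "g3"}, True),
}
all_goals = {"g1", "g2", "g3"}

def sample_adequate_set():
    """Add randomly chosen tests until every goal is covered; the resulting
    set is criterion-adequate, and its size is not fixed in advance."""
    tests, covered = [], set()
    names = list(pool)
    random.shuffle(names)
    for name in names:
        if covered >= all_goals:
            break
        goals, _ = pool[name]
        if goals - covered:        # keep only tests that add new coverage
            tests.append(name)
            covered |= goals
    return tests

trials = 10_000
detected = sum(
    any(pool[name][1] for name in sample_adequate_set())
    for _ in range(trials)
)
print(f"estimated fault-detection probability: {detected / trials:.2f}")
```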
Dynamic specification mining observes program executions to infer models of normal program behavior. What makes us believe that we have seen sufficiently many executions? The TAUTOKO ("Tautoko" is the Māori word for "enhance, enrich.") typestate miner generates test cases that cover previously unobserved behavior, systematically extending the execution space and enriching the specification. To our knowledge, this is the first combination of systematic test case generation and typestate mining, a combination with clear benefits: on a sample of 800 defects seeded into six Java subjects, a static typestate verifier fed with enriched models would report significantly more true positives and significantly fewer false positives than the initial models.
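A typestate model of the kind TAUTOKO mines is, in essence, a finite automaton over the sequence of methods called on an object. Here is a minimal sketch of the mining and checking steps, assuming traces are plain lists of method names; the traces and the file-like API are invented:

```python
# Observed call sequences on, say, a file-like object (invented traces).
traces = [
    ["open", "read", "read", "close"],
    ["open", "write", "close"],
]

# Mine a typestate automaton: a state is the last method called, and
# transitions are the consecutive call pairs observed in the traces.
transitions = set()
for trace in traces:
    state = "<init>"
    for call in trace:
        transitions.add((state, call))
        state = call

def check(trace):
    """Flag any call pair never observed in the mined model (potential misuse)."""
    state = "<init>"
    for call in trace:
        if (state, call) not in transitions:
            return f"violation: {call} after {state}"
        state = call
    return "ok"

print(check(["open", "read", "close"]))  # ok
print(check(["read"]))                   # violation: read after <init>
```

Transitions absent from the mined set are exactly what generated test cases can target: by attempting to exercise them, a miner in TAUTOKO's spirit distinguishes genuinely forbidden behavior from behavior that simply was never observed.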