As seen in Wired and Time
A revealing look at how negative biases against women of color are embedded in search engine results and algorithms
Run a Google search for "black girls": what will you find? "Big Booty" and other sexually explicit terms are likely to come up as top search terms. But if you type in "white girls," the results are radically different. The suggested porn sites and un-moderated discussions about "why black women are so sassy" or "why black women are so angry" present a disturbing portrait of black womanhood in modern society.
In Algorithms of Oppression, Safiya Umoja Noble challenges the idea that search engines like Google offer an equal playing field for all forms of ideas, identities, and activities. Data discrimination is a real social problem; Noble argues that the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color, specifically women of color.
Through an analysis of textual and media searches as well as extensive research on paid online advertising, Noble exposes a culture of racism and sexism in the way discoverability is created online. As search engines and their related companies grow in importance (operating as a source for email, a major vehicle for primary and secondary school learning, and beyond), understanding and reversing these disquieting trends and discriminatory practices is of utmost importance.
An original, surprising and, at times, disturbing account of bias on the internet, Algorithms of Oppression contributes to our understanding of how racism is created, maintained, and disseminated in the 21st century.
Safiya Noble discusses search engine bias in an interview with USC Annenberg School for Communication and Journalism
Considering the use of dynamical systems in practical applications, often only limited regions in the time or frequency domain are of interest. Therefore, it usually pays off to compute local approximations of the used dynamical systems in the frequency and time domain. In this paper, we consider a structure-preserving extension of the frequency- and time-limited balanced truncation methods to second-order dynamical systems. We give a full overview of the first-order limited balanced truncation methods and extend them to second-order systems by using the different second-order balanced truncation formulas from the literature. We also present numerical methods for solving the arising large-scale sparse matrix equations and give numerical modifications to deal with the problematic case of second-order systems. The results are then illustrated on three numerical examples.
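The limited and second-order variants in the abstract all build on classical first-order balanced truncation. As a point of reference, here is a minimal sketch of the plain (unlimited, first-order) method for a small dense system; the Lyapunov solves use a Kronecker-product formulation that only works at toy sizes, whereas the paper addresses large-scale sparse equations. All function and variable names here are illustrative, not taken from the paper.

```python
import numpy as np

def balanced_truncation(A, B, C, r):
    """Plain first-order balanced truncation of x' = Ax + Bu, y = Cx.

    Solves the Lyapunov equations A P + P A^T + B B^T = 0 and
    A^T Q + Q A + C^T C = 0 via Kronecker products (toy sizes only),
    balances the Gramians, and keeps the r states with the largest
    Hankel singular values.
    """
    n = A.shape[0]
    I = np.eye(n)
    # Row-major vectorization: vec(A P + P A^T) = (kron(A, I) + kron(I, A)) vec(P)
    P = np.linalg.solve(np.kron(A, I) + np.kron(I, A),
                        -(B @ B.T).flatten()).reshape(n, n)   # controllability Gramian
    Q = np.linalg.solve(np.kron(A.T, I) + np.kron(I, A.T),
                        -(C.T @ C).flatten()).reshape(n, n)   # observability Gramian
    # Square-root balancing: factor the Gramians, then SVD of the cross product
    S = np.linalg.cholesky(P)
    R = np.linalg.cholesky(Q)
    U, hsv, Vt = np.linalg.svd(R.T @ S)       # hsv = Hankel singular values
    # Projection matrices restricted to the r dominant directions (W^T T = I_r)
    T = S @ Vt[:r].T / np.sqrt(hsv[:r])
    W = R @ U[:, :r] / np.sqrt(hsv[:r])
    Ar, Br, Cr = W.T @ A @ T, W.T @ B, C @ T
    return Ar, Br, Cr, hsv
```

The frequency- and time-limited variants replace the Gramians above with integrals restricted to the region of interest, and the second-order extensions additionally preserve the mass/damping/stiffness structure; neither refinement is shown here.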
Truncation is a statistical phenomenon that occurs in many time‐to‐event studies. For example, autopsy‐confirmed studies of neurodegenerative diseases are subject to an inherent left and right truncation, also known as double truncation. When the goal is to study the effect of risk factors on survival, the standard Cox regression model cannot be used when the survival time is subject to truncation. Existing methods that adjust for both left and right truncation in the Cox regression model require independence between the survival times and truncation times, which may not be a reasonable assumption in practice. We propose an expectation‐maximization algorithm to relax the independence assumption in the Cox regression model under left, right, or double truncation to an assumption of conditional independence on the observed covariates. The resulting regression coefficient estimators are consistent and asymptotically normal. We demonstrate through extensive simulations that the proposed estimator has little bias and has a similar or lower mean‐squared error compared to existing estimators. We implement our approach to assess the effect of occupation on survival in subjects with autopsy‐confirmed Alzheimer's disease.
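To make the independence assumption that the paper relaxes concrete, here is a minimal sketch of the selection-probability adjustment used by existing double-truncation methods: a subject is observed only if its survival time T falls between its left and right truncation times, and under independence the inclusion probability can be inverted into a weight. The exponential model and all names below are illustrative assumptions, not the paper's method (which replaces independence with conditional independence via an EM algorithm).

```python
import math

def inclusion_prob_exponential(rate, left, right):
    """P(L <= T <= R | L=left, R=right) for T ~ Exponential(rate),
    assuming the truncation times (L, R) are independent of T --
    the assumption the paper's EM algorithm relaxes."""
    return math.exp(-rate * left) - math.exp(-rate * right)

def ipw_weights(rate, lefts, rights):
    """Inverse-probability-of-selection weights for observed subjects:
    subjects with small inclusion probability are up-weighted to correct
    the sampling bias induced by double truncation."""
    return [1.0 / inclusion_prob_exponential(rate, l, r)
            for l, r in zip(lefts, rights)]
```

In a weighted Cox fit these weights would multiply each subject's contribution to the partial likelihood; when truncation times depend on survival through unobserved channels, the probabilities above are wrong, which is the gap the proposed conditional-independence EM approach addresses.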
In studies on lifetimes, the population occasionally contains statistical units that were born before the data collection started. Units that died before this start are left-truncated. For all other units, the age at the study start is often recorded, and we aim at testing whether this second measurement is independent of the genuine measure of interest, the lifetime. Our basic model of dependence is the one-parameter Gumbel–Barnett copula. For simplicity, the marginal distribution of the lifetime is assumed to be Exponential, and for the age at study start, namely the distribution of birth dates, we assume a Uniform distribution. Also for simplicity, and to fit our application, we assume that units that die after our study period are also truncated. As a result from point process theory, we can approximate the truncated sample by a Poisson process and thereby derive its likelihood. Identification, consistency, and the asymptotic distribution of the maximum-likelihood estimator are derived. Testing for positive truncation dependence must include hypothetical independence, which coincides with the boundary of the copula's parameter space. By non-standard theory, the maximum-likelihood estimator of the exponential and the copula parameter is distributed as a mixture of a two- and a one-dimensional normal distribution. For the proof, the third parameter, the unobservable sample size, is profiled out. An interesting result is that it makes a difference whether one views the data as a truncated sample or as a simple sample from the truncated population, though not a large one. The application is 55 thousand double-truncated lifetimes of German businesses that closed down over the period 2014 to 2016. The likelihood attains its maximum for the copula parameter at the boundary of the parameter space, so the p-value of the test is 0.5. Life expectancy does not increase relative to the year of foundation.
Using a Farlie–Gumbel–Morgenstern copula, which models both positive and negative dependence, we find that the life expectancy of German enterprises even decreases significantly over time. A simulation under the conditions of the application suggests that the tests retain the nominal level and have good power.
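The one-parameter Gumbel–Barnett copula named in the abstract has a closed-form CDF and density, which is what makes the likelihood above tractable. The following minimal sketch evaluates both; the density formula is obtained by differentiating the CDF twice (a standard calculation, not taken from the paper), and theta = 0 is the independence boundary on which the abstract's test statistic sits.

```python
import math

def gumbel_barnett_cdf(u, v, theta):
    """One-parameter Gumbel-Barnett copula
    C(u, v) = u v exp(-theta ln(u) ln(v)), theta in [0, 1];
    theta = 0 gives the independence copula C(u, v) = u v."""
    return u * v * math.exp(-theta * math.log(u) * math.log(v))

def gumbel_barnett_density(u, v, theta):
    """Copula density c(u, v) = d^2 C / (du dv). Differentiating the CDF
    twice gives
      c(u, v) = exp(-theta ln u ln v) * ((1 - theta ln u)(1 - theta ln v) - theta),
    which reduces to 1 under independence (theta = 0)."""
    lu, lv = math.log(u), math.log(v)
    return math.exp(-theta * lu * lv) * ((1 - theta * lu) * (1 - theta * lv) - theta)
```

In a likelihood for truncated pairs, this density enters each observed unit's contribution, with the truncation handled by the Poisson-process approximation described in the abstract.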
Linear truncations package for Macaulay2. Cranton Heller, Lauren; Nemati, Navid. The Journal of Software for Algebra and Geometry, Volume 12, Issue 1, 12/2022. Journal Article.
We introduce a new framework for quantifying correlated uncertainties of the infinite-matter equation of state derived from chiral effective field theory (χEFT). Bayesian machine learning via Gaussian processes with physics-based hyperparameters allows us to efficiently quantify and propagate theoretical uncertainties of the equation of state, such as χEFT truncation errors, to derived quantities. We apply this framework to state-of-the-art many-body perturbation theory calculations with nucleon-nucleon and three-nucleon interactions up to fourth order in the χEFT expansion. This produces the first statistically robust uncertainty estimates for key quantities of neutron stars. We give results up to twice nuclear saturation density for the energy per particle, pressure, and speed of sound of neutron matter, as well as for the nuclear symmetry energy and its derivative. At nuclear saturation density, the predicted symmetry energy and its slope are consistent with experimental constraints.
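The basic idea behind χEFT truncation-error estimates of this kind can be sketched in its simplest, pointwise form: write an observable as an expansion y = y_ref · Σ c_n Q^n in an expansion parameter Q, model the dimensionless coefficients c_n as draws from a zero-mean Gaussian with scale cbar, and sum the variances of the omitted orders. This is a deliberately stripped-down sketch under those stated assumptions; the paper's Gaussian-process framework goes further by correlating the errors across densities, which this formula ignores.

```python
import math

def truncation_error_std(y_ref, cbar, Q, k):
    """Pointwise 1-sigma truncation-error estimate for an EFT expansion
    y = y_ref * sum_n c_n Q^n truncated after order k, assuming the
    coefficients c_n are iid Gaussian with standard deviation cbar
    (cbar and Q acting as 'physics-based hyperparameters').

    Summing the geometric series of omitted variances gives
      var = y_ref^2 * cbar^2 * Q^(2(k+1)) / (1 - Q^2).
    """
    return abs(y_ref) * cbar * Q ** (k + 1) / math.sqrt(1.0 - Q ** 2)
```

Each extra order shrinks the estimated error by a factor of Q, which is the sense in which the fourth-order calculations quoted in the abstract come with systematically smaller uncertainty bands than lower-order ones.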
Detection of double Joint Photographic Experts Group (JPEG) compression is an important part of image forensics. Although methods in past studies have been presented for detecting double JPEG compression with a different quantization matrix, the detection of double JPEG compression with the same quantization matrix is still a challenging problem. In this paper, an effective method to detect recompression in color images by using the conversion error, rounding error, and truncation error on the pixel in the spherical coordinate system is proposed. The randomness of truncation errors, rounding errors, and quantization errors results in random conversion errors. The number of pixels with conversion errors is used to extract six-dimensional features. The truncation and rounding errors of each pixel's three channels are mapped to the spherical coordinate system, based on the relation of a color image to the pixel values in its three channels. These are converted into amplitude and angles to extract 30-dimensional features, and 8-dimensional auxiliary features are extracted from the numbers of special points and special blocks. As a result, a total of 44-dimensional features are used in classification with the support vector machine (SVM) method. Thereafter, the support vector machine recursive feature elimination (SVM-RFE) method is used to improve the classification accuracy. The experimental results show that the performance of the proposed method is better than that of existing methods.
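The per-pixel error quantities the abstract builds its features on can be illustrated on a single pixel. The sketch below uses the standard JFIF-style RGB-to-YCbCr equations and is only an assumption about the error definitions (rounding error = rounded minus exact value; truncation error = the part clipped away by the [0, 255] range); the paper's actual feature extraction, spherical-coordinate mapping, and special-point counts are not reproduced here. Note also that Python's round() rounds half-to-even, whereas JPEG codecs typically round half away from zero.

```python
def rgb_to_ycbcr(r, g, b):
    """JFIF-style RGB -> YCbCr conversion (real-valued, before rounding)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

def pixel_errors(r, g, b):
    """Per-channel (rounding, truncation) errors for one pixel:
    rounding error = round(x) - x, truncation error = the correction
    applied when clipping the rounded value into [0, 255]."""
    errors = []
    for x in rgb_to_ycbcr(r, g, b):
        rounded = round(x)
        clipped = min(255, max(0, rounded))
        errors.append((rounded - x, clipped - rounded))
    return errors
```

Because these errors are small and effectively random, recompressing an image with the same quantization matrix leaves statistical traces in their distribution, which is what the 44-dimensional feature vector and the SVM classifier in the abstract exploit.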