Adversarial Optimization provides a reliable, practical way to match two implicitly defined distributions, one of which is typically represented by a sample of real data, and the other by a parameterized generator. Matching the distributions is achieved by minimizing a divergence between them, and estimating this divergence involves a secondary optimization task, which typically requires training a model to discriminate between the distributions. The choice of the model has its trade-off: high-capacity models provide good estimates of the divergence but generally require large sample sizes to be trained properly. In contrast, low-capacity models tend to require fewer samples for training; however, they might provide biased estimates. The computational cost of Adversarial Optimization becomes significant when sampling from the generator is expensive. One practical example of such a setting is fine-tuning the parameters of complex computer simulations. In this work, we introduce a novel family of divergences that enables faster optimization convergence as measured by the number of samples drawn from the generator. Varying the capacity of the underlying discriminator model during optimization leads to a significant speed-up. The proposed divergence family suggests using low-capacity models to compare distant distributions (typically, at early optimization steps) and gradually growing the capacity as the distributions become closer to each other, which allows for a significant acceleration of the initial stages of optimization. This acceleration is demonstrated on two fine-tuning problems involving the Pythia event generator and two of the most popular black-box optimization algorithms: Bayesian Optimization and Variational Optimization. Experiments show that, given the same budget, adaptive divergences yield results up to an order of magnitude closer to the optimum than the Jensen-Shannon divergence.
While we consider physics-related simulations, adaptive divergences can be applied to any stochastic simulation.
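The adaptive-capacity idea described above can be illustrated with a minimal numerical sketch. Everything here is an illustrative assumption rather than the paper's actual construction: the discriminator is a polynomial-feature logistic regression (its degree plays the role of model capacity), and a discriminator-based lower bound on the Jensen-Shannon divergence is computed with models of growing capacity, stopping at the cheapest model whose estimate already certifies the distributions as clearly distinct.

```python
import numpy as np

def poly_features(x, degree):
    # Feature map whose degree controls the discriminator's capacity.
    return np.stack([x ** d for d in range(degree + 1)], axis=1)

def train_discriminator(real, fake, degree, lr=0.1, steps=500):
    # Logistic regression by gradient ascent: label 1 = real, 0 = generated.
    X = np.concatenate([poly_features(real, degree), poly_features(fake, degree)])
    y = np.concatenate([np.ones(len(real)), np.zeros(len(fake))])
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
        w += lr * X.T @ (y - p) / len(y)
    return w

def jsd_estimate(real, fake, degree, eps=1e-6):
    # Discriminator-based lower bound on the Jensen-Shannon divergence.
    w = train_discriminator(real, fake, degree)
    d_real = 1.0 / (1.0 + np.exp(-np.clip(poly_features(real, degree) @ w, -30, 30)))
    d_fake = 1.0 / (1.0 + np.exp(-np.clip(poly_features(fake, degree) @ w, -30, 30)))
    return (np.log(2)
            + 0.5 * np.mean(np.log(d_real + eps))
            + 0.5 * np.mean(np.log(1.0 - d_fake + eps)))

def adaptive_divergence(real, fake, degrees=(1, 3, 5), threshold=0.1):
    # Try discriminators of growing capacity; stop at the cheapest one whose
    # estimate already exceeds the threshold, i.e. the distributions are far.
    for degree in degrees:
        est = jsd_estimate(real, fake, degree)
        if est > threshold:
            return est, degree
    return est, degree
```

Distant distributions (typical of early optimization steps) are then resolved by the cheapest model, while close distributions trigger the higher-capacity ones.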
In this paper we propose a novel modification of Contrastive Language-Image Pre-Training (CLIP) guidance for the task of unsupervised backlit image enhancement. Our work builds on the state-of-the-art CLIP-LIT approach, which learns a prompt pair by constraining the text-image similarity between a prompt (negative/positive sample) and a corresponding image (backlit image/well-lit image) in the CLIP embedding space. The learned prompts then guide an image enhancement network. Based on the CLIP-LIT framework, we propose two novel methods of CLIP guidance. First, we show that instead of tuning prompts in the space of text embeddings, it is possible to tune their embeddings directly in the latent space without any loss in quality. This accelerates training and potentially enables the use of additional encoders that do not have a text counterpart. Second, we propose a novel approach that does not require any prompt tuning. Instead, based on CLIP embeddings of backlit and well-lit images from the training data, we compute a residual vector in the embedding space as the simple difference between the mean embeddings of the well-lit and backlit images. This vector then guides the enhancement network during training, pushing backlit images towards the space of well-lit images. This approach further dramatically reduces training time, stabilizes training, and produces high-quality enhanced images without artifacts, in both supervised and unsupervised training regimes. Additionally, we show that residual vectors can be interpreted, revealing biases in the training data and thereby enabling potential bias correction.
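The residual-vector computation above is simple enough to sketch directly. This is a minimal illustration with hypothetical function names, and the cosine-based guidance loss is an assumed stand-in for whatever loss the enhancement network is actually trained with:

```python
import numpy as np

def residual_vector(well_lit_emb, backlit_emb):
    # Residual direction in the CLIP embedding space: the difference between
    # the mean well-lit embedding and the mean backlit embedding, normalized.
    v = well_lit_emb.mean(axis=0) - backlit_emb.mean(axis=0)
    return v / np.linalg.norm(v)

def guidance_loss(enhanced_emb, backlit_emb, v):
    # Encourage the enhanced image to move along the residual direction:
    # the cosine similarity between (enhanced - backlit) and v should be high.
    delta = enhanced_emb - backlit_emb
    cos = (delta @ v) / (np.linalg.norm(delta, axis=-1) + 1e-8)
    return float(np.mean(1.0 - cos))
```

Because the vector is a plain difference of means, it can also be inspected (e.g., by finding nearest text embeddings), which is what makes the bias analysis mentioned above possible.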
With the growing abilities of generative models, artificial content detection becomes an increasingly important and difficult task. However, all popular approaches to this problem suffer from poor generalization across domains and generative models. In this work, we focus on the robustness of AI-generated image (AIGI) detectors. We analyze existing state-of-the-art AIGI detection methods based on frozen CLIP embeddings and show how to interpret them, shedding light on how images produced by various AI generators differ from real ones. Next, we propose two ways to improve robustness: one based on removing harmful components of the embedding vector and one based on selecting the best-performing attention heads in the image encoder model. Our methods increase the mean out-of-distribution (OOD) classification score by up to 6% for cross-model transfer. We also propose a new dataset for AIGI detection and use it in our evaluation; we believe this dataset will help boost further research. The dataset and code are provided as a supplement.
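The component-removal idea can be sketched as a greedy search: given a fixed linear probe over embedding components, zero out any component whose removal improves a held-out out-of-distribution score. This is only an assumed illustration of the general idea; the paper's actual selection criterion and probe may differ:

```python
import numpy as np

def ood_score(X, y, w):
    # Accuracy of a fixed linear probe sign(X @ w) on held-out OOD data.
    return float(np.mean((X @ w > 0) == y))

def prune_components(w, X_ood, y_ood):
    # Greedily zero out embedding components whose removal improves the
    # probe's score on out-of-distribution validation data.
    w = w.copy()
    improved = True
    while improved:
        improved = False
        base = ood_score(X_ood, y_ood, w)
        for i in np.flatnonzero(w):
            trial = w.copy()
            trial[i] = 0.0
            score = ood_score(X_ood, y_ood, trial)
            if score > base:
                w, base, improved = trial, score, True
    return w
```

The same search template could, in principle, be applied one level up, over attention heads instead of embedding coordinates.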
The "biological effect-dose" dependence was studied in mature outbred female mice during 30 days after low-intensity X-ray exposure at a total dose below 1 mGy. The spleen index (relative spleen mass) decreased in the group of mice that had low spleen-index values in the control. The values of these parameters in the groups of irradiated mice are shown to be determined mainly by the total X-ray dose. The content of lipid peroxidation products and the amount of extracellular DNA in the blood plasma of the irradiated mice are determined mainly by the change of dose rate during irradiation. No linear relationship was found between changes in any of the studied parameters and the total radiation dose. The data obtained allow us to suggest the relative spleen mass, the content of lipid peroxidation products, and the amount of extracellular DNA as tests for estimating the biological consequences of X-ray radiation at low doses and changing dose rates.
Due to the rapid development of large language models, people increasingly often encounter texts that may start out as written by a human but continue as machine-generated. Detecting the boundary between human-written and machine-generated parts of such texts is a challenging problem that has not received much attention in the literature. We attempt to bridge this gap and examine several ways to adapt state-of-the-art artificial text detection classifiers to the boundary detection setting. We push all detectors to their limits, using the Real or Fake text benchmark that contains short texts on several topics and includes generations of various language models. We use this diversity to examine in depth the robustness of all detectors in cross-domain and cross-model settings to provide baselines and insights for future research. In particular, we find that perplexity-based approaches to boundary detection tend to be more robust to the peculiarities of domain-specific data than supervised fine-tuning of the RoBERTa model; we also identify which features of the text confuse boundary detection algorithms and negatively influence their performance in cross-domain settings.
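A perplexity-based boundary detector of the kind mentioned above can be sketched in a few lines. The sketch assumes per-token negative log-likelihoods from some language model are already available, and uses the common observation that human-written prefixes tend to score a higher per-token NLL under an LM than machine-generated suffixes; the exact detector in the benchmark setting may differ:

```python
import numpy as np

def detect_boundary(token_nll):
    # Pick the split point that maximizes the drop in mean per-token NLL
    # from the (human-written) prefix to the (machine-generated) suffix.
    n = len(token_nll)
    best_k, best_gap = 1, -np.inf
    for k in range(1, n):
        gap = np.mean(token_nll[:k]) - np.mean(token_nll[k:])
        if gap > best_gap:
            best_k, best_gap = k, gap
    return best_k
```

On a sequence whose NLL drops sharply at the human-to-machine transition, the argmax of this gap recovers the transition index.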
Adversarial Optimization (AO) provides a reliable, practical way to match two implicitly defined distributions, one of which is usually represented by a sample of real data, and the other is defined by a generator. Typically, AO involves training a high-capacity model at each step of the optimization. In this work, we consider computationally heavy generators, for which training of high-capacity models is associated with substantial computational costs. To address this problem, we introduce a novel family of divergences that varies the capacity of the underlying model and allows for a significant acceleration with respect to the number of samples drawn from the generator. We demonstrate the performance of the proposed divergences on several tasks, including tuning the parameters of a physics simulator, namely, the Pythia event generator.
A new class of substances exhibiting either radioprotective or radiosensitizing effects, depending on their concentration, has been found. The radioprotective effect is probably due to the resonant absorption of radiation energy and its transformation into low-energy forms, as well as to reactions with water radiolysis products. We studied the effects of 2,5-diphenyloxazole and di-2-(5-phenyloxazolyl)benzene at various concentrations, in combination with irradiation, on the growth of B-16 melanoma in mice and on their average lifespan. At certain combinations of irradiation and preparation doses, we observed an increase in the average lifespan of the mice and a reduction in tumor size. These data suggest that these substances could be used in the radiotherapy of tumors.
The synthetic antioxidant potassium phenosan in ultralow doses, administered in combination with the antitumor antibiotic adriamycin in a therapeutic dose (8 mg/kg), markedly prolonged the mean life span of tumor-bearing animals compared to adriamycin monotherapy. This effect depended on the dose of the antioxidant and was maximal at phenosan concentrations of 10^(-17) and 10^(-15) M. Potassium phenosan in these concentrations not only increased the mean life span, but also ensured the survival of 10-20% of the animals (in contrast to adriamycin monotherapy).
Acetylcholinesterase (ACE) activity and lipid peroxidation (LPO) parameters were measured in the blood of patients with Alzheimer's disease (AD) during treatment with amiridine and gliatiline. Treatment was accompanied by inhibition of ACE. There was a statistically significant relationship between clinical efficacy and changes in ACE activity. AD was characterized by significant changes in LPO parameters, with a three-fold increase in the level of primary oxidation products against the background of a sharp (seven-fold) increase in total lipid unsaturation. There was a statistically significant relationship between ACE activity and the levels of primary oxidation products in the RBC of patients with AD before and after treatment with amiridine and gliatiline.