Lipid bilayer membranes are often represented as a continuous nonpolar slab of a certain thickness bounded by two more polar interfaces. Phenomena such as peptide binding to the membrane surface, folding, insertion, translocation, and diffusion are typically interpreted on the basis of this view. In this Perspective, I argue that this representation of the membrane as a hydrophobic continuum solvent is not adequate to understand peptide–lipid interactions. Lipids are not small compared to membrane-active peptides: their sizes are similar. Therefore, peptide diffusion needs to be understood in terms of free volume, not classical continuum mechanics; peptide solubility or partitioning in membranes cannot be interpreted in terms of hydrophobic mismatch between membrane thickness and peptide length; and peptide folding and translocation, often involving cationic peptides, can only be understood by recognizing that lipids adapt to the presence of peptides and that the membrane may undergo considerable lipid redistribution in the process. In all of these instances, the detailed molecular interactions between the peptide residues and the lipid components are essential to understanding the mechanisms involved.
The determination and the meaning of interactions in lipid bilayers are discussed and interpreted through the Ising model. Originally developed to understand phase transitions in ferromagnetic systems, the Ising model applies equally well to lipid bilayers. In the case of a membrane, the essence of the Ising model is that each lipid is represented by a site on a lattice and the interaction of each site with its nearest neighbors is represented by an energy parameter ω. To calculate the thermodynamic properties of the system, such as the enthalpy, the Gibbs energy, and the heat capacity, the partition function is derived. The calculation of the configurational entropy factor in the partition function, however, requires approximations or the use of Monte Carlo (MC) simulations. Those approximations are described. Ultimately, MC simulations are used in combination with experiment to determine the interaction parameters ω in lipid bilayers. Several experimental approaches that can be used to obtain interaction parameters are described, including nearest-neighbor recognition, differential scanning calorimetry, and Förster resonance energy transfer. These approaches are most powerful when used in combination with MC simulations of Ising models. Lipid membranes of different compositions that have been studied with these approaches are discussed, including mixtures of cholesterol, saturated (ordered) phospholipids, and unsaturated (disordered) phospholipids. The interactions between those lipid species are examined as a function of molecular properties such as the degree of unsaturation and the acyl chain length. The general rule that emerges is that interactions between different lipids are usually unfavorable. The exception is that cholesterol interacts favorably with saturated (ordered) phospholipids. However, the interaction of cholesterol with unsaturated phospholipids becomes extremely unfavorable as the degree of unsaturation increases.
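The lattice picture described above lends itself to a compact Metropolis Monte Carlo sketch. The following is a minimal illustration, not the article's simulation code: a two-component lipid lattice in the Ising spirit, where only unlike nearest-neighbor contacts cost the interaction parameter ω (the value 300 cal/mol is an assumed example), and composition is conserved by swap (Kawasaki) moves.

```python
import math
import random

R = 1.987          # gas constant, cal/(mol K)
T = 300.0          # temperature, K
OMEGA_AB = 300.0   # unlike-neighbor interaction, cal/mol (assumed example value)
L = 20             # lattice side length

random.seed(1)
# Random 50:50 mixture of lipid species "A" and "B" on an L x L lattice.
lattice = [["A" if random.random() < 0.5 else "B" for _ in range(L)]
           for _ in range(L)]

def unlike_contacts(i, j):
    """Number of unlike nearest neighbors of site (i, j), periodic boundaries."""
    s = lattice[i][j]
    return sum(lattice[(i + di) % L][(j + dj) % L] != s
               for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

def total_energy():
    """E = omega_AB * (number of unlike contacts); each pair counted once."""
    return OMEGA_AB * sum(unlike_contacts(i, j)
                          for i in range(L) for j in range(L)) / 2

def mc_sweep():
    """One Metropolis sweep of composition-conserving swap moves."""
    for _ in range(L * L):
        i1, j1 = random.randrange(L), random.randrange(L)
        i2, j2 = random.randrange(L), random.randrange(L)
        if lattice[i1][j1] == lattice[i2][j2]:
            continue
        before = unlike_contacts(i1, j1) + unlike_contacts(i2, j2)
        lattice[i1][j1], lattice[i2][j2] = lattice[i2][j2], lattice[i1][j1]
        after = unlike_contacts(i1, j1) + unlike_contacts(i2, j2)
        dE = OMEGA_AB * (after - before)
        if dE > 0 and random.random() >= math.exp(-dE / (R * T)):
            # Reject the move: swap back.
            lattice[i1][j1], lattice[i2][j2] = lattice[i2][j2], lattice[i1][j1]

e0 = total_energy()
for _ in range(200):
    mc_sweep()
# With positive omega, unlike contacts are disfavored, so the energy
# relaxes below that of the random initial mixture.
print(e0, total_energy())
```

Averaging the energy over many such sweeps is what replaces the analytic configurational entropy factor that the partition function would otherwise require.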
A Case for Partitioned Bloom Filters
Almeida, Paulo Sergio
IEEE Transactions on Computers, Volume 72, Issue 6, 1 June 2023
Journal Article
Peer reviewed
Open access
In a partitioned Bloom filter (PBF) the bit vector is split into disjoint parts, one per hash function. Contrary to hardware designs, where they prevail, software implementations mostly ignore PBFs, considering them worse than standard Bloom filters (SBFs) due to their slightly larger false positive rate (FPR). In this paper, through an in-depth analysis, we first show that the FPR advantage of SBFs is smaller than commonly thought; more importantly, by deriving the per-element FPR, we show that SBFs have weak spots in the domain: elements that test as false positives much more frequently than expected. This is relevant in scenarios where an element is tested against many filters. Moreover, SBFs are prone to exhibit extremely weak spots if naive double hashing is used, something that occurs in mainstream libraries. PBFs exhibit a uniform distribution of the FPR over the domain, with no weak spots, even when naive double hashing is used. Finally, we survey scenarios beyond set membership testing, identifying many advantages of having disjoint parts in designs using SIMD techniques, for filter size reduction, testing set disjointness, and duplicate detection in streams. PBFs are better, and should replace SBFs, in general-purpose libraries and as the base for novel designs.
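The structural difference between the two designs is small. The sketch below is illustrative only (salted SHA-256 stands in for the k hash functions; it is not the paper's construction): an SBF lets all k hashes address one shared m-bit vector, while a PBF gives each hash its own disjoint m/k-bit part.

```python
import hashlib

M, K = 1024, 4  # total bits, number of hash functions

def hashes(item, k, modulus):
    """k independent-ish hash values in [0, modulus), via salted SHA-256."""
    return [int(hashlib.sha256(f"{i}:{item}".encode()).hexdigest(), 16) % modulus
            for i in range(k)]

class SBF:
    """Standard Bloom filter: one shared m-bit vector for all hashes."""
    def __init__(self):
        self.bits = [False] * M
    def add(self, item):
        for h in hashes(item, K, M):
            self.bits[h] = True
    def __contains__(self, item):
        return all(self.bits[h] for h in hashes(item, K, M))

class PBF:
    """Partitioned Bloom filter: k disjoint parts of m/k bits, one per hash."""
    def __init__(self):
        self.part = [[False] * (M // K) for _ in range(K)]
    def add(self, item):
        for i, h in enumerate(hashes(item, K, M // K)):
            self.part[i][h] = True
    def __contains__(self, item):
        return all(self.part[i][h]
                   for i, h in enumerate(hashes(item, K, M // K)))

sbf, pbf = SBF(), PBF()
for w in ("apple", "banana", "cherry"):
    sbf.add(w)
    pbf.add(w)
assert "banana" in sbf and "banana" in pbf  # neither design has false negatives
```

Because each hash in the PBF can only set bits in its own part, the fraction of set bits is the same in every part, which is what yields the uniform per-element FPR the paper highlights.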
The mutual interactions between lipids in bilayers are reviewed, including mixtures of phospholipids and mixtures of phospholipids and cholesterol (Chol). Binary and ternary mixtures are considered, with special emphasis on membranes containing Chol, an ordered phospholipid, and a disordered phospholipid. Typically the ordered phospholipid is a sphingomyelin (SM) or a long-chain saturated phosphatidylcholine (PC), both of which have high phase transition temperatures; the disordered phospholipid is 1-palmitoyl-2-oleoylphosphatidylcholine (POPC) or dioleoylphosphatidylcholine (DOPC). The unlike nearest-neighbor interaction free energies (ω_AB) between lipids (including Chol), obtained by a variety of unrelated methods, are typically in the range of 0–400 cal/mol in absolute value. Most are positive, meaning that the interaction is unfavorable, but some are negative, meaning it is favorable. It is of special interest that favorable interactions occur mainly between ordered phospholipids and Chol. The interpretation of domain formation in complex mixtures of Chol and phospholipids in terms of phase separation or condensed complexes is discussed in the light of the values of lipid mutual interactions.
Background:
Evidence shows that religiosity and spirituality (R/S) are widely relied on in critical moments of life and that these beliefs are associated with clinical outcomes. However, further studies are needed to assess these beliefs during the COVID-19 pandemic.
Aims:
To evaluate the use of R/S during the COVID-19 pandemic in Brazil and to investigate the association between R/S and the mental health consequences of social isolation.
Methods:
Cross-sectional study conducted in May 2020. Online surveys assessed sociodemographic characteristics, R/S measures, social isolation characteristics, and the mental health consequences of isolation (hopefulness, fear, worrying and sadness). Adjusted regression models were used.
Results:
A total of 485 participants were included from all regions of Brazil. There was a high use of religious and spiritual beliefs during the pandemic, and this use was associated with better mental health outcomes. Lower levels of worrying were associated with greater private religious activities (OR = 0.466, CI 95%: 0.307–0.706), religious attendance (OR = 0.587, CI 95%: 0.395–0.871), spiritual growth (OR = 0.667, CI 95%: 0.448–0.993) and with an increase in religious activities (OR = 0.660, CI 95%: 0.442–0.986); lower levels of fear were associated with greater private religious activities (OR = 0.632, CI 95%: 0.422–0.949) and spiritual growth (OR = 0.588, CI 95%: 0.392–0.882); and lower levels of sadness were associated with spiritual growth (OR = 0.646, CI 95%: 0.418–0.997). Finally, hope was associated with all R/S variables to different degrees (OR ranging from 1.706 to 3.615).
Conclusions:
R/S seem to play an important role in the relief of suffering, influencing health outcomes and minimizing the consequences of social isolation. These results highlight the importance of public health measures that ensure the continuity of R/S activities during the pandemic and of training healthcare professionals to address these issues.
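For readers unfamiliar with the statistic reported above, an odds ratio (OR) and its 95% confidence interval are computed from a 2×2 table as follows. The counts below are hypothetical, invented purely for illustration; they are not taken from the study.

```python
import math

# Hypothetical 2x2 table (NOT the study's data):
#                         worried    not worried
# high private religious activity:  a=30   b=70
# low private religious activity:   c=55   d=45
a, b = 30, 70
c, d = 55, 45

or_ = (a * d) / (b * c)                      # cross-product odds ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR), Wald method
lo = math.exp(math.log(or_) - 1.96 * se_log_or)  # 95% CI lower bound
hi = math.exp(math.log(or_) + 1.96 * se_log_or)  # 95% CI upper bound
print(round(or_, 3), round(lo, 3), round(hi, 3))
```

An OR below 1 with a CI excluding 1, as in the worrying and fear results above, indicates that the outcome is less likely in the exposed group.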
Distributed data aggregation is an important task, allowing the decentralized determination of meaningful global properties, which can then be used to direct the execution of other applications. The resulting values are derived by the distributed computation of functions like Count, Sum, and Average. Application examples include determining the network size, total storage capacity, average load, majorities, and many others. In the last decade, many different approaches have been proposed, with different trade-offs in terms of accuracy, reliability, and message and time complexity. Due to the considerable number and variety of aggregation algorithms, it can be difficult and time-consuming to determine which techniques are most appropriate for specific settings, justifying a survey to aid in this task. This work reviews the state of the art on distributed data aggregation algorithms, providing three main contributions. First, it formally defines the concept of aggregation, characterizing the different types of aggregation functions. Second, it succinctly describes the main aggregation techniques, organizing them in a taxonomy. Finally, it provides guidelines for the selection and use of the most relevant techniques, summarizing their principal characteristics.
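One well-known family of techniques in this space can be illustrated with a toy simulation (this sketch is ours, not from the survey): pairwise averaging gossip. Each round, two random nodes average their values; since the global sum is invariant under each exchange, every node converges to the network-wide Average, and functions like Count follow with standard tricks (e.g. Count = 1 / Average of an indicator that is 1 at a single node and 0 elsewhere).

```python
import random

random.seed(7)
N = 50
values = [random.uniform(0, 100) for _ in range(N)]  # e.g. per-node load
target = sum(values) / N                             # the true Average

for _ in range(5000):  # gossip exchanges
    i, j = random.sample(range(N), 2)
    avg = (values[i] + values[j]) / 2
    values[i] = values[j] = avg  # both nodes adopt the pairwise average

# The spread across nodes shrinks toward 0 as all nodes converge to target.
spread = max(values) - min(values)
print(spread)
```

This toy version ignores the failure, message-loss, and topology concerns that distinguish the real algorithms surveyed, which is precisely where their accuracy/reliability/complexity trade-offs arise.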
• We present a robust image dataset for parking space classification.
• We evaluate textural-based descriptors for parking space detection.
• Classifiers are evaluated on different parking lots.
• Classifiers are evaluated on parking lots that were used for training.
Outdoor parking lot vacancy detection systems have attracted a great deal of attention in the last decade due to the large number of practical applications. However, a common problem that researchers in this field very often face is the lack of a representative dataset for their experiments. To mitigate this difficulty, in this paper we introduce a new parking lot dataset composed of 695,899 images captured from two parking lots with three different camera views. The acquisition protocol yields static images showing illumination variation across sunny, overcast and rainy days. We believe researchers will find this dataset a very useful tool, since it allows future benchmarking and evaluation. The dataset is currently available for research purposes upon request. To gain better insight into this dataset, we have evaluated two textural descriptors, Local Binary Patterns and Local Phase Quantization, with a Support Vector Machine classifier to detect parking lot vacancy. In the experiments where the same view was used for both training and testing, we reached outstanding recognition rates, greater than 99%. The main challenge, though, lies in building a general classifier able to detect parking spaces in parking lots that were not used for training. In this sense, the best result achieved by the texture-based classifier was about 89%. The observed drop in performance shows that additional investigation is necessary to create classification schemes less dependent on the training set. Other researchers can use these results as a baseline performance when testing their own algorithms on this dataset.
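A simplified sketch of the texture step in the pipeline described above: computing a basic 8-neighbor Local Binary Pattern (LBP) histogram for a grayscale patch, which in the paper's setup would feed an SVM classifier. This is plain 3×3 LBP in pure Python for clarity, not the specific LBP/LPQ variants or implementation evaluated in the paper.

```python
def lbp_histogram(img):
    """img: 2D list of grayscale values. Returns a 256-bin LBP histogram."""
    h, w = len(img), len(img[0])
    # Clockwise 8-neighborhood starting at the top-left neighbor.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = [0] * 256
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            center = img[i][j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                # Set the bit when the neighbor is at least as bright.
                if img[i + di][j + dj] >= center:
                    code |= 1 << bit
            hist[code] += 1
    return hist

# Tiny example patch: a vertical edge (dark left half, bright right half).
patch = [[0, 0, 255, 255]] * 4
hist = lbp_histogram(patch)
assert sum(hist) == (4 - 2) * (4 - 2)  # one LBP code per interior pixel
```

The histogram, normalized, becomes the fixed-length feature vector for each parking-space crop; the classifier then operates on texture statistics rather than raw pixels, which is what gives some robustness to illumination change.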
The mechanisms of six different antimicrobial, cytolytic, and cell-penetrating peptides, including some of their variants, are discussed and compared. The specificity of these polypeptides varies; however, they all form amphipathic α-helices when bound to membranes, and there are no striking differences in their sequences. We have examined the thermodynamics and kinetics of their interaction with phospholipid vesicles, namely, binding and peptide-induced dye efflux. The thermodynamics of binding calculated using the Wimley-White interfacial hydrophobicity scale are in good agreement with the values derived from experiment. The generally accepted view that binding affinity determines functional specificity is also supported by experiments in model membranes. We now propose the hypothesis that it is the thermodynamics of the insertion of the peptide into the membrane, from a surface-bound state, that determine the mechanism.
Traffic congestion is a major concern in urban centers, as it affects society, the environment, and the economy. Many studies have applied computational intelligence (CI) to improve mobility in urban centers. Some of this research focuses on strategies for traffic light programming, since traffic coordination is complex, involving many parameters, variables, and dynamic behavior, and an inefficient traffic control plan can increase delays and contribute to congestion. Although the literature on traffic control strategies is extensive, gaps remain: some studies do not optimize traffic signals automatically and in real time (that is, according to vehicle demand on the roads), do not consider multiple objectives, or do not use a network of intersections in their experiments. In addition, some proposed models depend on simulation to evaluate the solutions of CI algorithms, making deployment in real situations more complex. In this context, this paper presents a new method, Active Control of Traffic Signals (ACTS), to optimize traffic light plans over a network of intersections in real time. ACTS is associated with the Non-Dominated Sorting Genetic Algorithm to handle multiple objectives in the optimization process (minimizing the average delay time and the number of vehicle stops per cycle). To test the applicability of the model, a real dataset of vehicle demand collected by the Company of Transport and Traffic of Belo Horizonte (BHTrans) is loaded into the AIMSUN simulator; the method is then applied and compared with the current traffic control plan used by BHTrans. The results show that the ACTS method reduces the average vehicle delay by almost half compared with the current BHTrans solution. In practice, this means less time spent in traffic, faster traffic flow, and reduced congestion.
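The selection core behind the Non-Dominated Sorting Genetic Algorithm used above can be sketched in a few lines: extracting the non-dominated (Pareto) front of candidate signal plans under the two minimized objectives, average delay and stops per cycle. The candidate plan values below are hypothetical, for illustration only.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at
    least one (both objectives are minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated_front(points):
    """Plans not dominated by any other plan: the first Pareto front."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (average delay in s, stops per cycle) for four hypothetical timing plans.
plans = [(42.0, 1.8), (35.0, 2.1), (50.0, 1.5), (40.0, 2.5)]
front = non_dominated_front(plans)
# (40.0, 2.5) is dominated by (35.0, 2.1): worse on both objectives.
```

The full algorithm repeats this sorting over successive fronts and uses crowding distance to keep the population spread along the delay-versus-stops trade-off, rather than collapsing to a single plan.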
• We show that most DCS methods can be adapted to concept drift scenarios.
• A time dependency is modeled according to the concept drift nature (real/virtual).
• We discuss the impact of pool pruning and introduce the concept diversity idea.
• The DCS is tested under real and virtual concept drift scenarios.
• The PKLot dataset is used as a real-world concept drift benchmark.
One popular approach to classification problems in a static environment is to use a Dynamic Classifier Selection (DCS)-based method that selects a custom classifier/ensemble for each test instance according to its neighborhood in a validation set, so the selection is region-dependent. This idea can be extended to concept drift scenarios, where the distribution or the a posteriori probabilities may change over time. In these scenarios, however, classifier selection becomes not only region- but also time-dependent. By adding a time dependency, we hypothesize in this work that any DCS-based approach can be used to handle concept drift problems. Since some regions may not be affected by a concept drift, we introduce the idea of concept diversity, which shows that a pool containing classifiers trained under different concepts may be beneficial when dealing with concept drift problems through a DCS approach. The impact of pruning mechanisms is discussed, and seven well-known DCS methods are evaluated in the proposed framework, using a robust experimental protocol based on 12 common concept drift problems with different properties, plus the PKLot dataset under an experimental protocol specially designed in this work to test concept drift methods. The experimental results show that the DCS approach comes out ahead in terms of stability, i.e., it performs well in most cases while requiring almost no parameter tuning.
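The region-dependent selection idea at the heart of DCS can be sketched with one classic rule, Overall Local Accuracy (OLA): for each test instance, pick from the pool the classifier most accurate on the instance's k nearest neighbors in the validation set. This is a minimal one-dimensional illustration of the general idea, not the paper's framework; the toy threshold classifiers and data are hypothetical.

```python
def ola_select(pool, val_x, val_y, x, k=3):
    """Return the pool classifier with the best accuracy on the k validation
    points nearest to x (Overall Local Accuracy rule)."""
    neighbors = sorted(range(len(val_x)), key=lambda i: abs(val_x[i] - x))[:k]
    def local_acc(clf):
        return sum(clf(val_x[i]) == val_y[i] for i in neighbors)
    return max(pool, key=local_acc)

# Two hypothetical threshold classifiers, each correct in a different region,
# as if trained under different concepts.
clf_a = lambda x: int(x > 0.3)
clf_b = lambda x: int(x > 0.7)
pool = [clf_a, clf_b]

# Validation labels follow clf_a's concept for small x and clf_b's for
# large x, i.e. a region-dependent concept.
val_x = [0.1, 0.2, 0.4, 0.6, 0.65, 0.9]
val_y = [0, 0, 1, 0, 0, 1]

chosen = ola_select(pool, val_x, val_y, 0.62)  # query in the "large x" region
```

Under drift, the same rule becomes time-dependent as well: the validation neighborhood is drawn from recent data, so the selected classifier tracks whichever concept currently holds in the query's region.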