The paper focuses on cooperative games where the worth of any coalition of agents is determined by the optimal value of a matching problem on (possibly weighted) graphs. These games come in different forms that can be grouped into two broad classes, namely matching and allocation games, and they have a wide spectrum of applications, ranging from two-sided markets where buyers and sellers are encoded as vertices in a graph, to allocation problems where indivisible goods have to be assigned (matched) to agents in a fair way, possibly using monetary compensations.
The Shapley value and the related notion of Banzhaf value have often been identified as appropriate solution concepts for many applications of matching/allocation games, but their computation is intractable in general. It is known that these concepts can be computed in polynomial time for matching games on unweighted trees and on graphs having degree at most two. However, it was open whether or not such positive results could be extended to the more general case of graphs having bounded treewidth, and to the case of allocation problems on weighted graphs.
The paper provides a positive answer to these questions, by showing that computing the Shapley value and the Banzhaf value is feasible in polynomial time for the following classes of games: matching games over unweighted graphs having bounded treewidth, allocation games over weighted graphs having bounded treewidth, and allocation games over weighted graphs in which each good is of interest to at most two agents. Without such structural restrictions, computing these solution concepts on allocation games is instead shown to be #P-hard, even in the case of unweighted graphs.
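To make the setting concrete, the brute-force baseline that these results improve upon can be sketched as follows: in a matching game, the worth of a coalition is the size of a maximum matching in the subgraph it induces, and the Shapley value averages an agent's marginal contributions over all orderings. This is an illustrative sketch only (the path graph, the names, and the exponential-time enumeration are assumptions of the example, not the paper's polynomial-time algorithms):

```python
from itertools import combinations, permutations
from math import factorial

# Toy matching game on the unweighted path a-b-c-d (hypothetical example).
AGENTS = ["a", "b", "c", "d"]
EDGES = [("a", "b"), ("b", "c"), ("c", "d")]

def worth(coalition):
    """Size of a maximum matching among edges fully inside the coalition."""
    edges = [e for e in EDGES if e[0] in coalition and e[1] in coalition]
    for k in range(len(edges), 0, -1):
        for sub in combinations(edges, k):
            verts = [v for e in sub for v in e]
            if len(verts) == len(set(verts)):  # vertex-disjoint: valid matching
                return k
    return 0

def shapley(agent):
    """Average marginal contribution of `agent` over all agent orderings."""
    total = 0
    for order in permutations(AGENTS):
        before = set(order[:order.index(agent)])
        total += worth(before | {agent}) - worth(before)
    return total / factorial(len(AGENTS))
```

On this path, the two inner agents receive more than the endpoints (7/12 versus 5/12 each), and the values sum to the grand-coalition worth of 2; the enumeration over all orderings is what makes the general problem intractable.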
Coalitional games model scenarios where players can collaborate by forming coalitions in order to obtain higher worths than by acting in isolation. A fundamental issue of coalitional games is to single out the most desirable outcomes in terms of worth distributions, usually called solution concepts. Since decisions taken by realistic players cannot involve unbounded resources, recent computer science literature has advocated the importance of assessing the complexity of computing with solution concepts. In this context, the paper provides a complete picture of the complexity issues arising with three prominent solution concepts for coalitional games with transferable utility, namely, the core, the kernel, and the bargaining set, whenever the game worth function is represented in some reasonably compact form. The starting points of the investigation are the settings of graph games and of marginal contribution nets, where the worth of any coalition can be computed in polynomial time in the size of the game encoding and for which various open questions were stated in the literature. The paper answers these questions and, in addition, provides new insights on succinctly specified games, by characterizing the computational complexity of the core, the kernel, and the bargaining set in relevant generalizations and specializations of the two settings. Concerning the generalizations, the paper shows that dealing with arbitrary polynomial-time computable worth functions, regardless of the specific game encoding being considered, does not provide any additional source of complexity compared to graph games and marginal contribution nets. Only for the core, instead, is a slight increase in complexity exhibited for classes of games whose worth functions encode NP-hard optimization problems, as in the case of certain combinatorial games. As for the specializations, the paper illustrates various tractability results on classes of bounded-treewidth graph games and marginal contribution networks.
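As a small illustration of the simplest of these solution concepts: a payoff vector is in the core iff it distributes exactly the grand-coalition worth and no coalition could gain by deviating. For an explicitly listed game this check is a direct enumeration over coalitions (the 3-player game below is a hypothetical example, not one taken from the paper):

```python
from itertools import combinations

# Worth function of a hypothetical 3-player transferable-utility game.
v = {frozenset(): 0,
     frozenset("1"): 0, frozenset("2"): 0, frozenset("3"): 0,
     frozenset("12"): 4, frozenset("13"): 4, frozenset("23"): 4,
     frozenset("123"): 6}

def in_core(payoff):
    """Check core membership of a payoff vector {player: amount}."""
    players = set(payoff)
    if sum(payoff.values()) != v[frozenset(players)]:
        return False  # not efficient: grand-coalition worth not fully split
    for r in range(1, len(players)):
        for coal in combinations(players, r):
            if sum(payoff[p] for p in coal) < v[frozenset(coal)]:
                return False  # this coalition would profitably deviate
    return True
```

Here the egalitarian split (2, 2, 2) is in the core, while (3, 3, 0) is not, since coalition {2, 3} could secure 4 on its own; compact encodings make exactly this kind of check nontrivial, because coalitions can no longer be enumerated explicitly.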
Analyzing and predicting the dynamics of opinion formation in the context of social environments are problems that have attracted much attention in the literature. While grounded in social psychology, these problems are nowadays popular within the artificial intelligence community, where opinion dynamics are often studied via game-theoretic models in which individuals/agents hold opinions taken from a fixed set of discrete alternatives, and where the goal is to find those configurations where the opinions expressed by the agents emerge as a kind of compromise between their innate opinions and the social pressure they receive from the environment. As a matter of fact, however, these studies are based on very high-level and sometimes simplistic formalizations of the social environments, where the mental state of each individual is typically encoded as a variable taking values from a Boolean domain. To overcome these limitations, the paper proposes a framework generalizing such discrete preference games by modeling the reasoning capabilities of agents in terms of weighted propositional logics. It is shown that the framework easily encodes different kinds of earlier approaches and fits more expressive scenarios populated by conformist and dissenter agents. Problems related to the existence and computation of stable configurations are studied, under different theoretical assumptions on the structural shape of the social interactions and on the class of logic formulas that are allowed. Remarkably, in the course of identifying relevant tractability islands, the paper devises a novel technical machinery whose significance goes beyond the specific application to analyzing opinion formation and diffusion, since it significantly enlarges the class of Integer Linear Programs known to be tractable so far.
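A minimal version of the underlying model can be sketched as follows: each agent holds a Boolean opinion, pays one unit for disagreeing with its innate opinion and one unit per disagreeing neighbor, and repeatedly plays a best response; since such games admit a potential function, the dynamics reach a stable (Nash) configuration. This toy sketch uses plain Boolean opinions only, whereas the paper's framework replaces them with weighted propositional logics:

```python
def stable_profile(innate, neighbors):
    """Best-response dynamics for a toy discrete preference game.

    innate[i] is agent i's innate Boolean opinion; neighbors[i] lists agent
    i's neighbors in the social graph. Returns a stable opinion profile.
    """
    s = list(innate)
    changed = True
    while changed:
        changed = False
        for i in range(len(s)):
            # Cost of expressing opinion x: disagreement with the innate
            # opinion plus disagreements with neighbors (social pressure).
            cost = lambda x: (x != innate[i]) + sum(x != s[j] for j in neighbors[i])
            best = min((0, 1), key=cost)
            if cost(best) < cost(s[i]):
                s[i] = best
                changed = True
    return s

# Three mutually connected agents: the single dissenter conforms.
print(stable_profile([1, 1, 0], {0: [1, 2], 1: [0, 2], 2: [0, 1]}))  # [1, 1, 1]
```

The potential-function argument guarantees termination here; the paper's contribution is precisely to characterize when existence and computation of such stable configurations remain tractable in far richer settings.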
Background The cyclin D1-cyclin-dependent kinase (CDK)4/6 inhibitor palbociclib in combination with endocrine therapy shows remarkable efficacy in the management of estrogen receptor (ER)-positive and HER2-negative advanced breast cancer (BC). Nevertheless, resistance to palbociclib frequently arises, highlighting the need to identify new targets toward more comprehensive therapeutic strategies in BC patients. Methods BC cell lines resistant to palbociclib were generated and used as a model system. Gene silencing techniques and overexpression experiments, real-time PCR, immunoblotting and chromatin immunoprecipitation studies, as well as cell viability, colony and 3D spheroid formation assays, served to evaluate the involvement of the G protein-coupled estrogen receptor (GPER) in the resistance to palbociclib in BC cells. Molecular docking simulations were also performed to investigate the potential interaction of palbociclib with GPER. Furthermore, BC cells co-cultured with cancer-associated fibroblasts (CAFs) isolated from mammary carcinoma were used to investigate whether GPER signaling may contribute to functional cell interactions within the tumor microenvironment toward palbociclib resistance. Finally, by bioinformatics analyses and k-means clustering on clinical and expression data of large cohorts of BC patients, the clinical significance of novel mediators of palbociclib resistance was explored. Results Dissecting the molecular events that characterize ER-positive BC cells resistant to palbociclib, the down-regulation of ERalpha along with the up-regulation of GPER were found. Investigating the molecular events involved in the up-regulation of GPER, we determined that the epidermal growth factor receptor (EGFR) interacts with the promoter region of GPER and stimulates its expression, driving the resistance of BC cells to palbociclib treatment.
Adding further cues to these data, we ascertained that palbociclib does induce pro-inflammatory transcriptional events via GPER signaling in CAFs. Of note, by performing co-culture assays we demonstrated that GPER contributes to the reduced sensitivity to palbociclib also by facilitating the functional interaction between BC cells and CAFs, key components of the tumor microenvironment. Conclusions Overall, our results provide novel insights into the molecular events through which GPER may contribute to palbociclib resistance in BC cells. Additional investigations are warranted to assess whether targeting the GPER-mediated interactions between BC cells and CAFs may be useful in more comprehensive therapeutic approaches for BC resistant to palbociclib. Keywords: Palbociclib, Resistance, Breast cancer, Estrogen receptor, G protein-coupled estrogen receptor (GPER), Cancer-associated fibroblasts (CAFs)
Analyzing gene expression profiles (GEP) through artificial intelligence provides meaningful insight into cancer disease. This study introduces the DeepSHAP Autoencoder Filter for Genes Selection (DSAF-GS), a novel deep learning and explainable artificial intelligence-based approach for feature selection in genomics-scale data. DSAF-GS exploits the autoencoder’s reconstruction capabilities without changing the original feature space, enhancing the interpretation of the results. Explainable artificial intelligence is then used to select the informative genes for chronic lymphocytic leukemia prognosis of 217 cases from a GEP database comprising roughly 20,000 genes. The model for prognosis prediction achieved an accuracy of 86.4%, a sensitivity of 85.0%, and a specificity of 87.5%. According to the proposed approach, predictions were strongly influenced by CEACAM19 and PIGP, moderately influenced by MKL1 and GNE, and poorly influenced by other genes. The 10 most influential genes were selected for further analysis. Among them, FADD, FIBP, GNE, IGF1R, MKL1, PIGP, and SLC39A6 were identified in the Reactome pathway database as involved in signal transduction, transcription, protein metabolism, the immune system, the cell cycle, and apoptosis. Moreover, according to the network model of the 3D protein-protein interaction (PPI) explored using the NetworkAnalyst tool, FADD, FIBP, IGF1R, QTRT1, GNE, SLC39A6, and MKL1 appear coupled into a complex network. Finally, all 10 selected genes showed predictive power on time to first treatment (TTFT) in univariate analyses on a basic prognostic model including IGHV mutational status, del(11q) and del(17p), NOTCH1 mutations, β2-microglobulin, Rai stage, and B-lymphocytosis, known to predict TTFT in CLL.
However, only the IGF1R (hazard ratio (HR) 1.41, 95% CI 1.08-1.84, P=0.013), COL28A1 (HR 0.32, 95% CI 0.10-0.97, P=0.045), and QTRT1 (HR 7.73, 95% CI 2.48-24.04, P<0.001) genes were significantly associated with TTFT in multivariable analyses when combined with the prognostic factors of the basic model, ultimately increasing Harrell’s c-index and the explained variation to 78.6% (versus 76.5% for the basic prognostic model) and 52.6% (versus 42.2% for the basic prognostic model), respectively. Also, the goodness of model fit was enhanced (χ2 = 20.1, P=0.002), indicating improved performance over the basic prognostic model. In conclusion, DSAF-GS identified a group of significant genes for CLL prognosis, suggesting future directions for bio-molecular research.
The increased availability of high-quality data from post-disaster field reconnaissance has enabled the use of deep learning algorithms in the field of geotechnical earthquake engineering. The 2010-2011 Canterbury earthquake sequence in New Zealand caused significant damage due to abundant manifestation of liquefaction-induced lateral spreading. The data available from this sequence are an ideal case study for deep learning analyses due to the amount and quality of information available through the New Zealand Geotechnical Database (NZGD). A dataset of about 7500 datapoints was collected and organized by the authors to develop a new Graph Neural Network (GNN) algorithm for lateral spreading in the Canterbury area. The comparison between predicted and observed data is performed using a feed-forward neural network. Several GNN models with different hyperparameters are explored, the best model is presented in this paper, and Explainable Artificial Intelligence is applied to the model that provides the best performance. These computationally expensive analyses were carried out utilizing cloud-based computing capabilities offered by the Texas Advanced Computing Center (TACC), available to the natural hazards community through the DesignSafe cyberinfrastructure.
Linear temporal logic (LTL) is a modal logic where formulas are built over temporal operators relating events happening in different time instants. According to the standard semantics, LTL formulas are interpreted on traces spanning an infinite timeline. However, applications related to the specification and verification of business processes have recently pointed out the need for defining and reasoning about a variant of LTL, which we name LTLp, whose semantics is defined over process traces, that is, over finite traces such that, at each time instant, precisely one propositional variable (standing for the execution of some given activity) evaluates to true.
The paper investigates the theoretical underpinnings of LTLp and of a related logic formalism, named LTLf, which had already attracted attention in the literature and where formulas have the same syntax as in LTLp and are evaluated over finite traces, but without any constraint on the number of variables simultaneously evaluating to true. The two formalisms are comparatively analyzed, pointing out similarities and differences. In addition, a thorough complexity analysis is conducted for reasoning problems about LTLp and LTLf, considering arbitrary formulas as well as classes of formulas defined in terms of restrictions on the temporal operators that are allowed. Finally, based on the theoretical findings of the paper, a practical reasoner specifically tailored for LTLp and LTLf has been developed by leveraging state-of-the-art SAT solvers. The behavior of the reasoner has been experimentally compared with other systems available in the literature.
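The finite-trace semantics can be made concrete with a small evaluator: a trace is a list of the sets of propositions true at each instant (for LTLp, every set would be a singleton naming the executed activity). This is an illustrative sketch under an assumed tuple encoding of formulas, not the SAT-based reasoner described in the paper:

```python
# Minimal LTLf evaluator over finite traces (illustrative sketch).
# A formula is a nested tuple, e.g. ("U", ("atom", "a"), ("atom", "b")).
def holds(f, trace, i=0):
    """Does formula f hold on the finite trace at instant i?"""
    if i >= len(trace):
        return False
    op = f[0]
    if op == "atom":
        return f[1] in trace[i]
    if op == "not":
        return not holds(f[1], trace, i)
    if op == "and":
        return holds(f[1], trace, i) and holds(f[2], trace, i)
    if op == "X":  # next: on finite traces, a successor instant must exist
        return i + 1 < len(trace) and holds(f[1], trace, i + 1)
    if op == "F":  # eventually, within the remaining finite suffix
        return any(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == "G":  # globally, over the remaining finite suffix
        return all(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == "U":  # until: f[2] holds at some j, f[1] holds before it
        return any(holds(f[2], trace, j) and
                   all(holds(f[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")
```

For instance, on the process trace [{"a"}, {"b"}, {"c"}], the formula F c holds while G a does not; the finiteness of the trace is what changes the semantics of X compared to standard LTL.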
Adherence to the Mediterranean diet (MD) and physical activity (PA) in adolescence represent powerful indicators of healthy lifestyles in adulthood. The aim of this longitudinal study was to investigate the impact of a nutrition education program (NEP) on the adherence to the MD and on the inflammatory status in healthy adolescents, categorized into three groups according to their level of PA (inactivity, moderate intensity, and vigorous intensity). As part of the DIMENU (Dieta Mediterranea & Nuoto) study, 85 adolescents (aged 14–17 years) participated in the nutrition education sessions provided by a team of nutritionists and endocrinologists at T0. All participants underwent anthropometric measurements, bioelectrical impedance analysis (BIA), and measurements of inflammatory biomarkers such as ferritin, erythrocyte sedimentation rate (ESR), and C-reactive protein (CRP) levels. Data were collected at baseline (T0) and 6 months after the NEP (T1). To assess the adherence to the MD, we used the KIDMED score. In our adolescents, we found average MD adherence, which increased at T1 compared with T0 (T0: 6.03 ± 2.33 vs. T1: 6.96 ± 2.03, p = 0.002), with an enhanced percentage of adolescents with optimal (≥8 score) MD adherence over the study period (T0: 24.71% vs. T1: 43.52%, p = 0.001). Interestingly, in linear mixed-effects models, we found that NEP and vigorous-intensity PA levels independently influenced the KIDMED score (β = 0.868, p < 0.0001 and β = 1.567, p = 0.009, respectively). Using ANOVA, NEP had significant effects on serum ferritin levels (p < 0.001), while both NEP and PA influenced ESR (p = 0.035 and p = 0.002, respectively). We also observed in linear mixed-effects models that NEP had a negative effect on ferritin and CRP (β = −14.763, p < 0.001 and β = −0.714, p = 0.02, respectively). Our results suggest the usefulness of promoting a healthy lifestyle, including both nutrition education interventions and PA, to improve MD adherence and to impact the inflammatory status in adolescence as a strategy for the prevention of chronic non-communicable diseases over the entire lifespan.
Coalitional games are mathematical models suited to studying payoff distribution problems in cooperative scenarios. In abstract terms, a coalitional game can be specified by explicitly listing all possible (in fact, exponentially many) coalitions with their associated distributions. This naïve representation, however, quickly becomes infeasible over games involving many agents, thereby calling for suitable compact representations, that is, encoding mechanisms that (on some specific classes of games of interest) take an amount of space that grows polynomially with the number of agents. To date, a plethora of compact encodings have already been introduced and analyzed from the algorithmic and computational viewpoints. Despite their specific technical differences, these encodings typically share the assumption that the utility associated with a coalition can be freely transferred among agents. By contrast, designing encoding mechanisms for the non-transferable utility (NTU) setting is a research issue that has been largely unexplored so far.
The paper addresses this issue by proposing a compact encoding for representing and reasoning about the outcomes of NTU coalitional games, founded on answer set programming. By exploiting the expressiveness of this well-known knowledge representation formalism, it is shown that the proposed representation can succinctly encode several games of interest within a wide range of application domains. Computational issues arising in this setting are studied too, by addressing questions related to payoff distributions enjoying desirable stability properties. Finally, a prototype system supporting the proposed framework has been implemented by leveraging a state-of-the-art answer set engine, and the results of a thorough experimental activity conducted on top of it are discussed.
Most structural decomposition methods can be characterized through hypergraph games that are variations of the Robber and Cops graph game characterizing the notion of treewidth. Decomposition trees correspond to monotone winning strategies of the cops, where the escape space of the robber on the hypergraph is shrunk monotonically. Cops using non-monotone strategies are more powerful, but such strategies do not correspond to valid decompositions, in general. The paper provides a general way to exploit the power of non-monotone strategies, by allowing a “greedy” form of non-monotonicity. It is shown that deciding the existence of a (non-monotone) greedy winning strategy (and computing one, if any) is tractable. Moreover, from greedy strategies valid decomposition trees can be computed efficiently. As a consequence, structural methods gain power and larger islands of tractability are obtained, such as the one based on the new notion of greedy hypertree decomposition.