Uniformly nanosized NiO nanoparticles were prepared via a chemical co-precipitation method. Electrochemical characterization of the NiO nanoparticles showed that the initial specific discharge capacity reached 900 mAh/g at 0.1 A/g and 500 mAh/g at 2 A/g, revealing that the as-synthesized material is a potential candidate anode material for lithium-ion batteries (LIBs).
Cancer stem cell-like side population (SP) cells, which may be responsible for recurrence, tumor metastasis, and resistance to cancer therapy, have been identified and characterized in several gastric cancer cell lines. However, there is no report on the isolation of SP cells from the human gastric cancer cell line HGC-27. This study aims to analyze the proportion of SP cells in the HGC-27 cell line, differentiate SP from non-side population (NSP) cells, and determine whether the SP cells have certain biological properties of stem cells.
(1) An HGC-27 cell suspension was prepared and stained with Hoechst 33342 and PI for flow cytometric isolation of SP cells. (2) Differences in proliferation and stemness-related gene expression profiles (CD133, CD44, OCT-4, MDR1, EpCAM, and ABCG2) between SP and NSP cells were assessed by sphere formation assay and quantitative real-time PCR. (3) The oncogenicity of SP and NSP cells was determined in nude mice in vivo.
(1) SP cells accounted for 0.1-1.0% of HGC-27 cells and decreased to 0% after verapamil inhibition. Using flow cytometry, we sorted 7.5×10⁵ SP cells; most HGC-27 cells were NSP cells. (2) Sphere formation and MTT assays demonstrated a significant difference in proliferation between SP and NSP cells, and gene expression analysis showed that the stemness-related genes were expressed at significantly higher levels in SP cells. (3) The oncogenicity experiment in nude mice revealed that 10⁵ SP cells were able to form tumors, demonstrating higher tumorigenicity than NSP cells.
These results collectively suggest that SP cells from the HGC-27 cell line have some cancer stem cell properties and could be used to study the pathogenesis of gastric cancer, which may contribute to the discovery of novel therapeutic targets.
Existing open-vocabulary image segmentation methods require a fine-tuning step on mask labels and/or image-text datasets. Mask labels are labor-intensive, which limits the number of categories in segmentation datasets. Consequently, the vocabulary capacity of pre-trained VLMs is severely reduced after fine-tuning. However, without fine-tuning, VLMs trained under weak image-text supervision tend to make suboptimal mask predictions. To alleviate these issues, we introduce a novel recurrent framework that progressively filters out irrelevant texts and enhances mask quality without training efforts. The recurrent unit is a two-stage segmenter built upon a frozen VLM. Thus, our model retains the VLM's broad vocabulary space and equips it with segmentation ability. Experiments show that our method outperforms not only the training-free counterparts, but also those fine-tuned with millions of data samples, and sets new state-of-the-art records for both zero-shot semantic and referring segmentation. Concretely, we improve the current record by 28.8, 16.0, and 6.9 mIoU on Pascal VOC, COCO Object, and Pascal Context, respectively.
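The recurrent filter-then-segment idea above can be sketched as a simple loop. This is an illustrative sketch only: `score_texts` and `segment` stand in for the two stages built on the frozen VLM, and their names, signatures, and the fixed threshold are assumptions, not the paper's actual interface.

```python
def recurrent_segment(image, texts, score_texts, segment,
                      threshold=0.5, max_iters=3):
    """Iteratively drop irrelevant texts, then re-segment with the
    reduced vocabulary (a sketch of the recurrent unit described above)."""
    masks = None
    for _ in range(max_iters):
        # Stage 1: frozen-VLM relevance score for each candidate text.
        scores = score_texts(image, texts, masks)
        kept = [t for t, s in zip(texts, scores) if s >= threshold]
        if kept == texts:
            break  # vocabulary is stable: the recurrence has converged
        texts = kept
        # Stage 2: predict masks only for the surviving texts.
        masks = segment(image, texts)
    return segment(image, texts), texts
```

Because the scorer and segmenter are frozen, each pass only shrinks the text set, so no training is involved and the VLM's full vocabulary remains available at the first iteration.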
This paper presents OxfordTVG-HIC (Humorous Image Captions), a large-scale dataset for humour generation and understanding. Humour is an abstract, subjective, and context-dependent cognitive construct involving several cognitive factors, making it a challenging task to generate and interpret. Hence, humour generation and understanding can serve as a new task for evaluating the ability of deep-learning methods to process abstract and subjective information. Due to the scarcity of data, humour-related generation tasks such as captioning remain under-explored. To address this gap, OxfordTVG-HIC offers approximately 2.9M image-text pairs with humour scores to train a generalizable humour captioning model. Contrary to existing captioning datasets, OxfordTVG-HIC features a wide range of emotional and semantic diversity, resulting in out-of-context examples that are particularly conducive to generating humour. Moreover, OxfordTVG-HIC is curated to be devoid of offensive content. We also show how OxfordTVG-HIC can be leveraged for evaluating the humour of a generated text. Through explainability analysis of the trained models, we identify the visual and linguistic cues influential for evoking humour prediction (and generation). We observe qualitatively that these cues are aligned with the benign violation theory of humour in cognitive psychology.
Rapid advancements in continual segmentation have yet to bridge the gap of scaling to large, continually expanding vocabularies under compute-constrained scenarios. We discover that traditional continual training leads to catastrophic forgetting under compute constraints and fails to outperform zero-shot segmentation methods. We introduce a novel strategy for semantic and panoptic segmentation with zero forgetting, capable of adapting to continually growing vocabularies without the need for retraining or large memory costs. Our training-free approach, kNN-CLIP, leverages a database of instance embeddings to enable open-vocabulary segmentation approaches to continually expand their vocabulary on any given domain with a single pass through the data, while storing only embeddings, minimizing both compute and memory costs. This method achieves state-of-the-art mIoU performance across large-vocabulary semantic and panoptic segmentation datasets. We hope kNN-CLIP represents a step forward in enabling more efficient and adaptable continual segmentation, paving the way for advances in real-world large-vocabulary continual segmentation methods.
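The core retrieval mechanism described above can be illustrated with a minimal sketch: a database of pre-computed, labeled instance embeddings (e.g. from a frozen CLIP image encoder) that grows by appending vectors in a single pass, and classifies a query embedding by k-nearest-neighbour majority vote. The class name, dimensions, and brute-force search are illustrative assumptions, not the paper's implementation, which would use an approximate-nearest-neighbour index at scale.

```python
import numpy as np

class EmbeddingDatabase:
    """Stores L2-normalized instance embeddings with class labels
    (a sketch of the kNN retrieval idea, not the paper's code)."""

    def __init__(self, dim):
        self.vectors = np.empty((0, dim), dtype=np.float32)
        self.labels = []

    def add(self, embeddings, labels):
        # Single pass over new-domain data: appending embeddings expands
        # the vocabulary with no retraining and zero forgetting.
        emb = np.asarray(embeddings, dtype=np.float32)
        emb /= np.linalg.norm(emb, axis=1, keepdims=True)
        self.vectors = np.vstack([self.vectors, emb])
        self.labels.extend(labels)

    def classify(self, query, k=5):
        # Cosine similarity reduces to a dot product on normalized vectors.
        q = np.asarray(query, dtype=np.float32)
        q /= np.linalg.norm(q)
        top = np.argsort(self.vectors @ q)[::-1][:k]
        votes = [self.labels[i] for i in top]
        return max(set(votes), key=votes.count)  # majority vote
```

Storing only embeddings (a few hundred floats per instance) rather than images or model weights is what keeps both memory and compute costs low.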