Purpose
Intensity-modulated radiation therapy (IMRT), including variations such as volumetric modulated arc therapy (VMAT) and tomotherapy, is a widely used and critically important technology for cancer treatment. It is a knowledge-intensive technology, due not only to its own technical complexity but also to the inherently conflicting nature of maximizing tumor control while minimizing normal organ damage. As IMRT experience, and especially carefully designed clinical plan data, has accumulated over the past two decades, a new set of methods commonly termed knowledge-based planning (KBP) has been developed; these methods aim to improve the quality and efficiency of IMRT planning by learning from databases of past clinical plans. Some of this development has recently led to commercial products, which have enabled the investigation of KBP in numerous clinical applications. In this literature review, we present a summary of published knowledge-based methods in IMRT and recent clinical validation results.
Methods
In March 2018, a literature search was conducted in the NIH Medline database using the PubMed interface to identify publications describing methods and validations related to KBP in IMRT, including variations such as VMAT and tomotherapy. The search criteria were designed with a broad scope to capture relevant results with high sensitivity. The authors filtered the search results according to predefined selection criteria, first by reviewing titles and abstracts and then by reviewing the full text. A few papers were added to the list based on the references of the reviewed papers. The final set of papers was reviewed and summarized here.
Results
The initial search yielded a total of 740 articles. A careful review of the titles, abstracts, and eventually the full text, supplemented by relevant articles found among the references, resulted in a final list of 73 articles published between 2011 and early 2018. These articles described methods for developing knowledge models that predict parameters such as dosimetric and dose-volume points, voxel-level doses, and objective function weights to improve or automate IMRT planning for various cancer sites, addressing different clinical and quality assurance needs and using a variety of machine learning approaches. A number of articles reported carefully designed clinical studies that assessed the performance of KBP models in realistic clinical applications. The overwhelming majority of the studies demonstrated the benefits of KBP in achieving comparable, and often improved, IMRT plan quality while reducing planning time and plan quality variation.
Conclusions
The number of KBP-related studies has been increasing steadily since 2011, indicating a growing interest in applying this approach to clinical practice. Validation studies have generally shown KBP to produce plans of quality comparable to those of expert planners while reducing the time and effort needed to generate plans. However, current studies are mostly retrospective and leverage relatively small datasets. Larger datasets collected through multi-institutional collaboration will enable the development of more advanced models to further improve the performance of KBP in complex clinical cases. Prospective studies will be an important next step toward widespread adoption of this exciting technology.
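As a concrete, entirely synthetic illustration of the dose-prediction idea behind KBP, the sketch below fits a least-squares model that predicts an organ-at-risk (OAR) mean dose from two toy geometric features. The features, coefficients, and data are invented for illustration and do not come from any study reviewed here; real KBP models train on libraries of past clinical plans with far richer features.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
overlap = rng.uniform(0.0, 0.5, n)    # fractional OAR/target overlap (toy feature)
distance = rng.uniform(0.0, 5.0, n)   # OAR-to-target distance in cm (toy feature)
# Hypothetical "library of past plans": dose rises with overlap, falls with distance.
mean_dose = 20 + 40 * overlap - 2.5 * distance + rng.normal(0, 1.0, n)

# Fit an ordinary least-squares predictor, the simplest KBP-style model.
X = np.column_stack([np.ones(n), overlap, distance])
coef, *_ = np.linalg.lstsq(X, mean_dose, rcond=None)

# Predict an achievable mean dose for a new patient's geometry.
new_patient = np.array([1.0, 0.3, 2.0])   # bias, overlap = 0.3, distance = 2 cm
predicted = float(new_patient @ coef)
print(round(predicted, 1))   # roughly 20 + 40*0.3 - 2.5*2 = 27 Gy
```

The predicted value serves as an achievability benchmark for a new plan, which is the essential KBP workflow regardless of the (usually far more sophisticated) model used.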
We are pleased to add this typescript to the Bone Marrow Transplantation Statistics Series. We realize the term cubic splines may be a bit off-putting to some readers, but stay with us and don't get lost in polynomial equations. What the authors describe is important conceptually and in practice. Have you ever tried to buy a new pair of hiking boots? Getting the correct fit is critical; shoes that are too small or too large will get you in big trouble! Now imagine if hiking shoes came in only 2 sizes, small and large, and your foot size was somewhere in between. You are in trouble. Sailing, perhaps?

Transplant physicians are often interested in the association between two variables, say pre-transplant measurable residual disease (MRD) test state and an outcome, say cumulative incidence of relapse (CIR). We typically reduce the results of an MRD test to a binary, negative or positive, often defined by an arbitrary cut-point. However, MRD state is a continuous biological variable, and reducing it to a binary discards what may be important, useful data when we try to correlate it with CIR. Put otherwise, we may miss the trees for the forest.

Another way to look at splines is as a technique for making smooth curves out of irregular data points. Consider, for example, trying to describe the surface of an egg. You could do it with a series of straight lines connecting points on the egg surface, but a much better representation would be combining groups of points into curves and then combining the curves. To prove this, try drawing an egg using the draw feature in Microsoft PowerPoint; you are making splines.

Gauthier and co-workers show us how to use cubic splines to get the maximum information from data points which may, unkindly, not lend themselves to dichotomization or a best-fit line. Please read on. We hope readers will find their typescript interesting and exciting, and that it will give them a new way to think about how to analyse data.
And no, a spline is not a bunch of cactus spines. Robert Peter Gale, Imperial College London, and Mei-Jie Zhang, Medical College of Wisconsin and CIBMTR.
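For readers who like to see the polynomial machinery the editorial promises not to get lost in, here is a minimal natural cubic spline built from scratch with NumPy. The data points are invented for illustration (imagine, say, a continuous MRD level against some outcome); the point is simply that smooth piecewise cubics pass exactly through irregular points instead of forcing a single straight line.

```python
import numpy as np

def natural_cubic_spline(x, y):
    # Natural cubic spline: second derivatives M solve a tridiagonal
    # system, with M = 0 at both endpoints ("natural" boundary condition).
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x) - 1
    h = np.diff(x)
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0
    for i in range(1, n):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2.0 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        rhs[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, rhs)

    def evaluate(t):
        # Locate the segment containing t and evaluate its cubic.
        i = np.clip(np.searchsorted(x, t) - 1, 0, n - 1)
        dx1, dx0 = x[i + 1] - t, t - x[i]
        return (M[i] * dx1**3 + M[i + 1] * dx0**3) / (6 * h[i]) \
            + (y[i] / h[i] - M[i] * h[i] / 6) * dx1 \
            + (y[i + 1] / h[i] - M[i + 1] * h[i] / 6) * dx0
    return evaluate

xs = [0.0, 0.8, 1.5, 3.0, 4.2, 5.0]   # irregular measurement points (toy data)
ys = [0.2, 1.1, 0.9, 2.5, 2.0, 3.1]
spline = natural_cubic_spline(xs, ys)

ok = all(abs(spline(a) - b) < 1e-9 for a, b in zip(xs, ys))
print(ok)   # the curve interpolates every observed point exactly
```

Between the knots the curve is smooth (continuous first and second derivatives), which is exactly what makes splines preferable to connecting the dots with straight segments.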
With continued climate change, soil drought stress has become the main limiting factor for crop growth in arid and semi-arid regions. A typical characteristic of drought stress is a burst of reactive oxygen species (ROS), causing oxidative damage. Plant-associated microbes, such as arbuscular mycorrhizal fungi (AMF), can regulate physiological and molecular responses to tolerate drought stress, and they have a strong ability to cope with drought-induced oxidative damage via enhanced antioxidant defence systems. AMF produce a limited oxidative burst in arbuscule-containing root cortical cells. Similar to plants, AMF modulate a fungal network of enzymatic (e.g. GmarCuZnSOD and GintSOD1) and non-enzymatic (e.g. GintMT1, GinPDX1 and GintGRX1) antioxidant defence systems to scavenge ROS. Plants also respond to mycorrhization by enhancing stress tolerance via metabolites and the induction of genes. The present review provides an overview of the network of plant–arbuscular mycorrhizal fungus dialogue in mitigating oxidative stress. Future studies should identify genes and transcription factors from both AMF and host plants that respond to drought stress, and utilize transcriptomics, proteomics and metabolomics to clarify the mechanism of the dialogue between plants and AMF in mitigating the oxidative burst.
The dialogue between arbuscular mycorrhizal fungi and host plants mitigates the drought-induced oxidative burst in hosts.
Recently, generative steganography, which transforms secret information into a generated image, has become a promising technique for resisting steganalysis detection. However, due to the inefficiency and irreversibility of the secret-to-image transformation, it is hard to find a good trade-off between information hiding capacity and extraction accuracy. To address this issue, we propose a secret-to-image reversible transformation (S2IRT) scheme for generative steganography. The proposed S2IRT scheme is based on a generative model, namely the Glow model, which enables a bijective mapping between a latent space with a multivariate Gaussian distribution and an image space with a complex distribution. In the process of S2I transformation, guided by a given secret message, we construct a latent vector and then map it to a generated image by the Glow model, so that the secret message is finally transformed into the generated image. Owing to the good efficiency and reversibility of the S2IRT scheme, the proposed steganographic approach achieves both a high hiding capacity and accurate extraction of the secret message from the generated image. Furthermore, a separate encoding-based S2IRT (SE-S2IRT) scheme is also proposed to improve robustness to common image attacks. The experiments demonstrate that the proposed steganographic approaches can achieve high hiding capacity (up to 4 bpp) and accurate information extraction (almost 100% accuracy) simultaneously, while maintaining desirable anti-detectability and imperceptibility.
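The core reversibility idea (bits mapped bijectively into a Gaussian latent space) can be sketched without the Glow model itself. The toy scheme below, which is an illustrative assumption and not the paper's actual S2IRT construction, encodes each k-bit group as an equal-probability interval of a standard normal and recovers the bits exactly from the latent values; in a full pipeline the latent vector would be pushed through an invertible flow such as Glow to obtain the stego image.

```python
import random
from statistics import NormalDist

K = 2                # bits encoded per latent dimension (toy choice)
N = NormalDist()     # standard normal

def bits_to_latent(bits, rng):
    # Each K-bit group picks one of 2**K equal-probability intervals of the
    # Gaussian; a latent value is drawn strictly inside that interval.
    latent = []
    for i in range(0, len(bits), K):
        idx = int(bits[i:i + K], 2)
        lo, hi = idx / 2**K, (idx + 1) / 2**K
        latent.append(N.inv_cdf(rng.uniform(lo + 1e-9, hi - 1e-9)))
    return latent

def latent_to_bits(latent):
    # Because the intervals partition the real line, recovery is exact.
    bits = ""
    for z in latent:
        idx = min(int(N.cdf(z) * 2**K), 2**K - 1)
        bits += format(idx, f"0{K}b")
    return bits

rng = random.Random(42)
secret = "1101001001110001"
z = bits_to_latent(secret, rng)
print(latent_to_bits(z) == secret)   # exact recovery of the secret bits
```

The latent values remain (marginally) Gaussian-distributed, which is what lets a flow model turn them into a natural-looking image without breaking the bijection.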
Recent works on video salient object detection have demonstrated that directly transferring the generalization ability of image-based models to video data without modeling spatial-temporal information remains nontrivial and challenging. Considering both the intraframe accuracy and interframe consistency of saliency detection, this article presents a novel cross-attention based encoder-decoder model under the Siamese framework (CASNet) for video salient object detection. A baseline encoder-decoder model trained with the Lovász softmax loss function is adopted as a backbone network to guarantee the accuracy of intraframe salient object detection. Self- and cross-attention modules are incorporated into our model to preserve the saliency correlation and improve interframe salient object detection consistency. Extensive experimental results obtained by ablation analysis and cross-data-set validation demonstrate the effectiveness of the proposed method. Quantitative results indicate that our CASNet model outperforms 19 state-of-the-art image- and video-based methods on six benchmark data sets.
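The self- and cross-attention machinery mentioned above can be made concrete with a minimal single-head scaled dot-product attention in NumPy. The shapes, random features, and single-head form are illustrative assumptions, not CASNet's actual modules: the only point is that self-attention relates locations within one frame while cross-attention lets one frame's features attend to another's.

```python
import numpy as np

def attention(query, key, value):
    # Scaled dot-product attention: softmax(QK^T / sqrt(d)) V.
    d = query.shape[-1]
    scores = query @ key.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ value

rng = np.random.default_rng(0)
frame_a = rng.normal(size=(5, 8))   # features of 5 locations in frame A
frame_b = rng.normal(size=(5, 8))   # features of 5 locations in frame B

self_attn = attention(frame_a, frame_a, frame_a)    # within one frame
cross_attn = attention(frame_a, frame_b, frame_b)   # frame A attends to frame B
print(self_attn.shape, cross_attn.shape)
```

In a video model the cross-attended features carry saliency cues between neighbouring frames, which is what enforces the interframe consistency the abstract emphasizes.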
To detect illegal copies of copyrighted images, recent copy detection methods mostly rely on the bag-of-visual-words (BOW) model, in which local features are quantized into visual words for image matching. However, both the limited discriminability of local features and the BOW quantization errors lead to many false local matches, which make it hard to distinguish similar images from copies. Geometric consistency verification is a popular technique for reducing false matches, but it neglects the global context information of local features and thus cannot solve this problem well. To address this problem, this paper proposes a global context verification scheme to filter false matches for copy detection. More specifically, after obtaining initial scale-invariant feature transform (SIFT) matches between images based on BOW quantization, the overlapping region-based global context descriptor (OR-GCD) is proposed to verify these matches and filter out false ones. The OR-GCD not only encodes relatively rich global context information of SIFT features but also has good robustness and efficiency, allowing effective and efficient verification. Furthermore, a fast image similarity measurement based on random verification is proposed to implement copy detection efficiently. In addition, we extend the proposed method to partial-duplicate image detection. Extensive experiments demonstrate that our method achieves higher accuracy than state-of-the-art methods and has efficiency comparable to the baseline method based on BOW quantization.
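The false-match problem that motivates verification can be seen in a tiny BOW simulation. The sketch below, with an invented random codebook and synthetic descriptors (it implements neither SIFT nor the OR-GCD), quantizes local descriptors to their nearest visual word and shows that raw word agreements far exceed the genuine copied correspondences, which is exactly why a verification stage is needed.

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = rng.normal(size=(50, 16))   # toy visual-word codebook (assumption)

def quantize(descriptors):
    # BOW step: assign each local descriptor to its nearest visual word.
    d2 = ((descriptors[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

# Two "images": the copy shares 20 descriptors; the rest is unrelated clutter.
shared = rng.normal(size=(20, 16))
img1 = np.vstack([shared, rng.normal(size=(30, 16))])
img2 = np.vstack([shared, rng.normal(size=(30, 16))])

w1, w2 = quantize(img1), quantize(img2)
raw_matches = sum(np.count_nonzero(w2 == w) for w in w1)  # all word agreements
true_matches = np.count_nonzero(w1[:20] == w2[:20])       # the copied region only
print(true_matches == 20, raw_matches >= true_matches)
```

The gap between raw word agreements and genuine correspondences is what a context descriptor such as the OR-GCD is designed to close, by checking that a match's surroundings agree in both images.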
A Review of Generalized Zero-Shot Learning Methods. Pourpanah, Farhad; Abdar, Moloud; Luo, Yuxuan; et al. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 4, April 2023. Journal article, peer-reviewed, open access.
Generalized zero-shot learning (GZSL) aims to train a model to classify data samples under the condition that some output classes are unknown during supervised learning. To address this challenging task, GZSL leverages semantic information of the seen (source) and unseen (target) classes to bridge the gap between them. Since its introduction, many GZSL models have been formulated. In this review paper, we present a comprehensive review of GZSL. First, we provide an overview of GZSL, including its problems and challenges. Then, we introduce a hierarchical categorization of GZSL methods and discuss the representative methods in each category. In addition, we discuss the available benchmark data sets and applications of GZSL, along with the research gaps and directions for future investigation.
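The semantic-space idea that underpins (G)ZSL can be sketched in a few lines: classes are described by attribute vectors, a projection from features to attribute space is learned on seen classes only, and an unseen class is recognized by its nearest attribute signature. Everything below (the three classes, their signatures, the hidden mixing matrix, and the linear projection) is an invented toy setup, not a method from the survey; note the unseen class is deliberately a combination of the seen ones so a linear projection can generalize to it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Class "signatures" in a 3-dim attribute space (hooved, striped, herbivore).
attrs = {"horse": [1, 0, 1], "tiger": [0, 1, 0], "zebra": [1, 1, 1]}
seen = ["horse", "tiger"]            # classes available during training
W_true = rng.normal(size=(3, 6))     # hidden attribute-to-feature mixing

def sample(cls, n):
    # Synthetic features: class attributes mixed into 6-dim space plus noise.
    a = np.array(attrs[cls], float)
    return a @ W_true + rng.normal(0, 0.05, size=(n, 6))

X = np.vstack([sample(c, 30) for c in seen])
A = np.vstack([[attrs[c]] * 30 for c in seen]).astype(float)

# Learn a feature -> attribute projection from SEEN classes only.
P, *_ = np.linalg.lstsq(X, A, rcond=None)

# Classify a sample of the UNSEEN class by its nearest attribute signature.
z = sample("zebra", 1) @ P
pred = min(attrs, key=lambda c: np.linalg.norm(z - np.array(attrs[c], float)))
print(pred)
```

Real GZSL methods must additionally fight the bias toward seen classes at test time, which is the main theme of the survey's categorization.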
In this paper, a new image indexing and retrieval algorithm using local mesh patterns is proposed for biomedical image retrieval. The standard local binary pattern encodes the relationship between a referenced pixel and its surrounding neighbors, whereas the proposed method encodes the relationships among the surrounding neighbors for a given referenced pixel in an image. The possible relationships among the surrounding neighbors depend on the number of neighbors, P. In addition, the effectiveness of the algorithm is confirmed by combining it with the Gabor transform. To prove its effectiveness, three experiments have been carried out on three different biomedical image databases: two on computed tomography (CT) databases and one on a magnetic resonance (MR) image database. The databases considered are the OASIS-MRI database, the NEMA-CT database, and the VIA/I-ELCAP database, which includes region-of-interest CT images. The results show a significant improvement in evaluation measures as compared to LBP, LBP with the Gabor transform, and other spatial- and transform-domain methods.
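The contrast the abstract draws (neighbour-vs-centre for LBP, neighbour-vs-neighbour for mesh patterns) is easy to show in code. Below is a standard 8-neighbour LBP alongside a mesh-style variant that compares each neighbour with another neighbour a fixed step around the circle; the variant is an illustrative simplification of the neighbour-vs-neighbour idea, not the paper's exact local mesh pattern encoding.

```python
import numpy as np

# Offsets of the 8 neighbours of a pixel, in a fixed circular order.
OFFS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_codes(img):
    # Standard LBP: each bit compares a neighbour with the CENTRE pixel.
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    centre = img[1:h - 1, 1:w - 1]
    for bit, (dy, dx) in enumerate(OFFS):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(int) << bit
    return codes

def mesh_codes(img, step=1):
    # Mesh-style idea: each bit compares a neighbour with ANOTHER neighbour
    # `step` positions away around the circle; the centre is not used.
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    for bit, (dy, dx) in enumerate(OFFS):
        dy2, dx2 = OFFS[(bit + step) % 8]
        a = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        b = img[1 + dy2:h - 1 + dy2, 1 + dx2:w - 1 + dx2]
        codes |= (a >= b).astype(int) << bit
    return codes

img = np.arange(25).reshape(5, 5)   # toy "image" with a uniform gradient
print(lbp_codes(img).shape, mesh_codes(img).shape)   # one code per interior pixel
```

Histograms of such codes over the image (or over Gabor-filtered versions of it) form the retrieval feature vector.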
The use of remote signals obtained from a wide-area measurement system (WAMS) introduces time delays into a wide-area damping controller (WADC), which can degrade system damping and even cause system instability. The time-delay margin is defined as the maximum time delay under which a closed-loop system remains stable. In this paper, the delay margin is introduced as an additional performance index for the synthesis of classical WADCs for flexible AC transmission system (FACTS) devices to damp inter-area oscillations. The proposed approach includes three parts: a geometric measure approach for selecting feedback remote signals; a residue method for designing phase-compensation parameters; and a Lyapunov stability criterion with linear matrix inequalities (LMIs) for calculating the delay margin and determining the gain of the WADC based on a trade-off between damping performance and delay margin. Three case studies are undertaken: a four-machine two-area power system demonstrates the design principle of the proposed approach, while a New England ten-machine 39-bus power system and a 16-machine 68-bus power system verify its feasibility on larger and more complex power systems. The simulation results verify the effectiveness of the proposed approach in balancing the delay margin and the damping performance.
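The delay-margin concept itself can be illustrated on a textbook scalar system (this is not the paper's LMI-based computation, just the classical single-delay case). For x'(t) = -a·x(t) - b·x(t - τ) with b > a > 0, the characteristic equation s + a + b·e^(-sτ) = 0 first reaches the imaginary axis at frequency ω = √(b² - a²), giving the margin τ* = arccos(-a/b)/ω; a forward-Euler simulation confirms decay below the margin and growth above it.

```python
import math

def delay_margin(a, b):
    # First imaginary-axis crossing of s + a + b*exp(-s*tau) = 0, b > a > 0.
    w = math.sqrt(b * b - a * a)
    return math.acos(-a / b) / w

a, b = 1.0, 2.0
tau_star = delay_margin(a, b)
print(round(tau_star, 4))

def simulate(tau, t_end=60.0, dt=1e-3):
    # Forward-Euler simulation of x'(t) = -a*x(t) - b*x(t - tau),
    # returning the peak |x| over the final few seconds.
    n_hist = max(1, int(round(tau / dt)))
    x = [1.0] * (n_hist + 1)            # constant initial history
    for _ in range(int(t_end / dt)):
        x.append(x[-1] + dt * (-a * x[-1] - b * x[-1 - n_hist]))
    return max(abs(v) for v in x[-6000:])

print(simulate(0.5 * tau_star) < 1e-3)   # below the margin: oscillation decays
print(simulate(1.5 * tau_star) > 1.0)    # above the margin: oscillation grows
```

For multi-machine systems with several delayed channels no such closed form exists, which is why Lyapunov-Krasovskii functionals and LMIs, as in the proposed approach, are used to bound the margin instead.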