Purpose:
To investigate the repeatability of reduced field‐of‐view diffusion‐weighted imaging (rFOV DWI) in quantifying apparent diffusion coefficients (ADCs) for human thyroid glands in a clinical setting.
Methods:
Nine healthy human volunteers were enrolled and underwent 3T MRI exams. Each volunteer received 3 longitudinal exams (2 weeks apart) with 2 repeated sessions within each exam, each including rFOV and conventional full field‐of‐view (fFOV) DWI scans. In the acquired DWI images, a fixed‐size region of interest (ROI; diameter = 8 mm) was placed on the thyroid glands to calculate ADC. ADC was calculated using a monoexponential function with a noise correction scheme. The repeatability of ADC was assessed using the coefficient of variation (CV) across sessions or exams, defined as r = 1 − CV (0 < r < 1), where CV = STD/m, STD is the standard deviation of ADC, and m is the mean ADC across sessions or exams. An experienced radiologist assessed and scored the rFOV and fFOV DW images based on image characteristics (1, nondiagnostic; 2, poor; 3, satisfactory; 4, good; and 5, excellent). Analysis of variance (ANOVA) was performed to compare ADC values, CV of ADC, repeatability of ADC across sessions and exams, and radiologic scores between the rFOV and fFOV DWI techniques.
Results:
There was no significant difference in ADC values across sessions and exams for either rFOV or fFOV DWI. The average CVs of both rFOV and fFOV DWI were less than 13%. The repeatability of ADC measurement did not differ significantly between rFOV and fFOV DWI. The overall image quality was significantly higher with rFOV DWI than with fFOV DWI.
Conclusion:
This study suggested that ADCs from both rFOV and fFOV DWI were repeatable, but that rFOV DWI had superior image quality for human thyroid glands in a clinical setting.
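The repeatability metric defined in the Methods (r = 1 − CV) can be sketched in a few lines; the ADC values below are illustrative, not from the study, and `repeatability` is a hypothetical helper name:

```python
import statistics

def repeatability(adc_values):
    """Repeatability r = 1 - CV, where CV = STD / m is the coefficient of
    variation of the ADC measurements across sessions or exams."""
    m = statistics.mean(adc_values)
    std = statistics.stdev(adc_values)  # sample standard deviation
    cv = std / m
    return 1.0 - cv

# Example: ADC (x10^-3 mm^2/s) from three longitudinal exams (illustrative)
adcs = [1.52, 1.48, 1.55]
r = repeatability(adcs)
print(f"CV = {1 - r:.3f}, repeatability r = {r:.3f}")
```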
The goal of radiation therapy is to deliver a therapeutic dose of radiation to target tissues while minimizing the risks of normal tissue complications. Until recently, the quality of a radiation treatment plan has been judged by physical quantities, i.e., dose and dose‐volume parameters thought to correlate with biological response, rather than by estimates of the biological outcome itself. Developments in our understanding of the advantages and limitations of existing dose‐response models are beginning to allow the incorporation of biological concepts and outcome data into the treatment planning process. Any use of dose‐response (outcome) models that involves feedback from a model during the treatment planning process is referred to here as biologically based treatment planning, which aims to design dose distributions that would produce the desired balance between tumor cure and normal tissue injury, based on knowledge of the biological properties of the particular tumor and surrounding normal tissues. Such a multidimensional problem is most appropriately addressed in the framework of inverse treatment planning presently employed for the optimization of IMRT plans, and will rely on models to describe relationships between dose distributions and biological outcomes. The feedback may be either passive/automated, in the case of inverse treatment planning, or involve active participation by the planner, in the case of forward treatment planning. Treatment planning tools that use biologically related models for plan optimization and/or evaluation are being introduced for clinical use. However, due to factors such as the limitations of models and available model parameters, the incomplete understanding of dose responses, and inadequate clinical data, the use of biologically based treatment planning systems represents a paradigm shift and can be potentially dangerous. There will be a steep learning curve for most planners.
This presentation will discuss various practical issues, including (1) dosimetric differences between biologically based and physical dose (dose‐volume) based treatment plan optimization and evaluation, and (2) general QA methodology for using biologically based models in treatment planning systems.
Learning Objectives:
1. Understand differences between the use of biologically‐based and dose‐volume based treatment plan optimization and evaluation;
2. Understand general QA methodology of a biologically based planning system.
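As an illustration of the kind of biologically related metric such planning systems expose, here is a minimal sketch of the generalized equivalent uniform dose (gEUD); the abstract does not name a specific model, so the choice of gEUD here is an assumption:

```python
def gEUD(dose_bins, volume_fractions, a):
    """Generalized equivalent uniform dose:
    gEUD = (sum_i v_i * D_i^a)^(1/a), over DVH bins (D_i, v_i).
    a -> 1 gives the mean dose; large positive 'a' emphasizes hot spots
    (serial normal tissues); large negative 'a' emphasizes cold spots
    (tumors), which is why gEUD-type terms appear in biological
    optimization objectives."""
    s = sum(v * d ** a for v, d in zip(volume_fractions, dose_bins))
    return s ** (1.0 / a)

# Illustrative: 60 Gy in half the volume, 20 Gy in the other half
print(gEUD([60.0, 20.0], [0.5, 0.5], a=1))    # mean dose, 40 Gy
print(gEUD([60.0, 20.0], [0.5, 0.5], a=-10))  # cold-spot dominated, near 20 Gy
```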
Purpose: The cell‐kill‐based equivalent uniform dose (cEUD) formula, which accounts for varying tumor volume and dose variability within the tumor, has been shown to be highly correlated with local control (LC) in head and neck (H&N) and non‐small‐cell lung cancer (NSCLC) datasets. However, previous fits resulted in high fitted surviving fractions at 2 Gy (SF2 ∼0.8), which is not radiobiologically credible. The purpose of this work is to apply a modification to the cEUD equation that models the tumor volume effect more realistically while obtaining radiobiologically meaningful estimates of SF2. Methods: We propose a modification of Niemierko's formula such that SF2 increases linearly with tumor volume, normalized to an arbitrary reference tumor size: SF2(1 + k·VT/Vref). The resulting proportionality constant was fitted using outcome data. We modeled two datasets collected at Washington University in Saint Louis: (A) 56 NSCLC patients who received 3D conformal radiotherapy with a median prescription dose of 70 Gy (60–84 Gy) and a median follow‐up of 32 months; (B) 80 H&N squamous cell carcinoma patients who received definitive IMRT with a median prescription dose of 70 Gy (66–72 Gy) and a median follow‐up of 19 months. We tested correlation with LC using the area under the receiver operating characteristic curve (AUC). Results: Using a proportionality constant of k = 0.05 and Vref = 10 cc, we obtained high correlations with outcome for both datasets (AUC_lung = 0.729; AUC_H&N = 0.758) while keeping SF2 at a meaningful value (≤0.5). However, the AUC values did not increase significantly compared to the simpler model. Conclusions: Introducing this modification into SF2 to account for increasing radioresistance with increasing tumor volume led to comparable correlations of cEUD with LC while allowing for a more reasonable range of SF2 values.
This modified model is expected to make more realistic predictions concerning the effects of cold spots or hot spots than the unmodified model. Partially supported by NIH R01 grant CA85181.
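A sketch of the modification, under the assumption that cEUD follows Niemierko's cell‐kill EUD definition, EUD = Dref · ln(Σ vi·SF2^(Di/Dref)) / ln(SF2); function names and dose values are illustrative:

```python
import math

def cEUD(dose_bins, vol_fracs, sf2, dref=2.0):
    """Cell-kill equivalent uniform dose:
    EUD = dref * ln(sum_i v_i * SF2^(D_i/dref)) / ln(SF2),
    over DVH bins (D_i, v_i). For a uniform dose this returns the dose itself."""
    s = sum(v * sf2 ** (d / dref) for v, d in zip(vol_fracs, dose_bins))
    return dref * math.log(s) / math.log(sf2)

def sf2_effective(sf2, tumor_vol_cc, k=0.05, vref_cc=10.0):
    """Volume-modified surviving fraction from the abstract:
    SF2_eff = SF2 * (1 + k * V_T / V_ref), with the fitted k = 0.05
    and Vref = 10 cc."""
    return sf2 * (1.0 + k * tumor_vol_cc / vref_cc)

# Illustrative: a 10% cold spot at 60 Gy in a 70 Gy tumor. A lower SF2
# makes the cold spot dominate cEUD more strongly.
print(cEUD([70.0, 60.0], [0.9, 0.1], sf2_effective(0.5, tumor_vol_cc=10.0)))
print(cEUD([70.0, 60.0], [0.9, 0.1], sf2_effective(0.5, tumor_vol_cc=100.0)))
```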
Purpose: Organic liquid scintillators are currently under investigation for use in proton dosimetry. The purpose of this work is to evaluate the water equivalence of these materials as a preliminary step to identify scintillators that are well‐suited to this purpose. Methods: Stopping powers were calculated for 0.001–1000 MeV protons in water, polystyrene, and two organic liquid scintillators: BC‐531 and OptiPhase ‘Hi‐Safe’ 3 at 0%, 25%, and 50% concentrations of water. Angular scatter was quantified by theta0, a characteristic multiple Coulomb scattering angle analogous to the standard deviation of a Gaussian distribution of proton angles relative to the incident beam axis. Theta0 was calculated as a function of depth over the range of 200 MeV protons in these materials. Results: Collisional stopping power in BC‐531 deviated from that in water by +44% to +1%; it remained within 6% from 2–600 MeV. OptiPhase ranged from +24% to −2%, with smaller deviations at increased water concentrations. At all concentrations, OptiPhase showed smaller deviations than polystyrene and BC‐531 and remained within 1% of water from 2–600 MeV. Theta0 was very similar for all materials, with deviations from water of 5 milliradians or less over the majority of the proton range. BC‐531 showed deviations of 10 milliradians or more in the last few millimeters of the range. OptiPhase showed smaller deviations than BC‐531 or polystyrene, and these deviations decreased with increasing water concentration. Conclusions: OptiPhase was found to be more water equivalent than BC‐531 or polystyrene in both stopping power and angular scatter, and increased water concentration improved both quantities. Large deviations in stopping power were found only below 2 MeV for any material, where the proton range is less than 0.1 millimeter. The deviations from water found in angular scatter were less significant, and probably too small to affect measurements.
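The abstract does not state which parameterization of theta0 was used; a common choice for protons is the Highland formula, sketched here as an assumption (illustrative, for a unit-charge particle traversing a thickness x of material with radiation length X0):

```python
import math

MP = 938.272  # proton rest mass-energy, MeV

def theta0_highland(T_mev, x_over_X0):
    """Characteristic multiple Coulomb scattering angle (radians) from the
    Highland parameterization:
    theta0 = (14.1 MeV / pv) * sqrt(x/X0) * [1 + (1/9) * log10(x/X0)].
    T_mev is the proton kinetic energy; pv = (pc)^2 / E is computed
    relativistically."""
    E = T_mev + MP                              # total energy, MeV
    pc = math.sqrt(T_mev * (T_mev + 2.0 * MP))  # momentum * c, MeV
    pv = pc * pc / E                            # beta * pc, MeV
    return (14.1 / pv) * math.sqrt(x_over_X0) * (1 + math.log10(x_over_X0) / 9)

# Illustrative: 200 MeV proton through 1 cm of water (X0 ~ 36.08 cm)
# gives theta0 of a few milliradians, the scale quoted in the Results.
print(theta0_highland(200.0, 1.0 / 36.08))
```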
Purpose: To evaluate the characteristics of commercial‐grade flatbed scanners and medical‐grade scanners for radiochromic EBT film dosimetry. Methods: Performance aspects of a Vidar Dosimetry Pro Advantage (Red), Epson 750 Pro, Microtek ArtixScan 1800f, and Microtek ScanMaker 8700 scanner for EBT2 Gafchromic film were evaluated in the categories of repeatability, maximum distinguishable optical density (OD) differentiation, OD variance, and dose curve characteristics. An OD step film from Stouffer Industries containing 31 steps ranging from 0.05 to 3.62 OD was used. EBT films were irradiated with doses ranging from 20 to 600 cGy in 6×6 cm2 field sizes and analyzed 24 hours later using RIT113 and Tomotherapy Film Analyzer software. Scans were performed in transmissive mode, landscape orientation, and 16‐bit depth. The mean and standard deviation of the analog‐to‐digital (A/D) scanner values were measured by selecting a 3×3 mm2 uniform area in the central region of each OD step from a total of 20 scans performed over several weeks. Repeatability was determined from the variance of OD step 0.38. Maximum distinguishable OD was defined as the last OD step whose range of A/D values did not overlap with that of its neighboring step. Results: Repeatability uncertainty ranged from 0.1% for the Vidar to 4% for the Epson. The average standard deviation of the OD steps ranged from 0.21% for the Vidar to 6.4% for the ArtixScan 1800f. Maximum distinguishable optical density ranged from 3.38 for the Vidar to 1.32 for the ScanMaker 8700. The A/D range of each OD step corresponds to a dose range; the dose ranges of the OD steps varied from 1% for the Vidar to 20% for the ScanMaker 8700. Conclusions: The Vidar exhibited a dose curve that utilized a broader range of OD values than the other scanners. The Vidar also exhibited a higher maximum distinguishable OD, smaller variance in repeatability, smaller A/D value deviation per OD step, and a shallower dose curve with respect to OD.
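The relationship between a 16-bit transmissive A/D value and net OD can be sketched as follows; taking the full-scale value 65535 as the unattenuated signal is an assumption for illustration (in practice the unattenuated signal is measured with a blank scan):

```python
import math

def optical_density(adc_value, adc_unattenuated=65535):
    """Optical density from a 16-bit transmissive scan:
    OD = log10(I0 / I), where I is the measured A/D value and I0 is the
    unattenuated (assumed full-scale) signal."""
    return math.log10(adc_unattenuated / adc_value)

# An A/D value at 10% transmission corresponds to OD = 1.0
print(optical_density(6553.5))
```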
Purpose: Radiologists may need to decide which type of imaging procedure is most appropriate for a particular patient. One factor relevant to this decision is the relative risk of secondary cancers due to each candidate procedure. Differences in the risk posed by each method are due not only to the total radiation dose imparted by each procedure, but also to the distribution of absorbed dose across the various organs: two imaging procedures with the same total radiation dose may pose different risks because of the differential sensitivity to radiation across organs. Methods: New methods of radiation dosimetry enable us to estimate the dose distribution across organs in individual patients. We propose a measure of the relative risk of two medical imaging procedures derived from the hazard function of cancer incidence. The relative risk measure is shown to be approximately equal to a weighted sum of the dose differences in each organ, with weights proportional to organ‐specific incidence rates. The measure is also sensitive to factors such as the patient's age at exposure to radiation, the attained age and gender, as well as the incidence characteristics of the population to which the patient belongs. We propose to quantify the effects of these factors using information from the publicly available SEER database for US patients as well as the LSS study of atomic bomb survivors. The method is illustrated by application to a study comparing chest and abdominal CT scans for a group of pediatric patients. Results: Fig. 1 shows a higher absolute relative risk for those exposed at younger ages, with chest scans being riskier for females and abdominal scans riskier for males. At higher ages, the relative risks are approximately equal. Conclusions: Relative risks can quantify risk comparisons between imaging procedures.
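The Methods describe the relative-risk measure as a weighted sum of per-organ dose differences with weights proportional to organ-specific incidence rates; a minimal sketch of that sum follows. The organ names, doses, and incidence rates are illustrative, not from the study:

```python
def relative_risk_measure(dose_a, dose_b, incidence_rate):
    """Approximate comparison of two imaging procedures:
    a weighted sum of per-organ dose differences, with weights
    proportional to organ-specific baseline incidence rates
    (normalized here so the weights sum to 1)."""
    organs = dose_a.keys()
    total_rate = sum(incidence_rate[o] for o in organs)
    return sum(incidence_rate[o] / total_rate * (dose_a[o] - dose_b[o])
               for o in organs)

# Illustrative organ doses (mGy) and incidence rates (arbitrary units)
chest = {"lung": 10.0, "breast": 12.0, "stomach": 1.0}
abdomen = {"lung": 2.0, "breast": 1.5, "stomach": 9.0}
rates = {"lung": 0.6, "breast": 0.9, "stomach": 0.3}
print(relative_risk_measure(chest, abdomen, rates))  # > 0: chest scan riskier here
```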
The statistical properties of normal strength hull structural steel plates (with a strength level of 235) of 50 mm thickness, manufactured in 1990–1991 at four European steelworks, were studied. The study focused on the yield strength, as the available literature indicates that, in terms of structural strength and design assessment, this is the most essential property of steel used for ship structures and, at the same time, the one whose values show the highest variability. Detailed statistical tests were performed on nearly 2,200 plates. The results of these analyses indicated that, for the sample examined herein, the mean yield strength was 308.3 MPa, the standard deviation was 25.24 MPa, the coefficient of variation was 0.0819 (8.19%), the (average) difference bias was 73.3 MPa, and the (average) ratio bias was 1.31. It has been shown that both the lognormal distribution LN(5.7276, 0.0817) and the normal distribution N(308.2589, 25.2444) provide an adequate representation of the yield strength of normal strength hull structural steel plates.
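The reported sample statistics can be checked against the fitted lognormal parameters using the standard moment formulas for LN(mu, sigma): mean = exp(mu + sigma²/2) and CV = sqrt(exp(sigma²) − 1):

```python
import math

# Consistency check between the fitted lognormal LN(5.7276, 0.0817)
# and the reported sample mean (308.3 MPa) and CV (0.0819).
mu, sigma = 5.7276, 0.0817
ln_mean = math.exp(mu + sigma ** 2 / 2)          # implied mean, MPa
ln_cv = math.sqrt(math.exp(sigma ** 2) - 1)      # implied coefficient of variation
print(f"lognormal mean = {ln_mean:.1f} MPa (reported 308.3 MPa)")
print(f"lognormal CV   = {ln_cv:.4f} (reported 0.0819)")
```

Both implied values agree with the reported sample statistics to well within rounding, which is consistent with the adequacy of the lognormal fit.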
Purpose:
A method to refine the implementation of an in vivo, adaptive proton therapy range verification methodology was investigated. Simulation experiments and in-phantom measurements were compared to validate the calibration procedure of a time-resolved diode dosimetry technique.
Methods:
A silicon diode array system has been developed and experimentally tested in phantom for passively scattered proton beam range verification by correlating properties of the detector signal to the water equivalent path length (WEPL). Implementing this system requires a set of calibration measurements to establish a beam-specific diode response to WEPL fit for the selected ‘scout’ beam in a solid water phantom. This process is both tedious, as it necessitates a separate set of measurements for every ‘scout’ beam that may be appropriate to a clinical case, and inconvenient, owing to limited access to the clinical beamline. If the diode response to WEPL relationship for a given ‘scout’ beam could instead be determined within a simulation environment, the applicability of this dosimetry technique would be greatly facilitated. Measurements for three ‘scout’ beams were therefore compared against detector response simulated with Monte Carlo methods using the Tool for Particle Simulation (TOPAS).
Results:
Detector response in water equivalent plastic was successfully validated against simulation for spread-out Bragg peaks of range 10 cm, 15 cm, and 21 cm (168 MeV, 177 MeV, and 210 MeV), with an adjusted R2 of 0.998.
Conclusion:
Feasibility has been shown for performing calibration of detector response for a given ‘scout’ beam through simulation for the time resolved diode dosimetry technique.
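The goodness-of-fit figure quoted in the Results (adjusted R2) can be sketched as follows; the measured/predicted values and parameter count below are illustrative:

```python
def adjusted_r2(measured, predicted, n_params):
    """Adjusted R^2 of a fit with n_params fitted parameters:
    1 - (1 - R^2) * (n - 1) / (n - n_params - 1),
    where R^2 = 1 - SS_res / SS_tot."""
    n = len(measured)
    mean = sum(measured) / n
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mean) ** 2 for m in measured)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_params - 1)

# Illustrative: near-perfect agreement of simulated with measured response
print(adjusted_r2([1, 2, 3, 4, 5], [1.1, 1.9, 3.0, 4.1, 4.9], n_params=2))
```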
We present an introductory overview of several challenging problems in the statistical characterization of turbulence. We provide examples from fluid turbulence in three and two dimensions, from the turbulent advection of passive scalars, from turbulence in the one-dimensional Burgers equation, and from fluid turbulence in the presence of polymer additives.
High-mobility wireless communication systems have attracted growing interest in recent years. For the deployment of these systems, one fundamental task is to build accurate and efficient channel models. In high-mobility scenarios, it has been shown that the standardized channel models, e.g., the IMT-Advanced (IMT-A) multiple-input multiple-output (MIMO) channel model, give noticeably longer stationary intervals than measured results, and the wide-sense stationary (WSS) assumption may be violated. Thus, non-stationarity should be introduced into the IMT-A MIMO channel model to mimic the channel characteristics more accurately without losing too much efficiency. In this paper, we analyze and compare the computational complexity of the original WSS and the non-stationary IMT-A MIMO channel models. Both the number of real operations and the simulation time are used as complexity metrics. Since introducing non-stationarity into the IMT-A MIMO channel model incurs extra computational complexity, several computation reduction methods are proposed to simplify the non-stationary IMT-A MIMO channel model while retaining acceptable accuracy. Statistical properties including the temporal autocorrelation function, spatial cross-correlation function, and stationary interval are chosen as the accuracy metrics for verification. It is shown that a tradeoff between computational complexity and modeling accuracy can be achieved by using the proposed complexity reduction methods.
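One of the accuracy metrics named above, the temporal autocorrelation function of a complex channel gain, can be sketched as follows; the channel sequence is illustrative, not from the IMT-A model:

```python
import cmath

def temporal_acf(h, lag):
    """Normalized temporal autocorrelation of a complex channel-gain
    sequence h: R(lag) = E[h(t) * conj(h(t + lag))] / E[|h|^2],
    estimated by time averaging."""
    n = len(h) - lag
    num = sum(h[t] * h[t + lag].conjugate() for t in range(n)) / n
    den = sum(abs(x) ** 2 for x in h) / len(h)
    return num / den

# Illustrative: a slowly rotating complex exponential stays fully correlated
h = [cmath.exp(1j * 0.1 * t) for t in range(100)]
print(abs(temporal_acf(h, 5)))
```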