Whether glycemic variation independently predicts ischemic stroke in type 2 diabetic patients remains unclear. This study examined whether visit-to-visit variation in fasting plasma glucose (FPG), represented by the coefficient of variation (CV), predicts ischemic stroke independently of glycated hemoglobin (HbA1c) and other conventional risk factors in such patients.
Type 2 diabetic patients (n = 28,354) aged ≥30 years and free of ischemic stroke who were enrolled in the National Diabetes Care Management Program between 2002 and 2004 were included, and related factors were analyzed with extended Cox proportional hazards regression models accounting for competing risks of stroke incidence.
After an average 7.5 years of follow-up, there were 2,250 incident cases of ischemic stroke, giving a crude incidence rate of 10.56/1,000 person-years (11.64 for men, 9.63 for women). After multivariate adjustment, hazard ratios for the second, third and fourth versus first FPG-CV quartile were 1.11 (0.98, 1.25), 1.22 (1.08, 1.38) and 1.27 (1.12, 1.43), respectively, without considering HbA1c, and 1.09 (0.96, 1.23), 1.16 (1.03, 1.31) and 1.17 (1.03, 1.32), respectively, after considering HbA1c.
Beyond HbA1c, FPG-CV was a potent predictor of ischemic stroke in type 2 diabetic patients, suggesting that therapeutic strategies now in use should be rated for their potential to (1) minimize glucose fluctuations and (2) reduce HbA1c levels in type 2 diabetic patients to prevent ischemic stroke.
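The variability measure above can be made concrete. A minimal sketch of computing visit-to-visit FPG-CV, assuming the conventional definition CV = 100 × SD / mean (the study does not state whether sample or population SD was used, so `fpg_cv` is a hypothetical helper):

```python
from statistics import mean, stdev

def fpg_cv(fpg_readings):
    """Coefficient of variation (%) of visit-to-visit fasting plasma
    glucose readings: 100 * SD / mean. Uses the sample SD; the study's
    exact computation may differ."""
    return 100.0 * stdev(fpg_readings) / mean(fpg_readings)

# Example: FPG (mg/dL) from five clinic visits of one patient
readings = [112, 126, 98, 140, 118]
cv = fpg_cv(readings)
```

In the study, patients would then be grouped into quartiles of this CV value before fitting the Cox models.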
A cloud mashup is composed of multiple services with shared datasets and integrated functionalities. For example, the Elastic Compute Cloud (EC2) provided by Amazon Web Services (AWS), the authentication and authorization services provided by Facebook, and the Maps service provided by Google can all be mashed up to deliver a real-time, personalized driving-route recommendation service. To discover qualified services and compose them with guaranteed quality of service (QoS), we propose an integrated skyline query processing method for building cloud mashup applications. We use a similarity test to achieve an optimal localized skyline. This mashup method scales well with the growing number of cloud sites involved in mashup applications. Faster skyline selection, reduced composition time, dataset sharing, and resource integration assure QoS over multiple clouds. We experiment with the quality of Web service (QWS) benchmark over 10,000 Web services along six QoS dimensions. By utilizing block elimination, data-space partitioning, and service similarity pruning, the skyline process is shortened threefold compared with two state-of-the-art methods.
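The core of any skyline query is the Pareto-dominance test over QoS vectors. The naive filter below illustrates that idea only, under the assumption that lower values are better in every dimension; the paper's method layers block elimination, data-space partitioning, and similarity pruning on top of this, and all names here are illustrative:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every QoS dimension and
    strictly better in at least one (lower values assumed better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(services):
    """Naive O(n^2) skyline: keep services not dominated by any other
    service. A sketch of the concept, not the paper's algorithm."""
    return [s for s in services
            if not any(dominates(t, s) for t in services if t is not s)]

# Toy QoS vectors: (response time in ms, cost in cents)
qos = [(120, 3), (80, 9), (100, 4), (150, 6)]
pareto = skyline(qos)  # (150, 6) is dominated by (100, 4) and drops out
```

Each cloud site can compute such a localized skyline independently, which is what makes the overall query distributable.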
Predicting grid performance is a complex task because heterogeneous resource nodes are involved in a distributed environment. Long-running workloads on a grid are even harder to predict due to heavy load fluctuations. In this paper, we use a Kalman filter to minimize prediction errors and apply a Savitzky-Golay filter to train a sequence of confidence windows. The purpose is to keep the prediction process from being disturbed by load fluctuations. We present a new adaptive hybrid model (AHModel) for load prediction guided by trained confidence windows. We test the effectiveness of this new prediction scheme with real-life workload traces on AuverGrid and Grid5000 in France. Both theoretical and experimental results are reported. As the lookahead span increases from 10 to 50 steps (5 minutes per step), the AHModel predicts grid workload with a mean-square error (MSE) of 0.04-0.73 percent, compared with 2.54-30.2 percent for the static point-value autoregression (AR) prediction method. The significant gain in prediction accuracy makes the new model very attractive for predicting grid performance. The model proved especially effective for predicting large workloads that demand very long execution times, such as those exceeding 4 hours on Grid5000 over 5,000 processors. With minor changes to some system parameters, the AHModel can be applied to other computational grids as well. At the end, we discuss extended research issues and tool development for grid performance prediction.
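As a rough illustration of the filtering step, here is a minimal scalar Kalman filter for smoothing a noisy load series, assuming a random-walk state model with assumed noise parameters; the paper's AHModel additionally trains confidence windows with a Savitzky-Golay filter, which is not reproduced here:

```python
def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter for a noisy load signal.
    q: process-noise variance, r: measurement-noise variance,
    x0/p0: initial state estimate and its variance (all assumed)."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                # predict: state carries over, uncertainty grows
        k = p / (p + r)          # Kalman gain balances prior vs. measurement
        x = x + k * (z - x)      # update estimate with the measurement residual
        p = (1.0 - k) * p        # posterior uncertainty shrinks
        estimates.append(x)
    return estimates
```

Feeding per-step load samples through such a filter yields a smoothed trajectory whose residual variance can then inform confidence windows around each prediction.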
This paper reports the design principles and evaluation results of a new experimental hybrid intrusion detection system (HIDS). The hybrid system combines the low false-positive rate of a signature-based intrusion detection system (IDS) with the ability of an anomaly detection system (ADS) to detect novel, unknown attacks. By mining anomalous traffic episodes from Internet connections, we build an ADS that detects anomalies beyond the capabilities of signature-based systems such as SNORT or Bro. A weighted signature generation scheme is developed to integrate the ADS with SNORT by extracting signatures from detected anomalies. The HIDS extracts signatures from the output of the ADS and adds them to the SNORT signature database for fast and accurate intrusion detection. Testing our HIDS scheme over real-life Internet trace data mixed with 10 days of the Massachusetts Institute of Technology/Lincoln Laboratory (MIT/LL) attack data set, our experiments show a 60 percent detection rate for the HIDS, compared with 30 percent and 22 percent for the SNORT and Bro systems, respectively. This sharp increase in detection rate is obtained with less than 3 percent false alarms. The signatures generated by the ADS improve SNORT performance by 33 percent. The HIDS approach demonstrates the viability of detecting intrusions and anomalies simultaneously through automated data mining and signature generation over Internet connection episodes.
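A toy frequency-based detector conveys the flavor of flagging anomalous episodes in a trace. This is an illustration under our own simplifying assumption (rarity of an episode type signals anomaly), not the paper's actual mining or signature-generation algorithm:

```python
from collections import Counter

def anomalous_episodes(episodes, threshold=0.05):
    """Flag connection-episode types whose relative frequency falls
    below a rarity threshold. Illustrative only: a real ADS mines far
    richer features than raw episode frequency."""
    counts = Counter(episodes)
    n = len(episodes)
    return {e for e, c in counts.items() if c / n < threshold}

# Toy trace: one rare episode type among routine traffic
trace = ["http"] * 90 + ["ssh"] * 9 + ["telnet"]
rare = anomalous_episodes(trace)  # {"telnet"}
```

In the paper's pipeline, episodes flagged this way would then feed the weighted signature generator, which turns them into new SNORT rules.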
The digital twin city technique maps massive city environmental and social data onto a three-dimensional virtual model. It presents the operational status of the physical world and supports intelligent city governance. However, inefficient utilization of distributed data resources and a lack of sharing and collaboration among multiple departments restrict the data foundation of digital twin city construction. This research proposes a new cross-domain spatio-temporal data fusion framework for supporting complex urban governance. It integrates the heterogeneous urban information generated and stored by different government departments using multiple information-fusion techniques. A specified geographic base reflecting the real city status is established, using geographical entities with unified addresses as identifiers to encapsulate urban element information. We introduce a comprehensive urban spatio-temporal data center construction process, which has already supported multiple urban governance projects. The two distinct advantages of this data fusion system are: 1) the proposed Bert+PtrNet+ESIM-based address mapping method associates urban element information with its corresponding geographic entities with a 99.3% F1-score on a real-world dataset; 2) the operation of the Wuhan spatio-temporal data center illustrates the capability of our framework for complex urban governance, significantly improving the efficiency of urban management and services. This integrated system engineering provides reference and inspiration for further spatio-temporal data management, contributing to future social governance on digital twin city platforms.
This study aimed to compare the efficacy of laser acupuncture (LA) treatment with that of placebo LA treatment in patients with idiopathic, mild-to-moderate carpal tunnel syndrome (CTS), as measured by subjective symptom assessments and objective changes in nerve conduction studies (NCSs).
A randomized, single-blinded, controlled study.
A teaching hospital in Taichung, Taiwan, between March 2013 and November 2013.
84 consecutive treatment-naive patients with CTS.
Participants were randomly divided into two treatment arms: (1) LA, administered at traditional Chinese acu-points on the affected side, once a day, 5 times a week, for 4 weeks (N = 43); and (2) placebo LA, administered using the same device and protocol, with the LA device switched off (N = 41).
Patients completed the Global Symptom Score (GSS) at baseline and at two and four weeks. The primary outcome was the change in GSS. NCSs were performed at baseline and repeated at the end of the study as a secondary outcome.
There was a significantly greater reduction in GSS in the LA group than in the placebo group at week 2 (-9.30 ± 4.94 vs. -2.29 ± 4.27, respectively, P < 0.01) and at week 4 (-10.67 ± 5.98 vs. -2.90 ± 5.61, respectively, P < 0.01). However, NCSs did not show a significant difference between the two groups.
LA may be more effective than placebo LA for treating mild-to-moderate idiopathic CTS in terms of subjective measurement. For patients who fear needle-based treatments, such as acupuncture or local injections, or who do not opt for early surgical decompression, LA can be considered an effective alternative form of acu-point stimulation therapy.
Artificial membrane permeability measurement is a potentially high-throughput and low-cost alternative for in vitro assessment of drug absorption potential. It would be an ideal screening/profiling tool in the lead generation program of drug discovery research if proven generally applicable for classifying drug absorption potential and advantageous over other in vitro or in silico methods. This study provides an in-depth evaluation of the method in close comparison to Caco-2, Log D, Log P, polar surface area (PSA), and quantitative structure-property relationship (QSPR) predictions using a large and diverse compound set. The accuracy of artificial membrane permeability in assessing drug absorption was comparable to Caco-2 but significantly better than Log P, Log D, PSA, and QSPR predictions. This study also explored the artificial membrane composition by adopting a hydrophilic filter membrane as the artificial membrane (lecithin–dodecane) support. The hydrophilic filter membrane increased the permeation rate significantly and reduced the transport time to 2 h or less, compared with over 10 h when a hydrophobic filter membrane is used.
In large-scale computational grids, discovery of heterogeneous resources as a working group is crucial to achieving scalable performance. This paper presents a resource management scheme comprising a hierarchical cycloid overlay architecture and resource clustering and discovery algorithms for wide-area distributed grid systems. We establish program/data locality by clustering resources based on their physical proximity and functional matching with user applications. We further develop a dynamism-resilient resource management algorithm, a cluster-token forwarding algorithm, and deadline-driven resource management algorithms. The advantage of the proposed scheme lies in low-overhead, fast, and dynamism-resilient multiresource discovery. The paper presents the scheme, new performance metrics, and experimental simulation results. The scheme compares favorably with other resource discovery methods in static and dynamic grid applications. In particular, it supports efficient resource clustering, reduces communication cost, and enhances the resource discovery success rate in promoting large-scale distributed supercomputing applications.
The scheduling of multitask jobs on clouds is an NP-hard problem. The problem becomes even harder when complex workflows are executed on elastic clouds such as Amazon EC2 or IBM RC2. The main difficulty lies in the large search space and high overhead of generating optimal schedules, especially for real-time applications with dynamic workloads. In this work, a new iterative ordinal optimization (IOO) method is proposed. The ordinal optimization method is applied in each iteration to achieve suboptimal schedules; IOO aims to generate more efficient schedules from a global perspective over a long period. We prove through overhead analysis the advantages in time and space efficiency of the IOO method, which is designed to adapt to system dynamism while yielding suboptimal performance. In experiments on the IBM RC2 cloud, we execute 20,000 tasks of the LIGO (Laser Interferometer Gravitational-wave Observatory) verification workflow on 128 virtual machines. The IOO schedule is generated in less than 1,000 seconds, whereas Monte Carlo simulation takes 27.6 hours, roughly 100 times longer, to yield an optimal schedule. The IOO-optimized schedule achieves a throughput of 1,100 tasks/sec with a 7 GB memory demand, whereas the Monte Carlo method yields a 60 percent lower throughput and a 70 percent higher memory demand. Our LIGO experimental results clearly demonstrate the advantage of IOO-based workflow scheduling over the traditional blind-pick, ordinal optimization, and Monte Carlo methods. These numerical results are also validated by the theoretical complexity and overhead analysis provided.
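The selection idea behind ordinal optimization can be sketched in a few lines: rank all candidate schedules with a cheap, noisy rough model, then spend the expensive evaluation budget only on the top-s. This is a generic illustration under assumed cost models; the function names and toy costs are ours, not the paper's IOO implementation:

```python
import random

def ordinal_select(candidates, rough_eval, exact_eval, s=10):
    """Ordinal optimization sketch: order candidates by a cheap, noisy
    rough model, then apply the costly exact model only to the top-s.
    Returns a 'good enough' candidate with high probability."""
    ranked = sorted(candidates, key=rough_eval)
    selected = ranked[:s]
    return min(selected, key=exact_eval)

# Toy use: candidates are integers, exact cost is x^2,
# the rough model adds bounded noise in [-100, 100]
random.seed(0)
cands = list(range(-50, 51))
best = ordinal_select(cands,
                      rough_eval=lambda x: x * x + random.uniform(-100, 100),
                      exact_eval=lambda x: x * x,
                      s=10)
# best is guaranteed near-optimal here: its exact cost is at most 225
```

The payoff is exactly the trade-off the abstract describes: far fewer expensive evaluations (here, exact_eval runs only s times) in exchange for a provably good rather than provably optimal result.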
Top-down control underlies our ability to attend to relevant stimuli while ignoring irrelevant, distracting stimuli and is a critical process for prioritizing information in working memory (WM). Prior work has demonstrated that top-down biasing signals modulate sensory-selective cortical areas during WM, and that the large-scale organization of the brain reconfigures due to WM demands alone; however, it is not yet understood how brain networks reconfigure between the processing of relevant versus irrelevant information in the service of WM.
Here, we investigated the effects of task goals on brain network organization while participants performed a WM task that required them to detect repetitions (e.g., 0-back or 1-back) under varying levels of visual interference (e.g., distracting, irrelevant stimuli). We quantified changes in network modularity (a measure of brain sub-network segregation) that occurred depending on overall WM task difficulty as well as on trial-level task goals for each stimulus during the task conditions (e.g., relevant or irrelevant).
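The abstract does not spell out the modularity measure; the standard definition consistent with this usage is Newman's Q,

```latex
Q = \frac{1}{2m} \sum_{ij} \left( A_{ij} - \frac{k_i k_j}{2m} \right) \delta(c_i, c_j)
```

where \(A_{ij}\) is the (functional) connectivity matrix, \(k_i\) is the degree of node \(i\), \(m\) is the total edge weight, and \(\delta(c_i, c_j) = 1\) when nodes \(i\) and \(j\) belong to the same sub-network. Lower Q indicates weaker segregation, i.e., more integration between sub-networks.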
First, we replicated prior work and found that whole-brain modularity was lower during the more demanding WM task conditions compared to a baseline condition. Further, during the WM conditions with varying task goals, brain modularity was selectively lower during goal-directed processing of task-relevant stimuli to be remembered for WM performance compared to processing of distracting, irrelevant stimuli. Follow-up analyses indicated that this effect of task goals was most pronounced in default mode and visual sub-networks. Finally, we examined the behavioral relevance of these changes in modularity and found that individuals with lower modularity for relevant trials had faster WM task performance.
These results suggest that brain networks can dynamically reconfigure to adopt a more integrated organization with greater communication between sub-networks that supports the goal-directed processing of relevant information and guides WM.