Flexible AC transmission systems (FACTS) devices can help reduce power flow on overloaded lines, resulting in increased loadability of the power system, lower transmission line losses, improved stability and security and, ultimately, a more energy-efficient transmission system. To find suitable FACTS locations more easily and flexibly, this paper presents a graphical user interface (GUI) based on a genetic algorithm (GA) that is shown to find the optimal locations and sizing parameters of multi-type FACTS devices in large power systems. This user-friendly tool, called the FACTS Placement Toolbox, allows the user to pick a power system network, determine the GA settings and select the number and types of FACTS devices to be allocated in the network. The GA-based optimization process is then applied to obtain optimal locations and ratings of the selected FACTS devices so as to maximize the system's static loadability. Five FACTS device types are implemented: SVC, TCSC, TCVR, TCPST and UPFC. Simulation results on IEEE test networks with up to 300 buses show that the FACTS Placement Toolbox is effective and flexible enough to analyze a large number of scenarios with mixed types of FACTS devices optimally sited at multiple locations simultaneously.
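As an aside, the device-placement search described above can be illustrated with a minimal genetic algorithm. The sketch below is not the paper's toolbox: the candidate lines, the device count and the fitness function standing in for "system loadability" are all invented for illustration, and a real implementation would evaluate fitness with a power-flow solver.

```python
# Minimal GA sketch for siting devices on candidate lines.
# LINES, N_DEVICES and fitness() are hypothetical placeholders.
import random

LINES = list(range(20))   # candidate branches for device placement (assumed)
N_DEVICES = 3             # number of FACTS devices to site (assumed)

def fitness(placement):
    # Toy surrogate for loadability; a real tool would run a load-flow study.
    return sum(p * 0.1 + (p % 7) for p in placement)

def crossover(a, b):
    # Single-point crossover of two placements.
    cut = random.randrange(1, N_DEVICES)
    return a[:cut] + b[cut:]

def mutate(p, rate=0.1):
    # Randomly reassign each device to another line with probability `rate`.
    return [random.choice(LINES) if random.random() < rate else g for g in p]

def run_ga(pop_size=30, generations=50, seed=0):
    random.seed(seed)
    pop = [[random.choice(LINES) for _ in range(N_DEVICES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the fitter half
        children = [mutate(crossover(random.choice(elite),
                                     random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = run_ga()
print(best)  # best-found line indices for the three devices
```

The elitist selection plus crossover/mutation loop is the generic GA pattern; the paper's toolbox layers device-type and rating genes on top of the location genes.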
A graphical user interface (GUI) provides a visual bridge between a software application and its end users, through which they interact with each other. With the upgrading of mobile devices and the development of aesthetics, GUI visual effects have become increasingly attractive, and users pay more attention to the accessibility and usability of applications. However, this GUI complexity poses a great challenge to GUI implementation. According to our pilot study of crowdtesting bug reports, display issues such as text overlap, component occlusion and missing images often occur during GUI rendering on different devices due to software or hardware compatibility problems. They negatively influence app usability, resulting in a poor user experience. To detect these issues, we propose a fully automated approach, Nighthawk, based on deep learning for modelling the visual information of GUI screenshots. Nighthawk can detect GUIs with display issues and also locate the region of the issue in a given GUI to guide developers in fixing the bug. At the same time, training the model needs a large number of labeled buggy screenshots, which would require considerable manual effort to prepare. We therefore propose a heuristic-based auto-generation method to produce the labeled training data automatically. The evaluation demonstrates that Nighthawk achieves an average precision of 0.84 and recall of 0.84 in detecting UI display issues, and an average AP of 0.59 and AR of 0.60 in localizing them. We also evaluate Nighthawk on popular Android apps from Google Play and F-Droid, successfully uncovering 151 previously undetected UI display issues, 75 of which have been confirmed or fixed so far.
MXCuBE2: the dawn of MXCuBE Collaboration Oscarsson, Marcus; Beteva, Antonia; Flot, David ...
Journal of Synchrotron Radiation, March 2019, Volume 26, Issue 2
Journal Article
Peer-reviewed
Open access
MXCuBE2 is the second-generation evolution of the MXCuBE beamline control software, initially developed and used at the ESRF, the European Synchrotron. MXCuBE2 extends, in an intuitive graphical user interface (GUI), the functionalities and data collection methods available to users while keeping all previously available features and allowing for the straightforward incorporation of ongoing and future developments. MXCuBE2 introduces an extended abstraction layer that allows easy interfacing of any kind of macromolecular crystallography (MX) hardware component, whether a diffractometer, sample changer, detector or optical element. MXCuBE2 also works in strong synergy with the ISPyB Laboratory Information Management System, accessing the list of samples available for a particular experimental session and associating different data collection types with them, either from instructions contained in ISPyB or from user input via the MXCuBE2 GUI. The development of MXCuBE2 forms the core of a fruitful collaboration which brings together several European synchrotrons and a software development factory and, as such, defines a new paradigm for the development of beamline control platforms for the European MX user community.
The collaboration for the development of the MXCuBE2 control software for macromolecular crystallography beamlines is described.
From UI design image to GUI skeleton Chen, Chunyang; Su, Ting; Meng, Guozhu ...
2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE),
05/2018
Conference Proceeding
A GUI skeleton is the starting point for implementing a UI design image. To obtain a GUI skeleton from a UI design image, developers have to visually understand the UI elements and their spatial layout in the image, and then translate this understanding into proper GUI components and their compositions. Automating this visual understanding and translation would be beneficial for bootstrapping mobile GUI implementation, but it is a challenging task due to the diversity of UI designs and the complexity of the GUI skeletons to generate. Existing tools are rigid, as they depend on heuristically designed visual understanding and GUI generation rules. In this paper, we present a neural machine translator that combines recent advances in computer vision and machine translation to translate a UI design image into a GUI skeleton. Our translator learns to extract visual features in UI images, encode the spatial layout of these features, and generate GUI skeletons in a unified neural network framework, without requiring manual rule development. To train our translator, we developed an automated GUI exploration method that collects large-scale UI data from real-world applications. We carry out extensive experiments to evaluate the accuracy, generality and usefulness of our approach.
Purpose:
While Monte Carlo particle transport has proven useful in many areas (treatment head design, dose calculation, shielding design and imaging studies) and has been particularly important for proton therapy (due to its conformal dose distributions and the finite beam range in the patient), the available general-purpose Monte Carlo codes have been overly complex for most clinical medical physicists. The learning process carries large costs, not only in time but also in reliability. To address this issue, we developed an innovative proton Monte Carlo platform and tested the tool in a variety of proton therapy applications.
Methods:
Our approach was to take an already-established general-purpose Monte Carlo code and wrap and extend it to create a specialized, user-friendly tool for proton therapy. The resulting tool, TOol for PArticle Simulation (TOPAS), should make Monte Carlo simulation more readily available to research and clinical physicists. TOPAS can model a passive scattering or scanning beam treatment head, model a patient geometry based on computed tomography (CT) images, score dose, fluence, etc., save and restart a phase space, provide advanced graphics, and is fully four-dimensional (4D), handling variations in beam delivery and patient geometry during treatment. A custom-designed TOPAS parameter control system was placed at the heart of the code to meet requirements for ease of use, reliability and repeatability without sacrificing flexibility.
Results:
We built and tested the TOPAS code. We have shown that the TOPAS parameter system provides easy yet flexible control over all key simulation areas, such as geometry setup, particle source setup and scoring setup. Through design consistency, we have ensured that user experience gained in configuring one component, scorer or filter applies equally well to configuring any other. We have incorporated key lessons from safety management, proactively removing possible sources of user error such as line-ordering mistakes. We have modeled proton therapy treatment examples including the UCSF eye treatment head, the MGH stereotactic alignment in radiosurgery treatment head and the MGH gantry treatment heads in passive scattering and scanning modes, and we have demonstrated dose calculation based on patient-specific CT data. Initial validation results show agreement with measured data and demonstrate the capabilities of TOPAS in simulating beam delivery in 3D and 4D.
Conclusions:
We have demonstrated TOPAS accuracy and usability in a variety of proton therapy setups. As we are preparing to make this tool freely available for researchers in medical physics, we anticipate widespread use of this tool in the growing proton therapy community.
Objective: Nonmanual human-machine interfaces (HMIs) have been studied for wheelchair control with the aim of helping severely paralyzed individuals regain some mobility. The challenge is to produce control commands rapidly, accurately and in sufficient number, such as left and right turns, forward and backward motion, acceleration, deceleration and stopping. In this paper, a novel electrooculogram (EOG)-based HMI is proposed for wheelchair control. Methods: A total of 13 flashing buttons, each of which corresponds to a command, are presented in the graphical user interface. These buttons flash one by one in a predefined sequence. The user can select a button by blinking in sync with its flashes. The algorithm detects the eye blinks from a channel of vertical EOG data and determines the user's target button based on the synchronization between the detected blinks and the button's flashes. Results: For healthy subjects / patients with spinal cord injuries, the proposed HMI achieved an average accuracy of 96.7% / 91.7% and a response time of 3.53 s / 3.67 s with a zero false positive rate (FPR). Conclusion: Using one channel of vertical EOG signals associated with eye blinks, the proposed HMI can accurately provide sufficient commands with a satisfactory response time. Significance: The proposed HMI provides a novel nonmanual approach for severely paralyzed individuals to control a wheelchair. Compared with a recently established EOG-based HMI, the proposed HMI can generate more commands with higher accuracy, a lower FPR and fewer electrodes.
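The blink-to-flash synchronization idea in this abstract can be sketched in a few lines. This is not the paper's implementation: the flash schedule, blink timestamps and matching tolerance below are hypothetical, and the real system additionally detects blinks from the vertical EOG waveform itself.

```python
# Sketch: pick the button whose flash times best coincide with detected blinks.
# flash_schedule and the tolerance `tol` are invented for illustration.
def select_button(blink_times, flash_schedule, tol=0.25):
    """flash_schedule maps button name -> list of flash onset times (s).
    Returns the button with the most blink/flash coincidences."""
    def score(flashes):
        # Count blinks that land within `tol` seconds of some flash.
        return sum(any(abs(b - f) <= tol for f in flashes)
                   for b in blink_times)
    return max(flash_schedule, key=lambda btn: score(flash_schedule[btn]))

# Hypothetical one-by-one flashing of two commands:
flashes = {"LEFT": [0.0, 2.6, 5.2], "RIGHT": [1.3, 3.9, 6.5]}
blinks = [1.35, 3.95, 6.45]  # user blinked in sync with RIGHT's flashes
print(select_button(blinks, flashes))  # → RIGHT
```

Because each button flashes at distinct times in the predefined sequence, counting time-aligned blinks is enough to disambiguate the target, which is what keeps the false positive rate low.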
General-purpose graphical interfaces for data exploration are typically based on manual visualization and interaction specifications. While manual specification can be very expressive, it demands high effort to make effective decisions, thereby reducing exploratory speed. Instead, principled automated designs can increase exploratory speed, decrease learning effort, help avoid ineffective decisions, and therefore better support data analytics novices. Towards these goals, we present Keshif, a new systematic design for tabular data exploration. To summarize a given dataset, Keshif aggregates records by value within attribute summaries, and visualizes aggregate characteristics using a consistent design based on data types. To reveal data distribution details, Keshif features three complementary linked selections: highlighting, filtering and comparison. Keshif further increases expressiveness through aggregate metrics, absolute/part-of scale modes, calculated attributes and saved selections, all working in synchrony. Its automated design approach also simplifies the authoring of dashboards composed of summaries and individual records from raw data using fluid interaction. We show examples selected from 160+ datasets from diverse domains. Our study with novices shows that after exploring raw data for 15 minutes, participants reached close to 30 data insights on average, comparable to other studies with skilled users using more complex tools.
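The core data model described here, aggregating records by value within attribute summaries and linking a selection in one summary to every other, can be sketched minimally. The records, attribute names and values below are invented for illustration and are not from Keshif.

```python
# Sketch: per-attribute value aggregation plus linked filtering.
# The dataset and attributes ("genre", "year") are hypothetical.
from collections import Counter

records = [
    {"genre": "drama",  "year": 2019},
    {"genre": "comedy", "year": 2019},
    {"genre": "drama",  "year": 2021},
]

def summarize(rows, attr):
    # One "attribute summary": record counts aggregated by value.
    return Counter(r[attr] for r in rows)

def filtered(rows, attr, value):
    # Linked filtering: selecting a value narrows every other summary.
    return [r for r in rows if r[attr] == value]

print(summarize(records, "genre"))  # counts per genre over all records
print(summarize(filtered(records, "genre", "drama"), "year"))  # linked view
```

Highlighting and comparison in the real tool are further selections computed over these same aggregates rather than separate data structures.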
is a lightweight graphical user interface (GUI) for the , and program packages that serves both novice and experienced users in obtaining optimal processing and phasing results for X-ray, neutron and electron diffraction data. The design of the program enables data processing and phasing without command line usage, and supports advanced command flows in a simple, user-modifiable and user-extensible way. The GUI supplies graphical information based on the tabular log output of the programs, which is more intuitive, comprehensible and efficient than text output can be.
Objective: A challenging task for an electroencephalography (EEG)-based asynchronous brain-computer interface (BCI) is to effectively distinguish between the idle state and the control state while maintaining a short response time and a high accuracy when commands are issued in the control state. This study proposes a novel hybrid asynchronous BCI system based on a combination of steady-state visual evoked potentials (SSVEPs) in the EEG signal and blink-related electrooculography (EOG) signals. Methods: Twelve buttons corresponding to 12 characters are included in the graphical user interface (GUI). These buttons flicker at different fixed frequencies and phases to evoke SSVEPs and are simultaneously highlighted by changing their sizes. The user can select a character by focusing on its frequency-phase stimulus and simultaneously blinking in accordance with its highlighting while his/her EEG and EOG signals are recorded. A multifrequency band-based canonical correlation analysis (CCA) method is applied to the EEG data to detect the evoked SSVEPs, whereas the EOG data are analyzed to identify the user's blinks. Finally, the target character is identified based on the SSVEP and blink detection results. Results: Ten healthy subjects participated in our experiments and achieved an average information transfer rate (ITR) of 105.52 bits/min, an average accuracy of 95.42%, an average response time of 1.34 s and an average false positive rate (FPR) of 0.8%. Conclusion: The proposed BCI generates multiple commands with a high ITR and a low FPR. Significance: The hybrid asynchronous BCI has great potential for practical applications in communication and control.
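The SSVEP-detection step described above can be illustrated with a deliberately simplified stand-in: instead of the paper's multifrequency band-based CCA over multichannel EEG, the sketch correlates a single-channel signal with sine and cosine references at each candidate stimulus frequency and picks the best match. The sampling rate, candidate frequencies and synthetic signal are all assumptions.

```python
# Simplified stand-in for CCA-based SSVEP frequency detection (illustrative
# only; real CCA uses multichannel data and multiple reference harmonics).
import math

FS = 250.0  # sampling rate in Hz (assumed)

def correlate(sig, ref):
    # Pearson correlation between a signal and a reference waveform.
    n = len(sig)
    ms, mr = sum(sig) / n, sum(ref) / n
    num = sum((s - ms) * (r - mr) for s, r in zip(sig, ref))
    den = math.sqrt(sum((s - ms) ** 2 for s in sig) *
                    sum((r - mr) ** 2 for r in ref))
    return num / den if den else 0.0

def detect_frequency(sig, candidates):
    # Score each candidate stimulus frequency against sine/cosine references.
    def score(f):
        t = [i / FS for i in range(len(sig))]
        sin_ref = [math.sin(2 * math.pi * f * x) for x in t]
        cos_ref = [math.cos(2 * math.pi * f * x) for x in t]
        return max(abs(correlate(sig, sin_ref)), abs(correlate(sig, cos_ref)))
    return max(candidates, key=score)

# Synthetic 1 s "SSVEP" response at 10 Hz:
signal = [math.sin(2 * math.pi * 10.0 * i / FS) for i in range(int(FS))]
print(detect_frequency(signal, [8.0, 10.0, 12.0]))  # → 10.0
```

Full CCA extends this idea by finding the linear combination of EEG channels that maximizes correlation with a bank of reference harmonics, which is what makes the frequency-phase coding robust in practice.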
Although the development and widespread adoption of software bots has occurred in just a few years, bots have taken on many diverse tasks and roles. This article discusses current bot technology and presents a practical case study on how to use bots in software engineering.