In the 21st century, we live in a world packed with closed-circuit video cameras, facial recognition systems, radio frequency identification chips, electronic toll collectors, smartphones with location tracking, and widespread monitoring of our electronic communications. As deeply as the industrial revolution upended 20th century social norms and political structures, so too has modern information technology been a revolution, giving governments and large private corporations vast power to keep track of, manipulate, and potentially repress entire populations. China offers some examples in this area, but even democratically elected governments have shown a tendency to want to digitally profile and analyze their citizens without sufficient respect for individual privacy. And just as rampant industrialization had to be reined in to protect human rights and individual dignity, information technology and digital systems must be controlled to prevent abuse and exploitation. This "boundary setting" can come none too soon, as rapid advances in artificial intelligence allow an even greater ability to process and categorize the vast amounts of data generated by all our electronic devices.
Autoencoders are used to compress data and learn to reconstruct it reliably. By comparing the reconstruction error against a set threshold, they can detect anomalies in unseen data that does not match the patterns learned from the training samples. In this work, we investigate the use of convolutional autoencoders for visual quality inspection, where images of formed sheet metal from a real production line are inspected for cracks and wrinkle formation. This approach tackles the problem of needing enough defective samples to attain reliable detection accuracy.
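The thresholding logic described above can be sketched as follows. For brevity this uses a linear (PCA-based) autoencoder on synthetic vectors as a stand-in for the paper's convolutional autoencoder on sheet-metal images, so all data, dimensions, and the 99th-percentile threshold are illustrative assumptions, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained autoencoder: a linear encoder/decoder obtained
# via PCA (a linear autoencoder is equivalent to PCA up to rotation).
# In the paper's setting this would be a convolutional network on images.
train = rng.normal(size=(500, 32))           # "defect-free" training samples
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
W = Vt[:4]                                   # 4-dimensional bottleneck

def reconstruct(x):
    z = (x - mean) @ W.T                     # encode
    return z @ W + mean                      # decode

def recon_error(x):
    # per-sample mean squared reconstruction error
    return np.mean((x - reconstruct(x)) ** 2, axis=-1)

# Set the threshold from the training error distribution, e.g. its
# 99th percentile (the choice of percentile is an assumption here).
threshold = np.percentile(recon_error(train), 99)

def is_anomaly(x):
    return recon_error(x) > threshold

normal_sample = rng.normal(size=(1, 32))
defect_sample = rng.normal(size=(1, 32)) + 5.0   # off-distribution -> poorly reconstructed
print(is_anomaly(normal_sample)[0], is_anomaly(defect_sample)[0])
```

Because the autoencoder is trained only on defect-free parts, defective samples reconstruct poorly and exceed the threshold, which is what lets the method work without a large set of defective training images.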
Background
Prolonged length of stay (LOS) and post‐acute care after percutaneous coronary intervention (PCI) are common and costly. Risk models for predicting prolonged LOS and post‐acute care have limited accuracy. Our goal was to develop and validate models using artificial neural networks (ANN) to predict prolonged LOS (≥7 days) and the need for post‐acute care after PCI.
Methods
We defined prolonged LOS as ≥7 days and post‐acute care as discharge to extended care, a transitional care unit, rehabilitation, another acute care hospital, a nursing home, or hospice care. Data from 22 675 patients who presented with acute coronary syndrome (ACS) and underwent PCI were shuffled and split into a derivation set (75% of the dataset) and a validation set (25%). Calibration plots were used to examine the overall predictive performance of the multilayer perceptron (MLP) by plotting observed and expected risk deciles and fitting a lowess smoother to the data. Classification accuracy was assessed with receiver‐operating characteristic (ROC) analysis and the area under the ROC curve (AUC).
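The two validation steps described here, decile-based calibration and ROC AUC, can be sketched on synthetic data; the arrays below are illustrative stand-ins, not the paper's patient data, and the lowess smoothing step is omitted:

```python
import numpy as np

def roc_auc(y_true, y_score):
    """AUC via the rank-sum (Mann-Whitney U) formulation: the probability
    that a randomly chosen positive is scored above a randomly chosen
    negative (assumes continuous scores, i.e. no ties)."""
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score), dtype=float)
    ranks[order] = np.arange(1, len(y_score) + 1)
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def calibration_deciles(y_true, y_prob, n_bins=10):
    """Observed vs. expected event rate per risk decile, the raw points
    of a calibration plot (before any lowess smoothing)."""
    edges = np.quantile(y_prob, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.searchsorted(edges[1:-1], y_prob, side="right"), 0, n_bins - 1)
    expected = np.array([y_prob[bins == b].mean() for b in range(n_bins)])
    observed = np.array([y_true[bins == b].mean() for b in range(n_bins)])
    return expected, observed

rng = np.random.default_rng(1)
p = rng.uniform(size=2000)                       # predicted probabilities
y = (rng.uniform(size=2000) < p).astype(int)     # well-calibrated synthetic outcomes
print(f"AUC = {roc_auc(y, p):.3f}")
```

For a well-calibrated model the observed rate in each decile tracks the mean predicted probability, which is what the calibration plots in the Methods are checking visually.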
Results
Our MLP‐based model predicted prolonged LOS with an accuracy of 90.87% and 88.36% in training and test sets, respectively. The post‐acute care model had an accuracy of 90.22% and 86.31% in training and test sets, respectively. This accuracy was achieved with quick convergence. Predicted probabilities from the MLP models showed good (prolonged LOS) to excellent calibration (post‐acute care).
Conclusions
Our ANN‐based models accurately predicted prolonged LOS and the need for post‐acute care. Larger studies to test replicability and longitudinal studies to provide evidence of impact are needed to establish these models in current PCI practice.
This research applies artificial intelligence methods, specifically the Random Forest algorithm and the ANFIS method, to identify the key factors that influence the success of students in vocational schools. Identifying these factors is useful not only for improving curricula and practice but also for guiding students toward mastering the material more effectively. The main goal of this research is to examine these factors in depth using the two methods. The input factors are mutually independent of one another, but each affects the output variable. The input variables considered are prior programming knowledge and pre-exam requirements. The factor found to have the greatest influence, pre-exam obligations, was then investigated in more detail using the ANFIS method, with the factor broken down into several input parameters. The results underline the value of combining the Random Forest algorithm and the ANFIS method in the statistical evaluation and assessment of student achievement in vocational schools. The study provides useful guidelines for improving education and practice in vocational schools to optimize educational outcomes.
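The first stage of this workflow, ranking input factors by influence with a Random Forest, can be sketched as follows. The feature names, synthetic data, and effect sizes are hypothetical stand-ins for the paper's student dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 400
# Hypothetical, synthetic stand-ins for the paper's two input factors:
prior_knowledge = rng.uniform(0, 10, n)      # prior programming knowledge score
pretest = rng.uniform(0, 10, n)              # pre-exam requirement score
# Synthetic outcome in which the pre-exam factor carries most of the signal
success = (0.8 * pretest + 0.2 * prior_knowledge
           + rng.normal(0, 1, n) > 5).astype(int)

X = np.column_stack([prior_knowledge, pretest])
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, success)

# Mean-decrease-in-impurity importances rank the factors; the dominant
# one would then be examined in more detail (in the paper, with ANFIS).
for name, imp in zip(["prior_knowledge", "pretest"], rf.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

On this synthetic data the pre-exam factor receives the larger importance, mirroring the paper's step of selecting the most influential factor before the finer-grained ANFIS analysis.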
Software Engineering for Machine Learning: A Case Study Amershi, Saleema; Begel, Andrew; Bird, Christian ...
2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP),
05/2019
Conference Proceeding
Recent advances in machine learning have stimulated widespread interest within the Information Technology sector on integrating AI capabilities into software and services. This goal has forced organizations to evolve their development processes. We report on a study that we conducted on observing software teams at Microsoft as they develop AI-based applications. We consider a nine-stage workflow process informed by prior experiences developing AI applications (e.g., search and NLP) and data science tools (e.g. application diagnostics and bug reporting). We found that various Microsoft teams have integrated this workflow into preexisting, well-evolved, Agile-like software engineering processes, providing insights about several essential engineering challenges that organizations may face in creating large-scale AI solutions for the marketplace. We collected some best practices from Microsoft teams to address these challenges. In addition, we have identified three aspects of the AI domain that make it fundamentally different from prior software application domains: 1) discovering, managing, and versioning the data needed for machine learning applications is much more complex and difficult than other types of software engineering, 2) model customization and model reuse require very different skills than are typically found in software teams, and 3) AI components are more difficult to handle as distinct modules than traditional software components - models may be "entangled" in complex ways and experience non-monotonic error behavior. We believe that the lessons learned by Microsoft teams will be valuable to other organizations.
“Hey AI, show me the money!” Sure, the impact of artificial intelligence (AI) on industries and society is huge. However, we are still waiting for that steady stream of lucrative success stories in industrial AI.
Artificial intelligence (AI) using deep learning is revolutionizing several fields, including medicine, with a wide range of applications. Available since the end of 2022, ChatGPT is a conversational AI, or "chatbot", that uses artificial intelligence to converse with its users on any topic. Through the example of hydroxychloroquine (HCQ), we discuss its use by patients, clinicians, and researchers, and examine its performance and limitations, particularly with respect to algorithmic bias. While AI tools using deep learning do not (at least for the moment) dispense with the expertise and experience of a clinician, they have the potential to improve or simplify our daily practice.
The Corona Virus Disease 2019 (COVID-19) pandemic has taught us many valuable lessons regarding the importance of our physical and mental health. Even with so many technological advancements, we still lag in developing a system that can fully digitalize the medical data of each individual and make it readily accessible to both the patient and the health worker at any point in time. Moreover, there is also no way for the government to verify the legitimacy of a particular clinic. This study merges modern technology with traditional approaches, highlighting a scenario where artificial intelligence (AI) meets traditional Chinese medicine (TCM) and proposing a way to advance the conventional approaches. The main objective of our research is to provide a one-stop platform for the government, doctors, nurses, and patients to access their data effortlessly. The proposed portal will also check doctors’ authenticity. Data is one of the most critical assets of an organization, so a data breach can put users' lives at risk; data security is therefore of primary importance and must be prioritized. The proposed methodology is based on cloud computing technology, which assures the security of the data and avoids breaches. The study also accounts for the difficulties encountered in creating such an infrastructure in the cloud and overcomes the hurdles faced during the project, keeping enough room for possible future innovations. To summarize, this study focuses on the digitalization of medical data and suggests some possible ways to achieve it, along with related aspects such as security and potential digitalization difficulties.
Artificial Intelligence in Oncological Hybrid Imaging Feuerecker, Benedikt; Heimer, Maurice M; Geyer, Thomas ...
RöFo : Fortschritte auf dem Gebiet der Röntgenstrahlen und der bildgebende Verfahren,
02/2023, Volume: 195, Issue: 2
Journal Article
Peer reviewed
Open access
Artificial intelligence (AI) applications have become increasingly relevant across a broad spectrum of settings in medical imaging. Due to the large amount of imaging data that is generated in oncological hybrid imaging, AI applications are desirable for lesion detection and characterization in primary staging, therapy monitoring, and recurrence detection. Given the rapid developments in machine learning (ML) and deep learning (DL) methods, AI will have a significant impact on the imaging workflow and will eventually improve clinical decision making and outcomes.
The first part of this narrative review discusses current research with an introduction to artificial intelligence in oncological hybrid imaging and key concepts in data science. The second part reviews relevant examples with a focus on applications in oncology as well as discussion of challenges and current limitations.
AI applications have the potential to leverage the diagnostic data stream with high efficiency and depth to facilitate automated lesion detection, characterization, and therapy monitoring to ultimately improve quality and efficiency throughout the medical imaging workflow. The goal is to generate reproducible, structured, quantitative diagnostic data for evidence-based therapy guidance in oncology. However, significant challenges remain regarding application development, benchmarking, and clinical implementation.
· Hybrid imaging generates a large amount of multimodal medical imaging data with high complexity and depth.
· Advanced tools are required to enable fast and cost-efficient processing along the whole radiology value chain.
· AI applications promise to facilitate the assessment of oncological disease in hybrid imaging with high quality and efficiency for lesion detection, characterization, and response assessment. The goal is to generate reproducible, structured, quantitative diagnostic data for evidence-based oncological therapy guidance.
· Selected applications in three oncological entities (lung, prostate, and neuroendocrine tumors) demonstrate how AI algorithms may impact imaging-based tasks in hybrid imaging and potentially guide clinical decision making.
Feuerecker B, Heimer M, Geyer T et al. Artificial Intelligence in Oncological Hybrid Imaging. Fortschr Röntgenstr 2023; 195: 105-114.
The plate objective scoring tool (POST) was recently introduced as a reproducible and precise approach to quantifying urethral plate (UP) characteristics and a guide for selecting particular surgical techniques. However, defining the landmarks required for the POST score from captured images can potentially lead to variability. Although artificial intelligence (AI) is yet to be wholly accepted and explored in hypospadiology, it has certainly brought new possibilities to light.
To explore the capacity of a deep learning algorithm to further streamline and optimize the appraisal of UP characteristics on 2D images using POST, aiming to increase the objectivity and reproducibility of UP appraisal in hypospadias repair.
The five key POST landmarks were marked by specialists in a 691-image dataset of prepubertal boys undergoing primary hypospadias repair. This dataset was then used to develop and validate a deep learning-based landmark detection model. The proposed framework begins with glans localization and detection, where the input image is cropped using the predicted bounding box. Next, a deep convolutional neural network (CNN) architecture is used to predict the coordinates of the five POST landmarks. These predicted landmarks are then used to assess UP characteristics in distal hypospadias.
The proposed model accurately localized the glans area, with a mean average precision (mAP) of 99.5% and an overall sensitivity of 99.1%. A normalized mean error (NME) of 0.07152 was achieved in predicting the coordinates of the landmarks, with a mean squared error (MSE) of 0.001 and a 2.5% failure rate at a threshold of 0.2 NME.
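The landmark-accuracy metrics reported here (NME and failure rate at an NME threshold) can be sketched as follows. The synthetic coordinates are illustrative, and the normalization length (here the image diagonal) is an assumption, since this abstract does not state which normalizing distance the authors used:

```python
import numpy as np

def nme(pred, gt, norm):
    """Normalized mean error per image: mean Euclidean distance between
    predicted and ground-truth landmarks, divided by a normalizing length
    (assumed here to be the image diagonal)."""
    dists = np.linalg.norm(pred - gt, axis=-1)   # shape: (n_images, n_landmarks)
    return dists.mean(axis=-1) / norm

def failure_rate(per_image_nme, threshold=0.2):
    """Fraction of images whose NME exceeds the threshold
    (0.2, as in the reported results)."""
    return float((per_image_nme > threshold).mean())

rng = np.random.default_rng(3)
gt = rng.uniform(0, 256, size=(100, 5, 2))       # 5 POST landmarks per image
pred = gt + rng.normal(0, 3, size=gt.shape)      # small localization error
diag = np.hypot(256, 256)                        # assumed normalization length

scores = nme(pred, gt, diag)
print(f"mean NME = {scores.mean():.4f}, failure rate = {failure_rate(scores):.1%}")
```

Normalizing by a per-image reference length makes the error comparable across images of different scales, which is why NME rather than raw pixel error is the standard metric for landmark detection.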
Our results support the feasibility of further standardizing UP assessment from captured hypospadias images, and they demonstrate that machine learning and image recognition technologies can be useful for scoring hypospadias. External validation can provide valuable information on the generalizability and reliability of deep learning algorithms, which can aid in assessment, decision making, and prediction of surgical outcomes.
This deep learning application shows robustness and high precision in using POST to appraise UP characteristics. Further assessment using international multi-centre image-based databases is ongoing.