Computed tomography (CT) is one of the most important medical imaging technologies in use today. Most commercial CT products use a technique known as filtered backprojection (FBP), which is fast and can produce decent image quality when the X-ray dose is high. However, FBP performs poorly in low-dose X-ray CT imaging, because the CT image reconstruction problem becomes more stochastic. A more effective reconstruction technique, proposed recently and implemented in a limited number of commercial CT products, is iterative reconstruction (IR). The IR technique is based on a Bayesian formulation of the CT image reconstruction problem, with an explicit model of the CT scanning process, including its stochastic nature, and a prior model that incorporates our knowledge of what a good CT image should look like. However, constructing such prior knowledge is more complicated than it seems. In this article, we propose a novel neural network for CT image reconstruction. The network is based on the IR formulation and built with a recurrent neural network (RNN). Specifically, we transform the gated recurrent unit (GRU) into a neural network that performs CT image reconstruction; we call it "GRU reconstruction." This neural network conducts concurrent dual-domain learning. Many deep learning (DL)-based methods in medical imaging use single-domain learning, but dual-domain learning performs better because it learns from both the sinogram domain and the image domain. In addition, we propose backpropagation through stage (BPTS) as a new RNN backpropagation algorithm. It is similar to backpropagation through time (BPTT) for an RNN, but it is tailored for iterative optimization.
Results from extensive experiments indicate that our proposed method outperforms conventional model-based methods, single-domain DL methods, and state-of-the-art DL techniques in terms of the root mean squared error (RMSE), the peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM), as well as in terms of visual appearance.
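The gated iterative update behind a GRU-style reconstruction can be sketched as a toy (this is an illustration only, not the authors' implementation: the gate here is a fixed scalar, whereas the paper learns it, and the linear system stands in for the CT forward model):

```python
import numpy as np

# Toy sketch: a GRU-like gated update applied to iterative reconstruction of
# x from measurements y = A x + noise. With a fixed scalar "update gate" z,
# the recurrence reduces to damped gradient descent; in the paper the gates
# are learned networks. All names and values here are illustrative.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))            # stand-in for the projection matrix
x_true = rng.standard_normal(20)
y = A @ x_true + 0.01 * rng.standard_normal(40)

step = 1.0 / np.linalg.norm(A, 2) ** 2       # gradient step size
z = 0.8                                      # "update gate" (learned in the paper)
x = np.zeros(20)                             # hidden state = current image estimate
for _ in range(200):
    candidate = x - step * A.T @ (A @ x - y)     # data-fidelity gradient step
    x = (1.0 - z) * x + z * candidate            # gated (convex) state update
```

Unrolling this recurrence over iterations is what makes a BPTT-style algorithm (or the proposed BPTS) applicable to training it.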
This paper presents a deep learning (DL)-based method called TextureWGAN. It is designed to preserve image texture while maintaining high pixel fidelity for computed tomography (CT) inverse problems. Over-smoothed images produced by postprocessing algorithms have been a well-known problem in the medical imaging industry. Therefore, our method aims to solve the over-smoothing problem without compromising pixel fidelity.
TextureWGAN extends the Wasserstein GAN (WGAN). The WGAN can create an image that looks like a genuine image, which helps preserve image texture. However, an output image from the WGAN is not necessarily correlated with the corresponding ground-truth image. To solve this problem, we introduce a multitask regularizer (MTR) into the WGAN framework to make a generated image highly correlated with the corresponding ground-truth image, so that TextureWGAN can achieve a high level of pixel fidelity. The MTR can combine multiple objective functions. In this research, we adopt a mean squared error (MSE) loss to maintain pixel fidelity, and a perceptual loss to improve the look and feel of the resulting images. Furthermore, the regularization parameters in the MTR are trained along with the generator network weights to maximize the performance of the TextureWGAN generator.
The proposed method was evaluated in CT image reconstruction applications, in addition to super-resolution and image-denoising applications. We conducted extensive qualitative and quantitative evaluations, using PSNR and SSIM for pixel fidelity analysis and first- and second-order statistical texture analysis for image texture. The results show that TextureWGAN is more effective at preserving image texture than other well-known methods, such as a conventional CNN and the nonlocal means filter (NLM). In addition, we demonstrate that TextureWGAN achieves competitive pixel fidelity compared with the CNN and the NLM. A CNN with MSE loss can attain a high level of pixel fidelity, but it often damages image texture.
TextureWGAN can preserve image texture while maintaining pixel fidelity. The MTR not only helps stabilize the training of the TextureWGAN generator but also maximizes the generator's performance.
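The loss composition described above can be sketched as follows (an illustrative toy, not the authors' code: the critic and feature extractor are trivial stand-ins, and the trainable regularization parameters are shown only as positive weights derived from log-parameters):

```python
import numpy as np

# Illustrative sketch of a multitask-regularized WGAN generator loss: an
# adversarial term plus MSE and perceptual terms, with the regularization
# weights themselves kept as trainable (log-parameterized) quantities.
rng = np.random.default_rng(1)
fake, real = rng.random((8, 8)), rng.random((8, 8))

def critic_score(img):            # stand-in for the WGAN critic network
    return img.mean()

def features(img):                # stand-in for a perceptual feature extractor
    return np.diff(img, axis=0)

log_w = np.zeros(2)               # trainable log-weights (updated with the generator)
w_mse, w_perc = np.exp(log_w)     # positivity enforced via exp

adv_loss = -critic_score(fake)                               # WGAN generator term
mse_loss = np.mean((fake - real) ** 2)                       # pixel fidelity term
perc_loss = np.mean((features(fake) - features(real)) ** 2)  # perceptual term

total = adv_loss + w_mse * mse_loss + w_perc * perc_loss
```

In the real method, gradients of `total` would flow into both the generator weights and `log_w`, which is what lets the regularization parameters adapt during training.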
Metal Artifact Reduction (MAR) is one of the most challenging problems in Computed Tomography (CT) imaging. In CT imaging, metal implants in patients' bodies cause artifacts due to several factors, such as beam hardening effects, changes in the statistical properties of the X-ray beams, and the shapes of the metal implants. Although some promising results have been achieved by previously proposed model-based iterative reconstruction (IR) techniques, there is still much room for improvement. One problem is that the image prior models used in IR techniques are too simple to capture the truly complex nature of CT images. Recent advances in deep learning (DL) with neural networks can help address this problem and potentially improve MAR results significantly. In this work, we describe a novel DL-based technique for CT MAR. In this technique, we introduce a novel deep neural network based on an IR formulation and a convex optimization technique known as FISTA (the Fast Iterative Shrinkage-Thresholding Algorithm). The neural network, called the RNN-MAR, is a Recurrent Neural Network (RNN) composed of a set of proposed Recurrent FISTA Units (RFUs). While the structure of the RFU has some connections to the Gated Recurrent Unit (GRU), it is specifically designed for CT MAR. The RNN-MAR conducts dual-domain learning (image and sinogram) but can do so using only one objective function. Furthermore, unlike previous CT MAR techniques, the RNN-MAR does not use a binary metal trace. Instead, we use a novel real-valued sinogram-domain confidence map, leading to smoother edges. Results from extensive experiments indicate that our RNN-MAR outperforms state-of-the-art DL MAR techniques in terms of the Root Mean Squared Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity (SSIM), as well as in terms of visual appearance.
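The FISTA scheme underlying the RFU can be stated compactly. The following is the standard FISTA iteration for a sparse least-squares problem (a generic textbook instance, not the RNN-MAR itself, which embeds this scheme inside learned recurrent units):

```python
import numpy as np

# Standard FISTA for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1:
# a proximal gradient step (soft-thresholding) plus Nesterov-style momentum.
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 50))
x_true = np.zeros(50)
x_true[:5] = 3.0                              # sparse ground truth
y = A @ x_true                                # noiseless toy measurements

L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
lam = 0.1
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(50)
zv = np.zeros(50)                             # extrapolated point
t = 1.0
for _ in range(300):
    x_new = soft(zv - (A.T @ (A @ zv - y)) / L, lam / L)   # proximal gradient step
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2               # momentum coefficient
    zv = x_new + ((t - 1) / t_new) * (x_new - x)           # momentum extrapolation
    x, t = x_new, t_new
```

The thresholding, gradient, and momentum steps above are the three operations the RFU gates and learns.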
Many algorithms and methods have been proposed for inverse image processing applications, such as super-resolution, image denoising, and image reconstruction, particularly with the recent surge of interest in machine learning and deep learning methods. As for Computed Tomography (CT) image reconstruction, most recently proposed methods are limited to image domain processing, where deep learning is used to learn the mapping between a true image data set and a noisy image data set in the image domain. While deep learning-based methods can produce higher quality images than conventional model-based algorithms, these methods have a limitation: deep learning applied only in the image domain cannot compensate for information lost during the forward and backward projections in CT image reconstruction, especially under high noise. This dissertation proposes new iterative reconstruction algorithms implemented with a Recurrent Neural Network (RNN). The RNN is usually used to process sequential data, as in stock price prediction or natural language processing. In this dissertation, we use the RNN to implement iterative reconstruction (IR), where the RNN performs an iterative optimization for CT image reconstruction. In addition, we propose new RNN memory cells, called the Gated Momentum Unit (GMU) and the Recurrent FISTA Unit (RFU), to help the RNN cell preserve long-term memory. The GMU and RFU are similar to the Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), in that these RNN cells alleviate the vanishing and exploding gradient problems. However, the GMU and RFU have simpler network structures than the LSTM and the GRU, and they are specifically designed to accelerate the convergence of the training optimization process. We conducted a simulation study and a real CT image study to demonstrate that the proposed methods achieved the highest Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM).
The GMU was evaluated on CT image reconstruction, and the RFU was evaluated on CT Metal Artifact Reduction (CT MAR). We also showed that these algorithms converged faster than other well-known methods. Furthermore, in the fourth chapter of this dissertation, we discuss how vital image texture is in inverse image processing problems. Many methods have been proposed for these problems; however, the most popular methods, convolutional neural network (CNN)-based methods with a Mean Squared Error (MSE) loss, are known to over-smooth images due to the nature of the MSE. MSE-based methods minimize the Euclidean distance over all pixels between a baseline image and a CNN-generated image, ignoring the pixels' spatial information, such as image texture. That chapter proposes a new method based on the Wasserstein GAN (WGAN) for inverse problems. We showed that the WGAN-based method was effective in preserving image texture. It also used a maximum likelihood estimation (MLE) regularizer to preserve pixel fidelity. Maintaining image texture and pixel fidelity is an essential requirement in medical imaging. We used PSNR and SSIM to evaluate the proposed method quantitatively, and conducted first- and second-order statistical image texture analysis to assess image texture.
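The convergence acceleration that motivates a momentum-based cell can be illustrated on a plain least-squares problem (a generic heavy-ball sketch, not the dissertation's GMU; the matrix, step size, and momentum coefficient are arbitrary choices for the toy):

```python
import numpy as np

# Gradient descent versus heavy-ball momentum on min_x ||A x - y||^2.
# The momentum term acts as a simple "memory" of past updates; gating that
# memory is the idea behind momentum-based recurrent cells.
rng = np.random.default_rng(4)
A = rng.standard_normal((60, 30))
y = A @ rng.standard_normal(30)
step = 1.0 / np.linalg.norm(A, 2) ** 2        # stable gradient step size

def run(beta, iters=100):
    """Return the residual norm after `iters` iterations with momentum beta."""
    x = np.zeros(30)
    v = np.zeros(30)                          # momentum "memory" state
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        v = beta * v - step * grad            # accumulate gated memory of gradients
        x = x + v
    return np.linalg.norm(A @ x - y)
```

With the same step size and iteration budget, the momentum run (`beta=0.7`) reaches a far smaller residual than plain gradient descent (`beta=0.0`) on this toy problem.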
This paper discusses the group control of elevators for improving efficiency; an efficient control method for multi-car elevators using reinforcement learning is proposed. In this method, the control agent selects the best among four strategies, namely the Transportation strategy, Passenger strategy, Zone strategy, and Difference strategy, according to the traffic flow. The control agent takes into account the total number of passengers and the distance from the departure floor to the destination floor of a call. Experiments demonstrate the performance of the proposed method: its average service time is compared with the average service time obtained when car assignment is made by each of the individual strategies.
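Strategy selection of this kind can be sketched with tabular Q-learning (a hypothetical toy, not the paper's controller: the state encoding, reward, and self-looping "simulator" below are stand-ins for the elevator traffic model):

```python
import random

# Hypothetical sketch: an agent learns which of four assignment strategies
# to apply for a given (discretized) traffic state via tabular Q-learning.
STRATEGIES = ["transportation", "passenger", "zone", "difference"]
Q = {}                                  # (state, strategy) -> estimated value
alpha, gamma, eps = 0.1, 0.9, 0.2       # learning rate, discount, exploration

def choose(state):
    """Epsilon-greedy strategy selection."""
    if random.random() < eps:
        return random.choice(STRATEGIES)
    return max(STRATEGIES, key=lambda s: Q.get((state, s), 0.0))

def update(state, strategy, reward, next_state):
    """One Q-learning update toward reward + discounted best next value."""
    best_next = max(Q.get((next_state, s), 0.0) for s in STRATEGIES)
    old = Q.get((state, strategy), 0.0)
    Q[(state, strategy)] = old + alpha * (reward + gamma * best_next - old)

random.seed(0)
state = ("light_traffic", "short_trip")         # toy discretized traffic state
for _ in range(500):                            # toy reward favors "zone" here
    s = choose(state)
    update(state, s, 1.0 if s == "zone" else 0.0, state)
```

In a real controller, the reward would be derived from (negative) service time and the next state from the simulated traffic flow.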
Many algorithms and methods have been proposed for inverse problems, particularly with the recent surge of interest in machine learning and deep learning methods. Among them, the most popular and effective is the convolutional neural network (CNN) with a mean squared error (MSE) loss. This method has proven effective in super-resolution, image denoising, and image reconstruction. However, it is known to over-smooth images due to the nature of the MSE. MSE-based methods minimize the Euclidean distance over all pixels between a baseline image and an image generated by the CNN, ignoring the spatial information of the pixels, such as image texture. In this paper, we propose a new method based on the Wasserstein GAN (WGAN) for inverse problems. We show that the WGAN-based method is effective in preserving image texture. It also uses a maximum likelihood estimation (MLE) regularizer to preserve pixel fidelity. Maintaining image texture and pixel fidelity is the most important requirement in medical imaging. We used the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) to evaluate the proposed method quantitatively, and conducted first- and second-order statistical image texture analysis to assess image texture.
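The over-smoothing behavior of the MSE can be demonstrated with a small numerical toy (an illustration of the general averaging effect, not an experiment from the paper): when several textured outputs are equally plausible, the single image minimizing the average MSE over them is their pixel-wise mean, which is far flatter than any real sample.

```python
import numpy as np

# Toy demonstration of why MSE encourages smoothing: the minimizer of the
# average squared error over a set of equally plausible "texture" targets
# is their pixel-wise mean, which has much less variation than any target.
rng = np.random.default_rng(3)
targets = rng.choice([-1.0, 1.0], size=(100, 64))   # binary "texture" patterns

mse_optimal = targets.mean(axis=0)                  # minimizer of the average MSE

def avg_mse(x):
    """Average squared error of a single output x against all targets."""
    return float(np.mean((targets - x) ** 2))

# The MSE-optimal output is much flatter than any individual texture sample.
assert mse_optimal.std() < targets[0].std()
```

An adversarial loss, by contrast, penalizes outputs that do not look like any individual sample, which is why GAN-based methods retain texture that MSE training averages away.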
A simple and sensitive method has been developed for the fractional determination of nanogram amounts of vanadium(IV) and vanadium(V) in natural waters. It is based on the solvent extraction of these ions with N-cinnamoyl-N-(2,3-xylyl)hydroxylamine into toluene at different pH values, followed by back-extraction into a sodium hydroxide solution, and on the determination of vanadium by a catalytic method using the oxidative coupling reaction of 4-aminoantipyrine with N,N-dimethylaniline. Vanadium(IV) and vanadium(V) can be determined at the 0.1-1.0 ng/ml level. The proposed method suffers from few interferences and can be successfully applied to the fractional determination of vanadium in natural water.
This paper proposes a method for a chatting bot to generate backchannel responses in text-based chat. We have been studying methods for activating text-based discussions. In our previous work, we proposed a method in which a chatting bot posts questions to activate discussions. Posting questions by the chatting bot could obtain answers from discussion participants, and some of those answers contributed to increasing the number of topics in the discussions. However, not all questions were answered carefully by the discussion participants; some questions were ignored. Careful answering is required to activate text-based discussions. We thought that the discussion participants did not recognize the chatting bot as a member of the discussion because the chatting bot posted only questions; the chatting bot seemed not to listen to the others' comments, so the participants did not answer all questions carefully. We suppose that the chatting bot will be recognized as a member of the discussion, and its questions will be answered carefully by the participants, if it also posts backchannel responses to the others' comments. The chatting bot posts backchannel responses that satisfy three conditions obtained from preliminary experiments. In evaluation experiments, we compared the chatting bot of the proposed method with a chatting bot that posts only questions, and evaluated the effectiveness of the two chatting bots for activating text-based discussions. We measured the degree of activation of a discussion by the number of topics obtained in the discussion and by the rate of carefully answered questions. With the proposed chatting bot, both the number and the rate were higher than with the comparative chatting bot. The results indicate that discussions are more activated when the chatting bot posts both questions and backchannel responses.
The three types of backchannel response were posted appropriately, except under one condition: when a comment containing a question mark is posted, the chatting bot posts a backchannel response expressing agreement with the posted comment.
It is a fact that many problems have arisen as communication in human society has been greatly changed by the appearance of the mass media. To study these problems of communication in human society, we have to take into consideration the so-called "primary process," as Edward Sapir calls it. In this respect, this report points out what characteristics communication in human society has and what function it carries out. Clearly there must be some communication in any group, human or animal, as long as its members lead a collective life. Yet communication in human society is quite different from that of an animal group, because only human beings can use symbols: the communication of human society is symbolic communication. A symbol is an arbitrary sign that bears no intrinsic relation to the facts of experience, so it does not lose its meaning even when separated from the logical coherence of those facts. Symbols include languages, letters, drawings, and so on, each with its own characteristics. That human beings possess such symbolic communication has great significance for human development. That is, if culture is defined, as Iver Jr. says, as "recurring patterns of behavior or results of behavior which are shared and which can be transmitted from group to group and generation to generation," then culture could not exist unless symbolic communication is presupposed, because only symbolic communication is able to overcome time and space. It can also be said that the existence and development of human society, insofar as human society is understood as a cultural sphere, surely depend on this communication. Symbolic communication functions to maintain the unification and the harmonious change of human society. This communication has recently changed its character in connection with the development of society.
That is, as society undergoes considerable enlargement and specialization, some medium is needed to make communication more effective, and it is a matter of course that the mass media emerge as a consequence.