The cost of manufacturing by milling is strongly influenced by the tool sequences selected to mill out removal volumes. This thesis develops algorithms that automate and optimize this process for 2.5-axis and 3-axis rough milling. The problem of selecting tool sequences is formulated, under certain assumptions, as a shortest-path search in a single-source, single-sink directed acyclic graph. The nodes of the graph represent the state of the stock after a particular tool has finished machining, and the edge weights represent the cost of machining. Using a sequence of tools to machine a pocket requires decomposing the pocket into sub-pockets, one for each tool in the sequence. An associated problem of pocket decomposition is open-edge covering, in which the boundary between sub-pockets assigned to different tools must be traversed by a tool for complete machining. Algorithms have been developed for extending sub-pockets to cover inter-sub-pocket boundaries. Tool-holder collision becomes a serious issue when machining parts with nested pockets. Algorithms have been developed in this thesis to incorporate tool-holder collision into the optimal tool sequence selection problem. When tool-holder collisions exist, these algorithms avoid generating redundant data and thus reduce the complexity of building the tool sequence graph. A cost model has been developed that accounts for not only machining costs but also tool-wear costs. The machining costs are derived from the generation of actual tool paths, and the model can be tuned to optimize either total cost or total time. A concise schema for a tool database has been developed; it captures not only tool geometry but also the associated cutting parameters, which depend on the workpiece material and the type of cutting operation. Air-time is the non-productive time a tool spends traversing from one machining region to another.
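The shortest-path formulation described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the node names and edge costs are hypothetical, standing in for stock states and machining costs derived from tool paths.

```python
from collections import defaultdict

def shortest_tool_sequence(edges, source, sink):
    """Single-source shortest path in a DAG, by relaxation in topological order.

    edges: dict mapping node -> list of (successor, cost) pairs, where each
    node stands for the stock state after a tool finishes and each cost
    approximates the machining cost of the next tool.
    """
    # Kahn's algorithm for a topological order of the DAG.
    indeg = defaultdict(int)
    nodes = {source, sink}
    for u, succs in edges.items():
        nodes.add(u)
        for v, _ in succs:
            nodes.add(v)
            indeg[v] += 1
    queue = [n for n in nodes if indeg[n] == 0]
    topo = []
    while queue:
        u = queue.pop()
        topo.append(u)
        for v, _ in edges.get(u, []):
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)

    # Relax edges in topological order.
    dist = {n: float("inf") for n in nodes}
    prev = {}
    dist[source] = 0.0
    for u in topo:
        for v, c in edges.get(u, []):
            if dist[u] + c < dist[v]:
                dist[v] = dist[u] + c
                prev[v] = u

    # Recover the optimal tool sequence by walking predecessors back.
    path, n = [sink], sink
    while n != source:
        n = prev[n]
        path.append(n)
    return list(reversed(path)), dist[sink]
```

For example, with hypothetical tools T20, T10, and T5 (edge costs invented), `shortest_tool_sequence(edges, "stock", "T5")` returns the cheapest ordered subset of tools rather than forcing every tool to be used.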
This time can be significant, especially when multiple tools are used to machine a part. Smaller tools in particular may have to traverse a wide area to reach several disconnected regions. The problem of minimizing air-time is formulated as a variant of the standard traveling salesman problem known as the sequential ordering problem.
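The sequential ordering problem is an open-path TSP with precedence constraints. The exhaustive solver below illustrates the formulation only; the distance matrix and precedence pairs are hypothetical, and real instances require branch-and-bound or heuristics rather than enumeration.

```python
from itertools import permutations

def solve_sop(dist, precedence):
    """Exhaustive solver for the sequential ordering problem (illustration).

    dist: n x n matrix of traversal (air-time) costs between machining regions.
    precedence: list of (a, b) pairs meaning region a must be visited before
    region b, e.g. because a larger tool must rough a region before a smaller
    tool enters it.
    """
    n = len(dist)
    best_order, best_cost = None, float("inf")
    for order in permutations(range(n)):
        pos = {r: i for i, r in enumerate(order)}
        if any(pos[a] > pos[b] for a, b in precedence):
            continue  # violates a precedence constraint
        cost = sum(dist[order[i]][order[i + 1]] for i in range(n - 1))
        if cost < best_cost:
            best_order, best_cost = order, cost
    return best_order, best_cost
```

Unlike the closed-tour TSM, the path here has free endpoints, matching a tool that starts at one region and stops after the last.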
Agent-based modeling is a technique for modeling dynamic systems from the bottom up. Individual elements of the system are represented computationally as agents, and system-level behaviors emerge from the micro-level interactions of those agents. Contemporary state-of-the-art agent-based modeling toolkits are essentially discrete-event simulators designed to execute serially on the Central Processing Unit (CPU): they simulate Agent-Based Models (ABMs) by executing agent actions one at a time. In addition to imposing an unnatural execution order, these toolkits have limited scalability. In this article, we investigate data-parallel computer architectures such as Graphics Processing Units (GPUs) for simulating large-scale ABMs. We have developed a series of efficient data-parallel algorithms for handling environment updates, various agent interactions, agent death and replication, and the gathering of statistics. We present three fundamental innovations that provide unprecedented scalability. The first is a novel stochastic memory allocator that enables parallel agent replication in O(1) average time. The second is a technique for resolving precedence constraints among agent actions in parallel. The third is a method that uses specialized graphics hardware to gather and process statistical measures. These techniques have been implemented on a modern GPU, resulting in a substantial performance increase. We believe our system is the first completely GPU-based agent simulation framework. Although GPUs are the focus of our current implementations, our techniques can easily be adapted to other data-parallel architectures. We have benchmarked our framework against contemporary toolkits using two popular ABMs, SugarScape and StupidModel.
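The idea behind stochastic allocation can be conveyed with a serial analogue. The sketch below is an assumption-laden illustration, not the paper's algorithm: each replicating agent probes uniformly random slots of the agent array until it finds a free one, so that as long as occupancy is bounded below a fixed fraction, the expected number of probes is a small constant.

```python
import random

def stochastic_claim(occupied, rng, max_probes=1000):
    """Claim a free slot by repeated uniform random probing.

    With occupancy kept below, say, 50%, each probe succeeds with
    probability > 1/2, so the expected probe count is < 2, i.e. O(1)
    on average. On a GPU every replicating agent probes independently
    in parallel, with an atomic compare-and-swap standing in for the
    serial check-and-set below.
    """
    n = len(occupied)
    for _ in range(max_probes):
        slot = rng.randrange(n)
        if not occupied[slot]:
            occupied[slot] = True  # would be atomicCAS on the GPU
            return slot
    raise RuntimeError("pool too full; grow the agent array")
```

The design choice is to trade a bounded amount of wasted memory (the enforced slack in the array) for constant expected allocation time without any global coordination.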
The mechanisms that lead to high tree species diversity in forests are not yet fully understood. One of the leading theories is that interactions with natural enemies can give rare tree species a survival advantage over more common species. One way of exploring such hypotheses is through individual-based modeling. An individual-based model (IBM) is a bottom-up simulation in which the bulk dynamics emerge from the interactions of individual constituents. Because of this emergent nature, IBMs are population-sensitive: achieving a high degree of accuracy requires matching the population sizes of the real system. Such models may therefore run into the millions of individuals and become computationally intensive. Here the computing power of graphics processing units (GPUs) is used to overcome this computational limitation. The GPU algorithms developed here allow the model to be scaled to millions of individuals and run on standard desktop computers, effectively putting supercomputing power at the fingertips of researchers, students, and forest management services alike. The parallel implementation was compared against a serial implementation running on the central processing unit. The results show a significant performance gain for the parallel implementation while maintaining statistical accuracy, demonstrating that realistically sized models can be executed efficiently on inexpensive mass-market desktop hardware.
In this paper we describe a new brute-force algorithm for building the \(k\)-Nearest Neighbor Graph (\(k\)-NNG). The \(k\)-NNG has many applications in areas such as machine learning, bioinformatics, and cluster analysis. While there are very efficient algorithms for low-dimensional data, for high-dimensional data brute-force search remains the best algorithm. The algorithm has two main parts: the first is computing the distances between the input vectors, which can be formulated as a matrix multiplication problem; the second is selecting the \(k\) nearest neighbors for each query vector. For the second part, we describe a novel graphics processing unit (GPU)-based multi-select algorithm based on quicksort. Our optimization makes clever use of the warp voting functions available on the latest GPUs, along with the user-controlled cache. Benchmarks show significant improvement over state-of-the-art implementations of \(k\)-NN search on GPUs.
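The two-part structure of the brute-force \(k\)-NNG construction can be sketched serially. This is a naive reference version, not the GPU implementation: on the GPU the pairwise-distance step becomes a matrix multiplication (via \(\lVert a-b\rVert^2 = \lVert a\rVert^2 + \lVert b\rVert^2 - 2\,a\cdot b\)) and the per-row selection is replaced by the parallel multi-select.

```python
import heapq

def knng(points, k):
    """Brute-force k-nearest-neighbor graph on a small point set.

    Part 1: compute all pairwise squared distances.
    Part 2: for each point, select the k smallest (a serial stand-in
    for the GPU multi-select).
    """
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    graph = []
    for i, p in enumerate(points):
        dists = [(sqdist(p, q), j) for j, q in enumerate(points) if j != i]
        graph.append([j for _, j in heapq.nsmallest(k, dists)])
    return graph
```

For instance, `knng([(0, 0), (1, 0), (10, 0), (11, 0)], 1)` links each point to its single nearest neighbor, pairing the two clusters internally.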
The Short Message Service (SMS) is one of the most successful services in existing cellular networks. SMS provides a means of sending messages of limited size to and from Global System for Mobile Communications (GSM) or Universal Mobile Telecommunications System (UMTS) phones. SMS technology evolved out of the GSM standard, and the 3rd Generation Partnership Project (3GPP) presently maintains the SMS standards. Because of its ease of use and cost effectiveness, SMS has become one of the most popular services in the mobile communication world. The main objective of this paper is to introduce a methodology for providing Short Message Service over an Internet Protocol (SoIP) network, addressed to mobile or soft-phone users connected over an IP network. The proposed method is based on sending SMS messages directly to the Short Message Service Centre (SMSC) using the Short Message Peer-to-Peer (SMPP) protocol over the IP network. The SMPP protocol design has been coded in Java/Microsoft Visual Basic and simulated in the Android mobile simulator. The design was built using the Eclipse Integrated Development Environment (IDE) with the Android development plug-in, and the results obtained were validated against the design specifications. At present the SoIP system has been tested on the Android mobile simulator, and work is underway on implementing the solution on the Android mobile platform. Using this method, direct communication with the SMSC, higher message throughput, and a reduced cost per SMS can be achieved.
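To show the shape of the SMPP exchange, the sketch below builds a minimal `submit_sm` PDU following the SMPP v3.4 specification. It is an illustration in Python rather than the paper's Java/Visual Basic code, with optional fields left at their defaults and the addresses invented.

```python
import struct

SUBMIT_SM = 0x00000004  # command_id defined by the SMPP v3.4 specification

def submit_sm_pdu(seq, source, dest, text):
    """Build a minimal SMPP submit_sm PDU.

    SMPP v3.4 layout: a 16-byte header of four big-endian 32-bit fields
    (command_length, command_id, command_status, sequence_number) followed
    by the mandatory submit_sm body fields.
    """
    c = lambda s: s.encode("ascii") + b"\x00"  # null-terminated C-octet string
    body = (
        c("")                   # service_type (default)
        + bytes([0, 0])         # source_addr_ton, source_addr_npi (unknown)
        + c(source)             # source_addr
        + bytes([0, 0])         # dest_addr_ton, dest_addr_npi (unknown)
        + c(dest)               # destination_addr
        + bytes([0])            # esm_class (default store-and-forward)
        + bytes([0])            # protocol_id
        + bytes([0])            # priority_flag
        + c("")                 # schedule_delivery_time (immediate)
        + c("")                 # validity_period (SMSC default)
        + bytes([0])            # registered_delivery (no delivery receipt)
        + bytes([0])            # replace_if_present_flag
        + bytes([0])            # data_coding (SMSC default alphabet)
        + bytes([0])            # sm_default_msg_id
        + bytes([len(text)])    # sm_length
        + text.encode("ascii")  # short_message
    )
    header = struct.pack(">IIII", 16 + len(body), SUBMIT_SM, 0, seq)
    return header + body
```

An application would send such PDUs over a TCP connection to the SMSC after binding as a transmitter, and match each `submit_sm_resp` to its request by the sequence number.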
Background: Cardiovascular malformations are the most common form of congenital malformation, and their diagnosis requires close observation in the neonatal period. Early recognition of congenital heart disease (CHD) is important, as many such conditions may be fatal if undiagnosed and may require immediate intervention.
Objectives:
• To study the epidemiology of neonatal cardiac murmurs
• To identify clinical characteristics that differentiate pathological murmurs from functional murmurs
• To assess the reliability of clinical evaluation in diagnosing CHD
Methods: The study population included all neonates admitted to the NICU or postnatal ward, or attending the pediatric OPD or neonatal follow-up clinic, who were detected to have cardiac murmurs. It was a cross-sectional study conducted over a period of 16 months. A clinical diagnosis was made based on history and clinical examination; chest X-ray and ECG were then done in symptomatic infants. Echocardiography was done in all neonates for confirmation of the diagnosis. These neonates were examined daily while in hospital and at the follow-up visit at 6 weeks.
Results: A total of 61 neonates were included over the 16-month study period. The incidence of cardiac murmurs among intramural neonates was 13.5 per 1000 live births. The most frequent symptom was fast breathing, in 10 (16.4%) cases.
VSD was the most common clinical diagnosis, in 19 (31.47%) babies. The most frequent echocardiographic diagnosis was acyanotic complex congenital heart disease, in 23 (37.7%) cases, followed by 10 (16.4%) cases each of VSD and ASD. Overall, 73.77% (45 cases) of the murmurs were diagnosed correctly and confirmed by echocardiography.
Interpretation & Conclusion:
1. It is possible to make a clinical diagnosis in many cases of congenital heart disease.
2. Functional murmurs could be differentiated from those arising from structural heart disease.
3. If these infants are evaluated on the basis of murmurs alone, a few congenital heart diseases may be missed.
The COVID-19 pandemic and the ensuing lockdowns have restricted regular clinical physiotherapy services. This has necessitated a sudden shift to telerehabilitation to prevent disruption in the delivery of physiotherapy interventions. This survey investigates the perceptions of physiotherapists in India and their willingness to use telerehabilitation during the pandemic. An electronic questionnaire was sent to 176 physiotherapists around India, and 118 completed questionnaires were received (response rate of 67.04%). A majority of the respondents (n=67; 77%) had used telerehabilitation for the first time during the pandemic, and 72.9% (n=86) found telerehabilitation to be a viable option for healthcare delivery during the pandemic. Among the barriers identified were a lack of training (n=64; 52%) and a lack of connection between information and communication technology experts and clinicians (n=62; 52.5%). Overall, physiotherapists in India expressed a positive perception of telerehabilitation and are willing to use such services.