Ontology-based Complementary Breastfeeding Search Model
Paradhita, Astrid Noviana; Sari, Anny Kartika; Sihabuddin, Agus
IJCCS (Indonesian Journal of Computing and Cybernetics Systems), 07/2022, Volume 16, Issue 3
Journal Article; Peer reviewed; Open access
Children's nutritional requirements differ from those of adults. Data from the Indonesian Ministry of Health show that in 2017, 17.8% of children under five years old (toddlers) were malnourished, a problem partly related to complementary breastfeeding. Complementary breastfeeding is given to babies from 6 to 24 months of age. This research aims to build a complementary breastfeeding search model that can present menu recommendations as a treatment for malnourished babies. The search model is built to understand natural language input given by a user. It can also reason by applying a set of rules to obtain implicit knowledge about the complementary breastfeeding menu recommended for a baby. The methods used in this research are data collection, search model design, ontology model construction, SWRL rule construction, natural language processing, and usability testing by users and nutritionists. This research succeeded in building an ontology-based complementary breastfeeding search model in the form of a semantic web application. The testing results show that the web application can provide alternative complementary breastfeeding menus according to a baby's nutritional needs, with a high usability score of 4.01 on a scale of 1 to 5.
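The rule-based reasoning step described in the abstract can be illustrated with a minimal forward-chaining sketch. All rules, conditions, and menu items below are hypothetical stand-ins for illustration only; the paper's actual ontology and SWRL rules are not reproduced here.

```python
# Minimal sketch of rule-based menu recommendation, loosely in the spirit of
# SWRL reasoning over an ontology. The rules and menu items are hypothetical
# illustrations, not the paper's actual knowledge base.

def recommend_menu(age_months, condition):
    """Apply simple if-then rules to infer a complementary feeding menu."""
    rules = [
        # (predicate over the baby's facts, inferred menu)
        (lambda a, c: 6 <= a < 9 and c == "underweight",
         "iron-fortified rice porridge with pureed chicken"),
        (lambda a, c: 9 <= a <= 24 and c == "underweight",
         "soft rice with egg and mashed vegetables"),
        (lambda a, c: 6 <= a <= 24 and c == "normal",
         "age-appropriate family foods, mashed or chopped"),
    ]
    for predicate, menu in rules:
        if predicate(age_months, condition):
            return menu
    return None  # no rule fired: outside the 6-24 month window

print(recommend_menu(7, "underweight"))
```

In the actual system, such rules live in SWRL on top of the ontology, so the reasoner can combine them with class hierarchies rather than hard-coded predicates.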
We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image classification. It is a simple residual network that alternates (i) a linear layer in which image patches interact, independently and identically across channels, and (ii) a two-layer feed-forward network in which channels interact independently per patch. When trained with a modern training strategy using heavy data-augmentation and optionally distillation, it attains surprisingly good accuracy/complexity trade-offs on ImageNet. We also train ResMLP models in a self-supervised setup, to further remove priors from employing a labelled dataset. Finally, by adapting our model to machine translation we achieve surprisingly good results. We share pre-trained models and our code based on the Timm library.
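The two alternating sublayers of a ResMLP block can be sketched in a few lines of numpy. This is a shapes-only illustration with random weights, not the trained model: the paper additionally uses affine normalization layers and GELU activations.

```python
import numpy as np

# Minimal numpy sketch of one ResMLP block (shapes only; no affine
# normalization, training, or learned weights -- weights are random here).
rng = np.random.default_rng(0)
N, D = 16, 32          # N patches, D channels per patch

def resmlp_block(x, W_patch, W1, W2):
    # (i) cross-patch linear: patches interact, identically for each channel
    x = x + W_patch @ x                 # (N, N) @ (N, D) -> (N, D)
    # (ii) per-patch two-layer MLP: channels interact within each patch
    h = np.maximum(x @ W1, 0.0)         # GELU in the paper; ReLU for brevity
    return x + h @ W2                   # residual connection

x = rng.normal(size=(N, D))
out = resmlp_block(x,
                   rng.normal(size=(N, N)) * 0.02,
                   rng.normal(size=(D, 4 * D)) * 0.02,
                   rng.normal(size=(4 * D, D)) * 0.02)
print(out.shape)  # (16, 32)
```

Note that sublayer (i) applies the same N-by-N matrix to every channel, which is what "independently and identically across channels" means in the abstract.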
We investigate the effect of explicitly enforcing the Lipschitz continuity of neural networks with respect to their inputs. To this end, we provide a simple technique for computing an upper bound to the Lipschitz constant, for multiple p-norms, of a feed-forward neural network composed of commonly used layer types. Our technique is then used to formulate training a neural network with a bounded Lipschitz constant as a constrained optimisation problem that can be solved using projected stochastic gradient methods. Our evaluation study shows that the performance of the resulting models exceeds that of models trained with other common regularisers. We also provide evidence that the hyperparameters are intuitive to tune, demonstrate how the choice of norm for computing the Lipschitz constant impacts the resulting model, and show that the performance gains provided by our method are particularly noticeable when only a small amount of training data is available.
Over the last several years, the field of natural language processing has been propelled forward by an explosion in the use of deep learning models. This article provides a brief introduction to the field and a quick overview of deep learning architectures and methods. It then sifts through the plethora of recent studies and summarizes a large assortment of relevant contributions. Analyzed research areas include several core linguistic processing issues in addition to many applications of computational linguistics. A discussion of the current state of the art is then provided along with recommendations for future research in the field.
Natural language processing (NLP) has witnessed significant advancements in recent decades. Automatically classifying parts of speech, like nouns and verbs, from textual input has transformed text analysis and language understanding. Using natural language processing techniques, we explore various methods for identifying noun and verb phrases automatically, with an emphasis on high accuracy. Our study explores rule-based, statistical, and Machine Learning (ML) approaches for determining the nouns and verbs in sentences. The effectiveness of these approaches is clearly evident, especially when NLP libraries such as SpaCy and the Natural Language Toolkit (NLTK) are used. As well as demonstrating their potential applications across diverse language processing tasks and industries, we conduct comparative research to showcase their advantages and disadvantages. The performance of these methods is also examined in terms of retrieving subject and action terms. SpaCy achieves an impressive accuracy of 95% in noun and verb extraction, while Part-Of-Speech (POS) tagging delivers an even higher accuracy of 96%. The results obtained with these methods illustrate how nouns, verbs, and names can be classified in text successfully.
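The rule-based family of approaches mentioned above can be made concrete with a toy suffix-based tagger. The lexicon and suffix rules here are illustrative only; the study's own experiments rely on SpaCy and NLTK rather than anything this simple.

```python
# Toy rule-based tagger in the spirit of the rule-based approach discussed
# above. The lexicon and suffix rules are illustrative, not from the paper.
NOUN_SUFFIXES = ("tion", "ment", "ness", "ity")
VERB_SUFFIXES = ("ize", "ify", "ate", "ing", "ed")
LEXICON = {"dog": "NOUN", "runs": "VERB", "classification": "NOUN"}

def tag(word):
    """Tag a word as NOUN, VERB, or OTHER using lexicon lookup then suffixes."""
    w = word.lower()
    if w in LEXICON:
        return LEXICON[w]
    if w.endswith(NOUN_SUFFIXES):
        return "NOUN"
    if w.endswith(VERB_SUFFIXES):
        return "VERB"
    return "OTHER"

print([(w, tag(w)) for w in "the dog runs".split()])
```

Statistical and ML taggers replace these hand-written rules with probabilities or learned features, which is where the reported 95-96% accuracies come from.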
The notion of uncertainty is of major importance in machine learning and constitutes a key element of machine learning methodology. In line with the statistical tradition, uncertainty has long been perceived as almost synonymous with standard probability and probabilistic predictions. Yet, due to the steadily increasing relevance of machine learning for practical applications and related issues such as safety requirements, new problems and challenges have recently been identified by machine learning scholars, and these problems may call for new methodological developments. In particular, this includes the importance of distinguishing between (at least) two different types of uncertainty, often referred to as aleatoric and epistemic. In this paper, we provide an introduction to the topic of uncertainty in machine learning as well as an overview of attempts so far at handling uncertainty in general and formalizing this distinction in particular.
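One common way to operationalize the aleatoric/epistemic distinction, sketched below with made-up numbers, is via an ensemble whose members each predict a mean and a noise variance: disagreement between members reflects epistemic (reducible) uncertainty, while the average predicted variance reflects aleatoric (irreducible) noise.

```python
import numpy as np

# Sketch of the common ensemble-based decomposition: each member predicts a
# mean and a variance; epistemic uncertainty is the spread of the means
# across members, aleatoric uncertainty is the average predicted variance.
# The numbers below are made up for illustration.
member_means = np.array([2.9, 3.1, 3.0, 2.8])   # K ensemble members
member_vars  = np.array([0.5, 0.4, 0.6, 0.5])   # predicted noise variances

aleatoric = member_vars.mean()                  # irreducible data noise
epistemic = member_means.var()                  # model disagreement
total = aleatoric + epistemic
print(aleatoric, epistemic, total)
```

Collecting more training data shrinks the epistemic term (the members converge), whereas the aleatoric term persists no matter how much data is available.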
Semi-supervised learning is the branch of machine learning concerned with using labelled as well as unlabelled data to perform certain learning tasks. Conceptually situated between supervised and unsupervised learning, it permits harnessing the large amounts of unlabelled data available in many use cases in combination with typically smaller sets of labelled data. In recent years, research in this area has followed the general trends observed in machine learning, with much attention directed at neural network-based models and generative learning. The literature on the topic has also expanded in volume and scope, now encompassing a broad spectrum of theory, algorithms and applications. However, no recent surveys exist to collect and organize this knowledge, impeding the ability of researchers and engineers alike to utilize it. Filling this void, we present an up-to-date overview of semi-supervised learning methods, covering earlier work as well as more recent advances. We focus primarily on semi-supervised classification, where the large majority of semi-supervised learning research takes place. Our survey aims to provide researchers and practitioners new to the field as well as more advanced readers with a solid understanding of the main approaches and algorithms developed over the past two decades, with an emphasis on the most prominent and currently relevant work. Furthermore, we propose a new taxonomy of semi-supervised classification algorithms, which sheds light on the different conceptual and methodological approaches for incorporating unlabelled data into the training process. Lastly, we show how the fundamental assumptions underlying most semi-supervised learning algorithms are closely connected to each other, and how they relate to the well-known semi-supervised clustering assumption.
Sentiment and emotion analysis is a common classification task aimed at enhancing the benefit and comfort of consumers of a product. However, the data obtained often lack balance between the classes or aspects to be analyzed, a condition commonly known as an imbalanced dataset. Imbalanced datasets are frequently challenging in machine learning tasks, particularly for text. Our research tackles imbalanced datasets using two techniques, namely SMOTE and augmentation. For the SMOTE technique, the text must first be given a numerical representation using TF-IDF. The classification model employed is the IndoBERT model. Both oversampling techniques can address data imbalance by generating synthetic and new data, and the resulting dataset enhances the classification model's performance. With the augmentation technique, the classification model's performance improves by up to 20%, with accuracy reaching 78%, precision at 85%, recall at 82%, and an F1-score of 83%. The SMOTE technique achieves the best evaluation results of the two, raising the model's accuracy to 82%, with precision at 87%, recall at 85%, and an F1-score of 86%.
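SMOTE's core step, which the abstract applies to TF-IDF vectors, is a linear interpolation between a minority sample and one of its nearest minority-class neighbours. A minimal sketch on random data (production code would use imbalanced-learn's SMOTE rather than this hand-rolled version):

```python
import numpy as np

# Minimal sketch of SMOTE's core interpolation step on TF-IDF-like vectors:
# a synthetic minority sample lies on the segment between a minority point
# and one of its k nearest minority neighbours. The data here is random.
rng = np.random.default_rng(2)
minority = rng.random(size=(5, 8))              # 5 minority samples, 8 features

def smote_sample(X, k=3, rng=rng):
    i = rng.integers(len(X))                    # pick a minority point
    d = np.linalg.norm(X - X[i], axis=1)
    neighbours = np.argsort(d)[1:k + 1]         # k nearest, skipping itself
    j = rng.choice(neighbours)
    lam = rng.random()                          # interpolation factor in [0, 1)
    return X[i] + lam * (X[j] - X[i])

synthetic = smote_sample(minority)
print(synthetic.shape)  # (8,)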
Text classification, a part of NLP, is the grouping of text objects based on characteristics that show similarities between one document and another. One of the methods used in text classification is LSTM. The performance of the LSTM method is influenced by several factors, such as the dataset, the architecture, and the tools used to classify text. In this study, the researchers analyse the effect of the number of layers in the LSTM architecture on the performance of the LSTM method. The research uses the IMDB movie reviews dataset with a total of 50,000 reviews, consisting of positive and negative reviews as well as data that does not yet have a label. The IMDB movie reviews go through the following stages: data collection, data pre-processing, conversion to numerical format, text embedding using the pre-trained word embedding model FastText, training and testing the classification model using LSTM, and finally validating and testing the model to obtain the results of this research. The results of this study show that the one-layer LSTM architecture has the best accuracy compared to the two-layer and three-layer LSTMs, with training and testing accuracies of 0.856 and 0.867. The training and testing accuracies of the two-layer LSTM are 0.846 and 0.854, while those of the three-layer LSTM are 0.848 and 0.864.
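To make the recurrence behind the compared architectures concrete, here is a single LSTM cell step in numpy. The study itself uses a framework-level LSTM; the weights below are random stand-ins and the sizes are arbitrary.

```python
import numpy as np

# Single LSTM cell forward step in numpy, to make the recurrence concrete.
# Weights are random stand-ins; a multi-layer LSTM feeds each layer's hidden
# state h into the next layer as its input x.
rng = np.random.default_rng(3)
D_in, D_h = 8, 4                                  # input and hidden sizes

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    z = W @ x + U @ h + b                         # all four gates at once
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c_new = f * c + i * g                         # cell state update
    h_new = o * np.tanh(c_new)                    # hidden state output
    return h_new, c_new

W = rng.normal(size=(4 * D_h, D_in)) * 0.1
U = rng.normal(size=(4 * D_h, D_h)) * 0.1
b = np.zeros(4 * D_h)
h, c = lstm_step(rng.normal(size=D_in), np.zeros(D_h), np.zeros(D_h), W, U, b)
print(h.shape, c.shape)  # (4,) (4,)
```

Stacking layers means repeating this step with each layer's h as the next layer's x, which is exactly the one-, two-, and three-layer variation the study compares.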