The paper presents the first sentiment-annotated lexicon of the Bosnian language. The annotation process and methodology are described along with a usability study that concentrates on language coverage. The starting word base was composed by translating the Slovenian annotated lexicon and then manually checking the translations and annotations. Language coverage was measured against two reference corpora. Bosnian is still considered a low-resource language: a reference corpus of automatically crawled web pages is available, but the authors could not source any corpora with a clear time frame for the texts they contain. A corpus of contemporary texts was therefore constructed by collecting news articles from several Bosnian web portals. Two language coverage methods were used in the experiment. The first used a frequency list of all words extracted from the two reference Bosnian corpora, while the second did not use frequencies as the main counting factor. The first method yielded a coverage of 19.24% for the first corpus and 28.05% for the second; the second method yielded 2.34% for the first corpus and 6.98% for the second. The resulting language coverage is comparable to the state of the art in the field, and the usability of the lexicon had already been demonstrated in a Twitter-based comparison.
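One plausible reading of the two coverage measures is token coverage (frequency-weighted) versus type coverage (each distinct word counted once). The sketch below illustrates that reading only; the function and variable names are illustrative and are not taken from the paper.

```python
from collections import Counter

def coverage(lexicon_words, corpus_tokens):
    """Return (token_coverage, type_coverage) of a lexicon over a corpus."""
    freq = Counter(corpus_tokens)                    # frequency list of corpus words
    total_tokens = sum(freq.values())
    total_types = len(freq)

    covered_tokens = sum(n for w, n in freq.items() if w in lexicon_words)
    covered_types = sum(1 for w in freq if w in lexicon_words)

    token_coverage = covered_tokens / total_tokens   # method 1: weight words by frequency
    type_coverage = covered_types / total_types      # method 2: count each distinct word once
    return token_coverage, type_coverage
```

Under this reading, a lexicon that covers frequent everyday words scores much higher on the frequency-weighted measure than on the type-based one, which matches the gap between the reported figures.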
The Bosnian language holds significant importance as a member of the Western South Slavic subgroup within the Slavic branch of the Indo-European language family. With approximately 2.5 million speakers in Europe, including 1.87 million in Bosnia and Herzegovina alone, Bosnian is the mother tongue of a considerable portion of the population.
In Natural Language Processing (NLP) tasks for the Bosnian language, it is important to consider the influence of linguistic elements beyond stopword removal. Bosnian text contains words that act as diminishers, relative intensifiers, minimizers, maximizers, boosters, and approximators. These words contribute to the overall meaning of the text and to its sentiment. By including these elements in NLP models and algorithms, researchers can achieve a more accurate and nuanced analysis of Bosnian language data, enhancing the effectiveness of NLP applications.
Two lists of sentiment-annotated words, which form the core of the Bosnian sentiment-annotated lexicon, a list of stopwords, and a list of Affirmative and non-Affirmative words (AnAwords) composed mostly of intensifiers and diminishers, were used to construct a dataset that serves as the basis for sentiment analysis in the Bosnian language.
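A minimal sketch of how such lists might be combined for lexicon-based scoring is shown below. The example word sets and the booster/diminisher weights are purely illustrative assumptions; the actual lexicon, stopword list, and AnAword annotations are not reproduced here.

```python
# Illustrative word lists (not the published resources).
positive = {"dobar", "lijep"}              # example positive entries ("good", "beautiful")
negative = {"loš", "ružan"}                # example negative entries ("bad", "ugly")
stopwords = {"i", "je", "u"}               # example Bosnian stopwords
ana_weights = {"veoma": 1.5, "malo": 0.5}  # AnA words: boosters > 1, diminishers < 1

def score(tokens):
    """Sum lexicon polarities, scaling each hit by a preceding AnA word."""
    total, modifier = 0.0, 1.0
    for tok in tokens:
        if tok in stopwords:
            continue
        if tok in ana_weights:             # remember intensifier/diminisher for the next word
            modifier = ana_weights[tok]
            continue
        if tok in positive:
            total += 1.0 * modifier
        elif tok in negative:
            total -= 1.0 * modifier
        modifier = 1.0                     # reset after applying once
    return total

print(score("film je veoma dobar".split()))   # -> 1.5
```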
Nowadays, several news portals, government websites, and social media sites are generating a massive amount of digitized Hindi textual information. Stopword removal is a significant factor in text mining tasks that helps the miner enhance the performance of a system. This paper attempts to construct corpus-specific stopword lists for Hindi text documents using statistical and knowledge-based methods. To prepare the stopword list, the proposed method considers the rankings of words given by different methods and then normalizes the outcomes of these methods using a vote-ranking method based on social choice theory. Further, we propose an evaluation method for the prepared stopword lists and investigate their behavior using text mining models. We also compare our prepared stopword lists with the baselines and conclude that the technique that fetches the best features does not necessarily identify the candidate stopwords. To the best of our knowledge, the proposed approach guarantees the removal of candidate stopwords and has the least information dissipation.
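As an illustration of the rank-aggregation idea, the following sketch combines several statistical rankings of candidate stopwords with a Borda-count style vote. The three statistics used here (term frequency, document frequency, distribution entropy) are stand-ins chosen for the example and are not necessarily the paper's methods.

```python
import math
from collections import Counter, defaultdict

def rank_desc(scores):
    """Order words from most to least stopword-like under one scoring method."""
    return [w for w, _ in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)]

def aggregate_stopwords(documents, top_k=50):
    docs = [d.split() for d in documents]
    tf = Counter(w for d in docs for w in d)           # overall term frequency
    df = Counter(w for d in docs for w in set(d))      # document frequency

    # Entropy of each word's distribution over documents: evenly spread words
    # (typical of stopwords) get high entropy.
    per_doc = defaultdict(lambda: defaultdict(int))
    for i, d in enumerate(docs):
        for w in d:
            per_doc[w][i] += 1
    entropy = {}
    for w, counts in per_doc.items():
        total = sum(counts.values())
        entropy[w] = -sum((c / total) * math.log(c / total) for c in counts.values())

    rankings = [rank_desc(tf), rank_desc(df), rank_desc(entropy)]

    borda = defaultdict(int)
    for ranking in rankings:                           # Borda count: higher rank, more points
        n = len(ranking)
        for pos, w in enumerate(ranking):
            borda[w] += n - pos
    ranked = sorted(borda.items(), key=lambda kv: kv[1], reverse=True)
    return [w for w, _ in ranked[:top_k]]
```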
A preliminary preprocessing step in text analytics is the removal of words with no semantic meaning, otherwise known as stopwords. English stopword lists are easily accessible and well established owing to the broad usage of the English language. However, a standard list of Hindi stopwords is still missing. This paper proposes an exhaustive list of generic Hindi stopwords and a Python package for easy distribution and usage. The methodology uses a dual mechanism for creating the list of Hindi stopwords. First, well-known English stopwords are collected and translated into meaningful Hindi words (group 1). Second, unique Hindi stopwords from multiple sources are fetched (group 2). Finally, the respective Hindi stopwords from groups 1 and 2 are combined, resulting in a significantly large set of 820 Hindi stopwords. Additionally, the list of Hindi stopwords is made openly available at the Python Package Index (PyPI) repository as a Python package named LiHiSTO. With the help of illustrative implementations, it is shown that LiHiSTO provides abstract and easy access to the list of stopwords for users to perform Hindi text analytics.
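A hedged sketch of the dual mechanism follows: group 1 holds Hindi translations of common English stopwords, group 2 holds unique Hindi stopwords collected from other sources, and the final list is their deduplicated union. The example entries are placeholders, not the actual 820-word LiHiSTO list or its build code.

```python
group1 = {"और", "का", "के", "में"}   # e.g. translations of English stopwords "and", "of", "in"
group2 = {"है", "तथा", "में"}        # e.g. words gathered from existing Hindi stopword sources

hindi_stopwords = sorted(group1 | group2)   # set union removes duplicates across the two groups
print(len(hindi_stopwords), hindi_stopwords)
```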
The tedious challenge of Big Data is to store and retrieve the required data through search engines.
Problem Defined
Many organizations require quick and efficient retrieval of useful information. The elementary idea is to arrange an organization's computer files into individual folders in a hierarchical folder structure. To order these files into folders manually, one needs to know the contents and names of the files well enough to form an impression of them, so that certain sets of files can be grouped together.
Problem Statement
Manual grouping of files has its own complications, for example when the files are very numerous or when their contents cannot be distinguished from their labels. There is therefore a strong need for document clustering with data processing machines to obtain reliable results.
Existing System
A number of analysts have come forward with new algorithms and comprehensive comparisons of existing algorithms, but these efforts have so far been restricted to organizations and colleges. After the recently updated rules of Non-negative Matrix Factorization (NMF), interest in document clustering was renewed. These rules proved trustworthy in their performance, with better results than Latent Semantic Indexing with Singular Value Decomposition.
Proposed System
A new working model called Novel K-means Non-Negative Matrix Factorization (KNMF) is implemented using the updated rules of NMF and applied to document clustering. The Newsgroup20 data set is used for exploratory purposes. The preprocessing step supporting KNMF removes common clutter/stop words using keywords from a Key Phrase Extraction Algorithm and applies a newly proposed Iterated Lovins stemming. Compared to the Porter and Lovins stemmer algorithms, the Iterative Lovins algorithm provides about 5% more reduction; 60% of the document terms are reduced to their roots, as the remaining terms are already root words. Finally, an application of these processes, named "Progressive Text Mining Radical", is developed alongside the K-Means algorithm from the Apache Mahout project, which is used to analyze the performance of the MapReduce framework in Hadoop.
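As a rough illustration of the underlying technique, the following sketch clusters documents by factorizing a TF-IDF matrix with NMF and assigning each document to its dominant component, assuming scikit-learn is available. It is not the authors' KNMF or MapReduce implementation, and the built-in English stopword removal stands in for the keyphrase-based clutter removal and Iterated Lovins stemming described above.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# A small slice of the 20 Newsgroups corpus as a stand-in for Newsgroup20.
docs = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes")).data[:500]

# Preprocessing: TF-IDF with generic stopword removal.
tfidf = TfidfVectorizer(stop_words="english", max_features=5000)
X = tfidf.fit_transform(docs)

# Factorize and assign each document to its strongest component.
nmf = NMF(n_components=20, init="nndsvd", random_state=0, max_iter=400)
W = nmf.fit_transform(X)      # document-by-topic weights
labels = W.argmax(axis=1)     # cluster label = dominant component per document

print(labels[:10])
```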
While the use of patent mapping tools is growing, the ‘black-box’ systems involved do not generally allow the user to interfere further than the preliminary retrieval of documents. Except, that is, for one thing: the stopword list, i.e. the list of ‘noise’ words to be ignored, which can be modified to one’s liking and dramatically impacts the final output and analysis. This paper invokes information science and computer science to provide clues for a better understanding of the stopword lists’ origin and purpose, and how they fit in the mapping algorithm. Further, it stresses the need for stopword lists that depend on the document corpus analyzed. Thus, the analyst is invited to add and remove stopwords—or even, in order to avoid inherent biases, to use algorithms that can automatically create ad hoc stopword lists.
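One simple way to derive such an ad hoc, corpus-dependent stopword list is to treat words that occur in a very large share of the documents as noise. The sketch below does this with a document-frequency threshold; the threshold value and the toy corpus are chosen purely for illustration.

```python
from collections import Counter

def corpus_stopwords(documents, df_threshold=0.8):
    """Return words whose document frequency meets or exceeds the threshold."""
    docs = [set(d.lower().split()) for d in documents]
    df = Counter(w for d in docs for w in d)
    n = len(docs)
    return {w for w, c in df.items() if c / n >= df_threshold}

# Toy patent-like corpus: generic claim vocabulary surfaces as corpus-specific noise.
patents = [
    "a method and apparatus for data storage",
    "an apparatus and system for data retrieval",
    "a method and system for signal processing",
]
print(corpus_stopwords(patents, df_threshold=0.66))
```

In a real patent corpus, generic terms such as "method", "apparatus", or "system" would surface this way, which is exactly the kind of corpus-dependent noise the paper argues should not be left to a fixed, generic stopword list.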