Blockmodeling refers to a variety of statistical methods for reducing and simplifying large and complex networks. While methods for blockmodeling networks observed at one time point are well established, researchers have only recently proposed several methods for analysing dynamic networks (i.e., networks observed at multiple time points). The considered approaches are based on k-means or stochastic blockmodeling, with different ways of modeling time dependency among time points. Because these methods are new, they have yet to be extensively compared and evaluated; this paper therefore compares and evaluates them using Monte Carlo simulations. Different network characteristics are considered, including whether tie formation is random or governed by local network mechanisms. The results show that the Dynamic Stochastic Blockmodel (Matias and Miele 2017) performs best if the blockmodel does not change over time; otherwise, the Stochastic Blockmodel for Multipartite Networks (Bar-Hen et al. 2020) does.
• We consider dynamic networks, i.e., networks measured at multiple time points.
• We show which blockmodeling approach is preferred in different conditions.
• Blockmodel stability should be considered when selecting a blockmodeling approach.
• A larger network and greater differences among block densities produce better results.
• A k-means-based algorithm for one-mode clustering of one-mode networks is proposed.
• A k-means-based algorithm for linked/multilevel networks is proposed.
• The k-means-based algorithm is much faster than generalized blockmodeling (GB).
• Simulations show that it is superior to GB for larger networks and never much worse.
• The algorithm is applied to a dynamic (two time points) multilevel network.
The paper presents a k-means-based algorithm for blockmodeling linked networks, defined as collections of one-mode and two-mode networks in which units from different one-mode networks are connected through two-mode networks. A faster algorithm is needed because blockmodeling linked networks must scale to larger networks. Examples of linked networks include multilevel networks, dynamic networks, dynamic multilevel networks, and meta-networks. Generalized blockmodeling has been developed for linked/multilevel networks, yet it is too slow for analyzing larger networks. The flexibility of generalized blockmodeling is therefore sacrificed for the speed of k-means-based approaches, allowing the analysis of larger networks. The presented algorithm is based on the two-mode k-means (or KL-means) algorithm for two-mode networks or matrices. As a by-product, an algorithm for one-mode blockmodeling of one-mode networks is presented. The algorithm's use on a dynamic multilevel network with more than 400 units is demonstrated. A simulation study also shows that k-means-based algorithms are superior to relocation algorithm-based methods for larger networks (e.g., larger than 800 units) and never much worse.
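To illustrate the general idea behind two-mode k-means (KL-means) blockmodeling (a sketch of the generic scheme, not the paper's actual implementation; the function name and defaults are hypothetical), the following alternately reassigns row and column clusters so as to reduce the total squared deviation of tie values from their block means:

```python
import numpy as np

def two_mode_kmeans(X, k_row, k_col, n_iter=100, seed=0):
    """Sketch of two-mode k-means blockmodeling: alternate between
    computing block means for the current two-mode partition and
    reassigning rows/columns to their best-fitting clusters."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    r = rng.integers(k_row, size=n)   # row cluster labels
    c = rng.integers(k_col, size=m)   # column cluster labels
    for _ in range(n_iter):
        # Block means for the current partition (0 for empty blocks).
        M = np.zeros((k_row, k_col))
        for i in range(k_row):
            for j in range(k_col):
                block = X[np.ix_(r == i, c == j)]
                if block.size:
                    M[i, j] = block.mean()
        # Reassign rows, then columns, to the cluster minimizing
        # the squared error against the current block means.
        r_new = ((X[:, None, :] - M[:, c][None]) ** 2).sum(2).argmin(1)
        c_new = ((X[:, :, None] - M[r_new][:, None, :]) ** 2).sum(0).argmin(1)
        if np.array_equal(r_new, r) and np.array_equal(c_new, c):
            break  # partition stable: converged
        r, c = r_new, c_new
    return r, c, M
```

Like one-mode k-means, this alternating scheme only guarantees a local optimum, which is why the literature pairs it with multistart or metaheuristic strategies.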
Decomposition‐based solution algorithms for optimization problems depend on the underlying latent block structure of the problem. Methods for detecting this structure are currently lacking. In this article, we propose stochastic blockmodeling (SBM) as a systematic framework for learning the underlying block structure in generic optimization problems. SBM is a generative graph model in which nodes belong to some blocks and the interconnections among the nodes are stochastically dependent on their block affiliations. Hence, through parametric statistical inference, the interconnection patterns underlying optimization problems can be estimated. For benchmark optimization problems, we show that SBM can reveal the underlying block structure and that the estimated blocks can be used as the basis for decomposition‐based solution algorithms which can reach an optimum or bound estimates in reduced computational time. Finally, we present a general software platform for automated block structure detection and decomposition‐based solution following distributed and hierarchical optimization approaches.
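The generative process behind an SBM can be sketched in a few lines: each node gets a block label, and ties are drawn independently with probabilities given by a block connection matrix. The block sizes and probabilities below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: three blocks and a connection matrix P,
# where P[k, l] is the probability of a tie from a node in block k
# to a node in block l.
sizes = [20, 30, 50]
P = np.array([[0.6, 0.1, 0.1],
              [0.1, 0.5, 0.1],
              [0.1, 0.1, 0.4]])

# Block membership vector: node i belongs to block z[i].
z = np.repeat(np.arange(len(sizes)), sizes)

# Sample a directed adjacency matrix: A[i, j] ~ Bernoulli(P[z[i], z[j]]).
n = len(z)
A = (rng.random((n, n)) < P[z][:, z]).astype(int)
np.fill_diagonal(A, 0)  # no self-loops
```

Inference reverses this process: given only A, the block labels z and the matrix P are estimated, and the recovered blocks can then seed a decomposition of the original problem.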
Variational methods for parameter estimation are an active research area, potentially offering computationally tractable heuristics with theoretical performance bounds. We build on recent work that applies such methods to network data, and establish asymptotic normality rates for parameter estimates of stochastic blockmodel data, by either maximum likelihood or variational estimation. The result also applies to various sub-models of the stochastic blockmodel found in the literature.
This study reviews the presence of articles related to Central and Eastern Europe (CEE) in Web of Science (WOS). Bibliometric analysis first reveals the trends of CEE-related articles in the areas of international business (IB), management and economics up to 2016. The results show steady growth in absolute and relative numbers after 1990, intensifying since 2010. Second, we conduct topic research using network analysis with blockmodeling. We identify a network of topics and their interrelations over time and use them to periodise the CEE-related research in IB. The most-cited CEE-related IB articles and the main citation path are also presented. The analysis adds to the discussion of how the CEE region is explored in IB research, its contributions, impacts and the challenges facing regional research in the future. In this study, a methodology and framework for performing a comprehensive bibliometric analysis of regional IB research is applied.
Changes in patterns of collaboration between Russian universities after the commencement of the Russian university excellence initiative (Project 5-100) are studied in this paper. While this project aimed to make leading Russian universities more globally competitive and improve their research productivity, it also increased cooperation among them. An analysis of affiliations and co-authorship networks was conducted to explore scientific collaborations between and within the participating universities. Such analysis facilitates the investigation of the number of collaborations with other organizations, both domestic and international cooperation, and disciplinary differences. By analyzing the co-authorship networks, the position of universities in the academic network and the structure of collaborations among the participants were examined. A sample of 30 Russian universities, including participants in Project 5-100 and a control group of institutions with similar characteristics, was used. After joining the project, the participating universities increased their cooperation both with each other and with foreign universities and research institutions of the Russian Academy of Sciences, especially in the high-quality segment. At the same time, the collaboration patterns of non-participating universities did not change significantly. The centrality of Project 5-100 universities in the global academic network has increased, along with their visibility and coupling in the national network. The historical division between the university and academic sectors has diminished, while the participating universities have started to play a more important role in knowledge production within the country.
In order to facilitate the processing of the vast amounts of data that emerge in the study of fuzzy social networks, scholars have developed various procedures for reducing these networks. Some procedures use regular and structural fuzzy relations to reduce such networks. In this article, we generalize the notions of regular and structural fuzzy relations to obtain even better reductions of fuzzy networks. More precisely, for a fuzzy social network given by the set of social entities and the family of fuzzy relations between them, we define μ-approximate regular fuzzy relations, where μ is the degree taken from the underlying set of truth values, which is a complete Heyting algebra. Using these specific fuzzy relations, we show that it is possible to reduce a fuzzy social network in some cases when previously developed algorithms fail to reduce it. We investigate the properties of μ-approximate regular fuzzy relations. We show that the blockmodel of a fuzzy social network, which is its reduced fuzzy social network built via a μ-approximate regular fuzzy preorder, retains specific structure-preserving properties. We give a method for calculating the greatest μ-approximate regular fuzzy relation on a given fuzzy social network. For fuzzy social networks defined over the real unit interval [0,1], we give a procedure that determines all subintervals of [0,1] that share the greatest μ-approximate regular fuzzy relation. Analogous results are provided for μ-approximate structural fuzzy relations.
Blockmodeling linked networks aims to simultaneously cluster two or more sets of units into clusters based on a network where ties are possible both between units from the same set and between units of different sets. While this has already been developed for generalized and k-means blockmodeling, our approach is based on the well-known stochastic blockmodeling technique, utilizing a mixture model. Estimation is performed using the CEM algorithm, which iteratively estimates the parameters by maximizing a suitable likelihood function and reclusters the units according to the parameters. The steps are repeated until the likelihood function ceases to improve.
A key drawback of the basic algorithm is that it treats all units equally, consequently yielding higher influence to larger parts of the data. The greater size, however, does not necessarily imply higher importance. To mitigate this asymmetry, we propose a solution where underrepresented parts of the data are given more influence through an appropriate weighting. This idea leads to the so-called weighted likelihood approach, where the ordinary likelihood function is replaced by a weighted likelihood.
The efficiency of the different approaches is tested via simulations, which show that the weighted likelihood approach performs better for larger networks and a clearer blockmodel structure, especially when the one-mode blockmodels within the smaller sets are clearer.
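A minimal sketch of the CEM iteration for a single one-mode binary network may help fix ideas; the linked-network and weighted-likelihood variants would sum (weighted) log-likelihood terms of this kind over several subnetworks. This illustrates the general scheme only, not the authors' implementation, and the function name and defaults are hypothetical:

```python
import numpy as np

def sbm_cem(A, k, n_iter=50, seed=0, eps=1e-6):
    """Sketch of CEM for a one-mode binary SBM: alternate between
    (M-step) estimating block tie probabilities from the current
    partition and (C-step) greedily reassigning each node to the
    block that maximizes its log-likelihood contribution."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    z = rng.integers(k, size=n)       # initial random partition
    idx = np.arange(n)
    for _ in range(n_iter):
        # M-step: block tie probabilities (clipped away from 0 and 1).
        P = np.full((k, k), 0.5)
        for a in range(k):
            for b in range(k):
                block = A[np.ix_(z == a, z == b)]
                if block.size:
                    P[a, b] = block.mean()
        P = np.clip(P, eps, 1 - eps)
        # C-step: sequential greedy reassignment of each node.
        z_new = z.copy()
        for i in range(n):
            mask = idx != i           # exclude the diagonal (self-ties)
            zo = z_new[mask]
            best, best_ll = z_new[i], -np.inf
            for a in range(k):
                ll = (A[i, mask] * np.log(P[a, zo])
                      + (1 - A[i, mask]) * np.log(1 - P[a, zo])).sum()
                ll += (A[mask, i] * np.log(P[zo, a])
                       + (1 - A[mask, i]) * np.log(1 - P[zo, a])).sum()
                if ll > best_ll:
                    best, best_ll = a, ll
            z_new[i] = best
        if np.array_equal(z_new, z):
            break                     # classification stable: converged
        z = z_new
    return z, P
```

In the weighted variant described above, each subnetwork's log-likelihood term would be multiplied by a weight chosen to offset its size, so that smaller sets of units are not dominated by larger ones.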
• Linked networks contain two or more sets of units and subnetworks.
• Subnetworks contain ties among the units of one set or between units of two sets.
• Examples of linked networks are also dynamic networks and multilevel networks.
• Blockmodeling linked networks jointly partitions all sets of units.
• A stochastic blockmodeling approach is utilized to blockmodel linked networks.
• Weighted likelihood is used to balance the impact of different subnetworks.
• A new real-coded genetic algorithm for two-mode KL-means partitioning.
• A simulation-based comparison to two-mode KL-means clustering.
• A heterogeneity blockmodeling application to data from the Turning Point Project.
• A linkage of clustering literature from different disciplines.
The two-mode KL-means partitioning (TMKLMP) problem has a number of important applications in the social and physical sciences. For example, the intra-block variability measure associated with TMKLMP underscores its direct relevance to two-mode homogeneity blockmodeling of binary and real-valued social networks. We present a real-coded genetic algorithm for obtaining TMKLMP solutions. A simulation study showed that the new algorithm compares favorably to a multistart implementation of a two-mode KL-means heuristic, which is recognized as a top-performing method for TMKLMP. The merit of the proposed method is demonstrated via an application to the blockmodeling of social network data associated with signing of environmental advertisements in the New York Times as a part of the Turning Point Project.
Unstructured data, mainly text in project documents and final evaluation or summary reports, is a container of tacit knowledge. We present an exploratory case study that uses semantic network analysis to help capture and formalize elements of this knowledge. Text from project documents and chats by project participants over an interactive BIM platform was collected and arranged in the form of concept networks. We then drew on the rich literature in network science to formally study these networks. We illustrate the proposed approach by analyzing five concept networks. The case study illustrates the benefits of the approach, mainly the development of a project-specific map of key concepts. Network measures such as centrality identified the project's key issues and how they relate. Clustering measures identified possible knowledge constructs (interrelated concepts). Measures quantifying the overall structure of the network can also be used to contrast projects.
• Presented contents of unstructured data (documents) in the form of a semantic network by manual, semi-manual, and automated means.
• Showcased the use of network analysis to capture and analyze knowledge contained in unstructured data.
• Used a blockmodeling technique to cluster the network and discover possible knowledge constructs.
• Evaluated the proposed approach through three case projects, a questionnaire and a focus group.