Subject of study: technologies implemented in modern video coding algorithms to ensure the appropriate level of reliability under compact presentation. The goal is to develop a technology for transforming the alphabet of video information based on a quantitative criterion while ensuring the required quality in networks. Objectives: to formulate requirements for video images in dynamic video surveillance systems; to analyze the factors leading to an imbalance between the compression and quality characteristics of existing video coding algorithms; to develop a technology for transforming the alphabet of video information based on a quantitative criterion (attribute) for the best presentation of the encoded data; to develop a mathematical model for the formation of a quantitative indicator for the transformation of video images; to analyze the effectiveness of using the developed mathematical model for the formation of a quantitative indicator to provide the required trustworthiness of data for the video information resource; to assess the effectiveness of the developed technology for transforming the original message in terms of a quantitative indicator to ensure the best presentation of the encoded data; to investigate the dynamics of the probabilistic and statistical characteristics of the original message as a result of transformation according to the quantitative criterion of the significance of its elements. Research methods: compression coding methods implemented on the basis of the JPEG algorithms. Research results: a new approach has been proposed, based on the transformation of the encoded alphabet of data by use of a quantitative criterion; a mathematical model has been developed for the formation of a quantitative attribute that determines the significance of the elements of the original message. Conclusions.
A technology has been developed for transforming the alphabet of the original message, which creates conditions for a more advantageous presentation of the encoded data due to a significant increase in the dynamic range of the probabilistic and statistical characteristics of the transformed message while ensuring the required level of video image quality.
Triangular arrowheads are overwhelmingly the dominant projectile point form across eastern North America from 600 to 1600 CE. Although triangular points have been studied less than earlier technologies, important research has been conducted over the last 25 years on their morphology, function, and temporal relationships. One important observation from these works is that there is noticeable variability within the triangular form both between and within regions. However, this variability has not been studied extensively by quantitative means. In this research, we examine a collection of 199 points from two Piedmont Siouan sites in the upper Yadkin River valley dating to 1300-1600 CE. We analyzed seven discrete attributes using discriminant function analysis and found quantitative support for the contemporaneous existence of the three forms and evidence of changes in morphology over time. We follow this with an examination of the context and breakage patterns of these types to discuss their roles in social, political, and economic activities. We then compare our results to those from other areas of eastern North America to address why such variability and changes over time may have occurred.
Nowadays, access control plays an important role in managing access to system resources. Almost none of the current attribute-based access control (ABAC) models meet all of the operational needs in access decisions, although operational need will certainly play an important role in future access control models. We believe that meeting the operational need of a mission requires ranking the attributes that appear in access control policies. In this paper we propose a new approach that enhances ABAC: it creates a quantitative capability, which we name quantitative ABAC, based on decision fusion. We determine the attributes that play the most important role in enterprise access management. The experts then examine and prioritize the attributes to determine which are more important than others. In other words, we derive weights expressing the importance of access control attributes from decision-makers' viewpoints by applying ordered weighted averaging to the proposed prioritization. As a result of this research, if there are correct values with high ranking among the N parameters in the attributes, the permission may be granted. The case study shows that decision fusion can help solve some challenges of risk-adaptable access control models. This enables policy-makers to manage and control system resources more accurately and flexibly in integrated and complex systems. The results of this study could be useful in integrated environments such as C4I systems.
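As a minimal illustration of the ordered weighted averaging (OWA) step described above, the sketch below fuses several experts' importance scores for each access-control attribute into a single weight. The attribute names, scores, and OWA weight vector are hypothetical, not taken from the paper.

```python
def owa(values, weights):
    """Ordered weighted average: sort the scores in descending order,
    then take the weighted sum with the fixed OWA weight vector."""
    assert abs(sum(weights) - 1.0) < 1e-9, "OWA weights must sum to 1"
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Three experts score the importance of each attribute on [0, 1]
# (illustrative values only).
expert_scores = {
    "role":     [0.9, 0.8, 0.7],
    "location": [0.4, 0.6, 0.3],
    "time":     [0.2, 0.5, 0.1],
}

# An "or-like" weight vector that emphasizes the highest scores.
weights = [0.5, 0.3, 0.2]

attribute_weight = {a: owa(s, weights) for a, s in expert_scores.items()}
print(attribute_weight)
```

Different OWA weight vectors interpolate between the max, mean, and min of the expert scores, which is how the fusion can be tuned toward optimistic or conservative rankings.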
In this study, we propose a multiplicative unrelated quantitative attribute randomized response model that newly adds an unrelated quantitative variable to the multiplicative model of Bar-Lev et al. (2004), which consists of a sensitive variable and a scrambled variable. We establish a theoretical framework for estimating the sensitive quantitative attribute both when information about the unrelated quantitative variable is known and when it is unknown. We also examine the relationship between the proposed model and the existing multiplicative models, namely the Eichhorn-Hayre model, the Bar-Lev et al. model, and the Gjestvang-Singh model, and compare its efficiency with that of the Bar-Lev et al. model. As a result, we confirm that the existing multiplicative models are special cases of the proposed model; a numerical efficiency comparison with the Bar-Lev et al. model shows that the proposed model becomes more efficient as C_x (= σ_x/μ_x) decreases and as C_z (= σ_z/μ_z) increases. The proposed model is also more efficient as p_1 = p grows, and more efficient when μ_z = 0.5 than when μ_z = 1.
We augment an unrelated quantitative attribute to Bar-Lev et al.'s model (2004), which is composed of a sensitive quantitative variable and a scrambled one, to present a multiplicative unrelated quantitative randomized response model (MUQ RRM). We also establish theoretical grounds to estimate the sensitive quantitative attribute according to whether the unrelated quantitative attribute is known or unknown. Finally, we explore the relationship among the suggested model, the Eichhorn-Hayre model, Bar-Lev et al.'s model, and Gjestvang-Singh's model, and compare the efficiency of our model with Bar-Lev et al.'s model.
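To make the scrambled-response mechanism concrete, the simulation below sketches the basic multiplicative randomized response idea reported in the abstract: each respondent reports the true value X with probability p and a scrambled value X·S otherwise, and the population mean is recovered by a method-of-moments correction E[Z] = E[X](p + (1-p)E[S]). This is my own simplified illustration of the base mechanism, not the paper's full MUQ RRM.

```python
import random

def respond(x, p, scramble):
    """Report x with probability p, else report x times a scrambling draw."""
    return x if random.random() < p else x * scramble()

def estimate_mean(responses, p, mean_s):
    """Method-of-moments estimator of E[X], inverting
    E[Z] = E[X] * (p + (1 - p) * E[S])."""
    zbar = sum(responses) / len(responses)
    return zbar / (p + (1 - p) * mean_s)

random.seed(0)
p, mean_s = 0.7, 2.0  # design parameters (illustrative)
true_xs = [random.gauss(10, 2) for _ in range(20000)]  # true E[X] = 10
zs = [respond(x, p, lambda: random.gauss(mean_s, 0.5)) for x in true_xs]
print(estimate_mean(zs, p, mean_s))  # should be close to 10 for this seed
```

The scrambling protects individual responses (an observer sees only Z), while the known design parameters p and E[S] still allow an unbiased estimate of the population mean.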
In this paper, for highly sensitive social or personal surveys in which the target population consists of several strata and each stratum involves a quantitative attribute, we propose stratified additive quantitative attribute randomized response models that apply stratified random sampling, instead of simple random sampling, to the additive models of Himmelfarb-Edgell and Gjestvang-Singh. For the two proposed models, we establish a theoretical framework for estimating not only the population mean of the quantitative attribute in each stratum but also the overall population mean. We also address the proportional and optimal allocation problems for the two models and derive the variance expressions under each allocation. Finally, a comparison of the efficiency of the two stratified additive quantitative attribute randomized response models shows that the Gjestvang-Singh stratified additive model is more efficient than the Himmelfarb-Edgell stratified additive model; in particular, the smaller the value of α_h β_h, i.e., the closer the characteristics of the proposed model come to direct questioning, the greater the efficiency of the Gjestvang-Singh stratified additive model.
For a sensitive survey in which the population is composed of several strata with quantitative attributes, we present additive stratified quantitative attribute randomized response models that apply stratified random sampling, instead of simple random sampling, to Himmelfarb-Edgell's and Gjestvang-Singh's additive quantitative attribute models. We also establish theoretical grounds to estimate the stratum means of the sensitive quantitative attribute as well as the overall mean. We deal with the proportional and optimal allocation problems in each suggested model and compare the relative efficiency of the two suggested models; subsequently, Gjestvang-Singh's model proves more efficient than Himmelfarb-Edgell's model under stratified random sampling.
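Two of the standard stratified-sampling building blocks the abstract relies on, combining stratum means into an overall mean and allocating the sample proportionally, can be sketched as follows. This is a generic textbook illustration with made-up numbers, not the paper's randomized-response estimators.

```python
def overall_mean(stratum_sizes, stratum_means):
    """Weight each stratum mean estimate by its population share
    W_h = N_h / N to get the overall population mean estimate."""
    N = sum(stratum_sizes)
    return sum(n_h / N * m_h for n_h, m_h in zip(stratum_sizes, stratum_means))

def proportional_allocation(stratum_sizes, n):
    """Allocate a total sample of size n to strata in proportion
    to their population sizes."""
    N = sum(stratum_sizes)
    return [round(n * n_h / N) for n_h in stratum_sizes]

# Three strata of sizes 500, 300, 200 with estimated means 4, 6, 10.
print(overall_mean([500, 300, 200], [4.0, 6.0, 10.0]))   # population-weighted mean
print(proportional_allocation([500, 300, 200], 100))     # [50, 30, 20]
```

In the paper's setting, each stratum mean would itself be estimated from randomized responses within that stratum; the weighting step above is unchanged.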
We suggest a new framework for classification rule mining in quantitative data sets founded on Bayes theory, without univariate preprocessing of attributes. We introduce a space of rule models and a prior distribution defined on this model space. As a result, we obtain the definition of a parameter-free criterion for classification rules. We show that the new criterion identifies interesting classification rules while being highly resilient to spurious patterns. We develop a new parameter-free algorithm to mine locally optimal classification rules efficiently. The mined rules are directly used as new features in a classification process based on a selective naive Bayes classifier. The resulting classifier demonstrates higher inductive performance than state-of-the-art rule-based classifiers.
Contrast sets have been shown to be a useful mechanism for describing differences between groups. A contrast set is a conjunction of attribute-value pairs that differ significantly in their distribution across groups. These groups are defined by a selected property that distinguishes one from the other (e.g., customers who default on their mortgage versus those who do not). In this paper, we propose a new search algorithm that uses a vertical approach for mining maximal contrast sets on categorical and quantitative data. We utilize a novel yet simple discretization technique, akin to simple binning, for continuous-valued attributes. Our experiments on real datasets demonstrate that our approach is more efficient than two previously proposed algorithms, and more effective in filtering interesting contrast sets.
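The "akin to simple binning" discretization mentioned above can be illustrated with a plain equal-width binning sketch; the bin count and sample values below are my own assumptions, since the abstract does not specify the exact scheme.

```python
def equal_width_bins(values, k):
    """Map each continuous value to one of k equal-width bin indices
    spanning [min(values), max(values)]."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k or 1.0  # guard against all-equal values
    # Clamp the maximum value into the last bin.
    return [min(int((v - lo) / width), k - 1) for v in values]

incomes = [12.0, 25.5, 31.0, 47.9, 58.2, 99.0]
print(equal_width_bins(incomes, 3))  # [0, 0, 0, 1, 1, 2]
```

After this step, each (attribute, bin) pair behaves like a categorical attribute-value pair, so the same contrast-set search can run over mixed categorical and quantitative data.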
Hyperclique patterns are groups of objects that are strongly related to each other. Indeed, the objects in a hyperclique pattern have a guaranteed level of global pairwise similarity to one another as measured by the uncentered Pearson correlation coefficient. Recent literature has provided approaches for discovering hyperclique patterns over data sets with binary attributes. In this paper, we introduce algorithms for mining maximal hyperclique patterns in large data sets containing quantitative attributes. An intuitive and simple solution is to partition quantitative attributes into binary attributes. However, there is potential information loss due to partitioning. Instead, our approach is based on a normalization scheme and can work directly on quantitative attributes. In addition, we adopt the algorithm structures of three popular association pattern mining algorithms and add a critical clique pruning technique. Finally, we compare the performance of these algorithms for finding quantitative maximal hyperclique patterns using some real-world data sets.
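The similarity measure named in the abstract, the uncentered Pearson correlation (also known as cosine similarity, since the vectors are not mean-centered), is easy to state directly; the sketch below is a generic definition, not the paper's mining algorithm.

```python
import math

def uncentered_pearson(x, y):
    """Uncentered Pearson correlation: sum(x_i * y_i) divided by
    the product of the vector norms, without subtracting the means."""
    num = sum(a * b for a, b in zip(x, y))
    den = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
    return num / den

print(uncentered_pearson([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # ~1 for proportional vectors
print(uncentered_pearson([1.0, 0.0], [0.0, 1.0]))            # 0 for orthogonal vectors
```

Because the measure is scale-invariant but not shift-invariant, a normalization scheme like the one the paper proposes matters when quantitative attributes live on different ranges.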
Detecting groups in networks is an interesting problem with applications in social and security analysis. Many large networks lack a global community organization. In these cases, traditional partitioning algorithms, which assume a global modular organization, fail to detect the hidden modular structure. We define a prototype for a simple local-first approach to community discovery, namely the democratic vote of each node for the communities in its ego neighborhood. We present a preliminary test of this intuition against state-of-the-art community discovery methods, and find that our new method outperforms them in the quality of the obtained groups, evaluated using metadata of two real-world networks. We also give an intuition of the incremental nature and the limited time complexity of the proposed algorithm.
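The "democratic vote" intuition can be caricatured in a few lines: each node proposes its ego neighborhood (itself plus its neighbors) as a candidate community, and heavily overlapping proposals are merged. This is a deliberately simplified toy, with an arbitrary overlap threshold, and not the paper's actual algorithm.

```python
def ego_communities(adj):
    """adj: dict mapping node -> set of neighbors. Each node votes for
    its ego network; candidates sharing most of their members merge.
    Returns a list of (possibly overlapping) communities."""
    candidates = [frozenset({v}) | frozenset(adj[v]) for v in adj]
    merged = []
    for c in candidates:
        for i, m in enumerate(merged):
            # Merge when the overlap covers most of the smaller set
            # (0.5 is an arbitrary illustrative threshold).
            if len(c & m) / min(len(c), len(m)) > 0.5:
                merged[i] = m | c
                break
        else:
            merged.append(c)
    return merged

# Two triangles (0-1-2 and 3-4-5) joined by a single bridge edge 2-5.
adj = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 5},
    3: {4, 5}, 4: {3, 5}, 5: {3, 4, 2},
}
print(ego_communities(adj))
```

The local character is visible here: each vote only inspects one node's neighborhood, which is what makes an incremental, bounded-complexity version of the idea plausible.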