Technological innovation is an important driving force for the high-quality development of the regional economy, and determining how to improve the endowment conditions and allocation efficiency of innovation factors is key to improving regional technological innovation performance. Considering the dynamic conditions of technological progress, we empirically study the endowment conditions and allocation efficiency of technological innovation factor inputs in the Yangtze River Delta region during 2000–2017, using a translog production function to explore the interactive mechanism between innovation factors and technological innovation performance. The results show the following: First, the three innovation factors—innovative human capital investment, research and development (R&D) fund investment, and fixed asset investment—contributed positively to technological innovation performance in the Yangtze River Delta region, but there is an obvious gap between Anhui Province and Jiangsu Province, Zhejiang Province, and Shanghai Municipality. Among the three factors, the contribution rate of R&D fund investment is relatively high, and technological innovation is dependent on it. Second, the biased technological progress of technological innovation factors in the Yangtze River Delta region shows a growth trend, and the allocation efficiency of all three innovation factors improved continuously. Ordered from high to low allocation efficiency, the provinces are Jiangsu Province, Shanghai Municipality, Zhejiang Province, and Anhui Province; Zhejiang Province showed the fastest improvement. Finally, the substitution elasticity coefficients of the three innovation factors in Jiangsu Province, Zhejiang Province, and Shanghai Municipality are greater than zero, and the proportional structure of technological innovation factor inputs matches regional technological progress.
Technological innovation is in the effective economic range, but fixed asset investment is relatively abundant, and there is room for improvement in the factor allocation structure. In Anhui Province, the substitution elasticity between innovative human capital investment and innovation R&D fund investment during 2000–2012 is less than zero, and the factor allocation structure keeps Anhui Province's technological innovation output in the uneconomic range, which is an internal factor hindering the province's technological innovation performance. The inputs and contribution rates of the innovation factors in the Yangtze River Delta region are not inconsistent with each other. This study shows that technological innovation performance depends not only on the input of innovation factors but is also affected by their allocation efficiency. To promote high-quality economic development in the Yangtze River Delta region, the government needs not only to maintain the input of innovation factors but also to pay more attention to improving their allocation efficiency.
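The output elasticities underlying the contribution-rate findings above come from a translog production function. The following is a minimal sketch of how such elasticities can be estimated, using hypothetical data and assumed coefficient names; it is not the study's actual specification or dataset.

```python
import numpy as np

# Hypothetical panel: log innovation output y and log inputs
# L (human capital), K (R&D funds), F (fixed assets).
rng = np.random.default_rng(0)
n = 200
L, K, F = rng.normal(size=(3, n)) + 5.0
y = 0.2 * L + 0.5 * K + 0.3 * F + 0.05 * L * K + rng.normal(scale=0.1, size=n)

# Translog regressors: logs, half-squared logs, and cross terms.
X = np.column_stack([
    np.ones(n), L, K, F,
    0.5 * L**2, 0.5 * K**2, 0.5 * F**2,
    L * K, L * F, K * F,
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Output elasticity of each factor, evaluated at the sample means:
# e_L = b_L + b_LL * L + b_LK * K + b_LF * F, etc.
Lm, Km, Fm = L.mean(), K.mean(), F.mean()
e_L = beta[1] + beta[4] * Lm + beta[7] * Km + beta[8] * Fm
e_K = beta[2] + beta[5] * Km + beta[7] * Lm + beta[9] * Fm
e_F = beta[3] + beta[6] * Fm + beta[8] * Lm + beta[9] * Km
print(e_L, e_K, e_F)
```

Unlike a Cobb-Douglas form, the translog lets elasticities vary with input levels, which is what allows the biased technological progress and substitution elasticities discussed above to be recovered.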
Fig. 3. Input factor substitution elasticity and its changes in the Yangtze River Delta Region.
As Fig. 3 shows, the combination of these input factors places the technological innovation output of the Yangtze River Delta region in the effective economic range. The elasticity of substitution tends toward 1 overall, indicating that the marginal output and factor input of the two input factors tend to change in the same proportion and that the input combination is closer to optimal when the other input factors are held constant.
•Innovation factors' effect on innovation performance in the Yangtze River Delta is measured.
•A translog production function explores the interactive mechanism between technological innovation factors and performance.
•Innovative human capital, R&D, and fixed asset investments contributed positively.
•Anhui has the lowest allocation efficiency and technological innovation.
•The government should pay more attention to improving the allocation efficiency of innovation factors.
In the activated sludge process, reducing the operational dissolved oxygen (DO) concentration can improve oxygen transfer efficiency, thereby reducing energy use. Low DO, however, may result in incomplete nitrification. This research investigated the long-term effect of low DO on the nitrification performance of activated sludge. Results indicated that, for reactors with 10 and 40 day solids retention times (SRTs), complete nitrification was accomplished after long-term operation at DO concentrations of 0.37 and 0.16 mg/L, respectively. Under long-term low DO conditions, nitrite oxidizing bacteria (NOB) became better oxygen competitors than ammonia oxidizing bacteria (AOB) and, as a result, no nitrite accumulated. Real-time PCR assays indicated that long-term low DO enriched both AOB and NOB in activated sludge, increasing the sludge nitrification capacity and diminishing the adverse effect of low DO on overall nitrification performance. The increase in the nitrifier population size likely resulted from a reduced nitrifier endogenous decay rate under low DO. Under long-term low DO conditions, Nitrosomonas europaea/eutropha remained the dominant AOB, whereas Nitrospira-like NOB became much more numerous than Nitrobacter-like NOB, especially in the 40 day SRT sludge. The enrichment and shift of the nitrifier community reduced the adverse effect of low DO on nitrification; therefore, low DO operation of a complete nitrification process is feasible.
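Oxygen competition between AOB and NOB is commonly described with Monod kinetics, where the organism with the lower oxygen half-saturation constant grows relatively faster at low DO. The sketch below illustrates this mechanism only; the growth-rate and half-saturation values are illustrative placeholders, not parameters reported in the study.

```python
# Monod oxygen-limitation term: mu = mu_max * DO / (K_O + DO).
def monod(do, mu_max, k_o):
    return mu_max * do / (k_o + do)

# Assumed kinetic parameters (per day, mg O2/L) for illustration.
mu_max_aob, k_o_aob = 0.8, 0.5
mu_max_nob, k_o_nob = 0.7, 0.2   # NOB assumed the better O2 competitor

# Compare specific growth rates at the study's low-DO setpoints
# (0.16 and 0.37 mg/L) and at a conventional high DO (2.0 mg/L).
for do in (0.16, 0.37, 2.0):
    r_aob = monod(do, mu_max_aob, k_o_aob)
    r_nob = monod(do, mu_max_nob, k_o_nob)
    print(f"DO={do}: AOB {r_aob:.3f}/d, NOB {r_nob:.3f}/d")
```

With these assumed constants, NOB outgrow AOB at the low DO setpoints even though their maximum growth rate is smaller, which is consistent with the observation that nitrite did not accumulate.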
HashNet: Deep Learning to Hash by Continuation Zhangjie Cao; Mingsheng Long; Jianmin Wang ...
2017 IEEE International Conference on Computer Vision (ICCV),
2017-Oct.
Conference Proceeding
Open Access
Learning to hash has been widely applied to approximate nearest neighbor search for large-scale multimedia retrieval, due to its computation efficiency and retrieval quality. Deep learning to hash, which improves retrieval quality by end-to-end representation learning and hash encoding, has received increasing attention recently. Subject to the ill-posed gradient difficulty in optimization with sign activations, existing deep learning to hash methods need to first learn continuous representations and then generate binary hash codes in a separate binarization step, and therefore suffer from a substantial loss of retrieval quality. This work presents HashNet, a novel deep architecture for deep learning to hash by a continuation method with convergence guarantees, which learns exactly binary hash codes from imbalanced similarity data. The key idea is to attack the ill-posed gradient problem in optimizing deep networks with non-smooth binary activations by a continuation method: we begin by learning an easier network with a smoothed activation function and let it evolve during training until it eventually becomes the original, difficult-to-optimize deep network with the sign activation function. Comprehensive empirical evidence shows that HashNet can generate exactly binary hash codes and yield state-of-the-art multimedia retrieval performance on standard benchmarks.
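The continuation idea can be illustrated with a scaled tanh, which smoothly approaches the sign activation as its scale grows. The sketch below shows the mechanism only; the schedule of scale values is an assumption, not the one used in the paper.

```python
import numpy as np

# Continuation for the sign activation: tanh(beta * x) -> sign(x)
# as beta grows. Training would start with a small beta (smooth,
# easy to optimize) and increase it on a schedule until the
# activation is effectively binary.
def smoothed_sign(x, beta):
    return np.tanh(beta * x)

x = np.linspace(-1.0, 1.0, 5)
for beta in (1.0, 10.0, 100.0):   # assumed illustrative schedule
    print(beta, np.round(smoothed_sign(x, beta), 3))
```

At small beta the activation has usable gradients everywhere; at large beta its outputs are numerically indistinguishable from exact binary codes, so no separate binarization step is needed.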
Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and the conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and the conditional distribution in a principled dimensionality reduction procedure, and to construct a new feature representation that is effective and robust under substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.
Two-stream convolutional networks have shown strong performance in video action recognition tasks. The key idea is to learn spatiotemporal features by fusing convolutional networks spatially and temporally. However, it remains unclear how to model the correlations between the spatial and temporal structures at multiple abstraction levels. First, the spatial stream tends to fail if two videos share similar backgrounds. Second, the temporal stream may be fooled if two actions resemble each other in short snippets, though they appear distinct over the long term. We propose a novel spatiotemporal pyramid network that fuses the spatial and temporal features in a pyramid structure such that they can reinforce each other. From the architecture perspective, our network comprises hierarchical fusion strategies that can be trained as a whole using a unified spatiotemporal loss. A series of ablation experiments supports the importance of each fusion strategy. From the technical perspective, we introduce the spatiotemporal compact bilinear operator into video analysis tasks. This operator enables efficient training of bilinear fusion operations that can capture full interactions between the spatial and temporal features. Our final network achieves state-of-the-art results on standard video datasets.
For efficiently retrieving nearest neighbors from large-scale multiview data, hashing methods, which can substantially improve query speeds, have recently been widely investigated. In this paper, we propose an effective probability-based semantics-preserving hashing (SePH) method to tackle the problem of cross-view retrieval. Considering the semantic consistency between views, SePH generates one unified hash code for all observed views of any instance. For training, SePH first transforms the given semantic affinities of the training data into a probability distribution and approximates it with another distribution in Hamming space by minimizing their Kullback-Leibler divergence. Specifically, the latter distribution is derived from all pairwise Hamming distances between the to-be-learnt hash codes of the training data. Then, with the learnt hash codes, any kind of predictive model, such as linear ridge regression, logistic regression, or kernel logistic regression, can be learnt as a hash function in each view for projecting the corresponding view-specific features into hash codes. As for out-of-sample extension, given any unseen instance, the learnt hash functions in its observed views can predict view-specific hash codes. Then, by deriving or estimating the corresponding output probabilities with respect to the predicted view-specific hash codes, a novel probabilistic approach is further proposed to determine a unified hash code. To evaluate the proposed SePH, we conduct extensive experiments on diverse benchmark datasets, and the experimental results demonstrate that SePH is reasonable and effective.
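The core of the training objective described above, matching a semantic-affinity distribution to a Hamming-space distribution via KL divergence, can be sketched as follows. The data, code values, and the distance-to-similarity mapping are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, bits = 6, 8
affinity = rng.random((n, n))              # hypothetical semantic affinities
codes = rng.choice([-1.0, 1.0], (n, bits))  # candidate binary hash codes

iu = np.triu_indices(n, k=1)               # distinct instance pairs only

# P: probability distribution derived from the semantic affinities.
p = affinity[iu] / affinity[iu].sum()

# Q: distribution derived from pairwise Hamming distances between codes
# (larger distance -> lower similarity; this mapping is an assumed form).
hamming = (bits - codes @ codes.T) / 2.0
sim = 1.0 / (1.0 + hamming[iu])
q = sim / sim.sum()

# KL(P || Q): the quantity SePH minimizes over the hash codes.
kl = np.sum(p * np.log(p / q))
print(kl)
```

In the actual method the codes are optimized to drive this divergence down, so that semantically similar pairs end up close in Hamming space.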
Land surface phenology (LSP) characterizes the timing and greenness of seasonal vegetation growth in satellite pixels and has been widely used to study associations with climate change. However, wildfire, which causes considerable land surface change, produces abrupt shifts in LSP magnitudes and strongly influences long-term LSP trends, effects that are poorly investigated. This study conducted the first systematic analysis of wildfire impacts on LSP by investigating 838 forest wildfires that occurred from 2002 to 2014 across the western United States. Specifically, we derived three LSP timing metrics, the start (SOS), end (EOS), and length (LOS) of the growing season, and two LSP greenness metrics, the seasonal greenness maximum (GMax) and minimum (GMin), from daily time series of 250-m MODIS two-band enhanced vegetation index (EVI2) during 2001–2015. Burned area and burn severity were obtained from the Monitoring Trends in Burn Severity project. The results showed that GMax and GMin decreased by 0.063 and 0.074 EVI2, respectively. LSP timings presented diverse responses to wildfire occurrence. An absolute abrupt shift of >2 days in SOS appeared in 73% of burned areas (40% advances and 33% delays), the shift in EOS occurred in 80% of burned areas (33% advances and 47% delays), and the shift in LOS occurred in 85% of burned areas (36% shortening and 49% lengthening). Moreover, the LSP changes were significantly influenced by burn severity, with the largest impact on LSP timing at moderate burn severity and on LSP greenness at high burn severity. Finally, the phenological trends from 2001 to 2015 differed significantly between burned and unburned reference areas, and the trend difference varied with the wildfire occurrence year. Overall, this study demonstrated that wildfires exert complex and diverse impacts on LSP timing and greenness metrics and significantly influence LSP trends associated with climate change.
The approach developed in this study provides a prototype to investigate LSP responses to other land disturbances associated with natural processes and human activities on the landscape.
As a developing country, China faces the dual challenges of economic development and environmental protection. This study analyzes whether the Chinese government will be able to achieve this win-win goal and how to achieve it by 2020. The results show that: (1) The annual contribution rates to GDP of comprehensive factors, human capital, fixed capital, fossil energy, and non-fossil energy during 1991–2013 were 33.97%, 14.17%, 15.47%, 14.91%, and 21.47%, respectively. China's economic growth mainly depends on fixed capital investment, and the output elasticity of human capital to GDP is the highest. (2) Fossil energy consumption could be reduced by increasing human capital investment, based on the substitution elasticity of human capital for fossil energy. Simultaneously, the technological progress of fixed capital and fossil energy is slower than that of non-fossil energy, which is conducive to the dual goals of economic growth and carbon emissions reduction. (3) To reach these goals in 2020, the inputs of human capital, fixed capital, fossil energy, and non-fossil energy must increase by 12.05%, 16.89%, 1.29 times, and 74.67% of the 2013 levels, respectively, by 2020. (4) This study analyzes the means and conditions for China to meet these goals under these constraints and finds that technological progress is the key to meeting the economic growth and carbon emissions goals in 2020.
Domain transfer learning, which learns a target classifier using labeled data from a different distribution, has shown promising value in knowledge discovery yet remains a challenging problem. Most previous works designed adaptive classifiers by exploring two learning strategies independently: distribution adaptation and label propagation. In this paper, we propose a novel transfer learning framework, referred to as Adaptation Regularization based Transfer Learning (ARTL), to model them in a unified way based on the structural risk minimization principle and regularization theory. Specifically, ARTL learns the adaptive classifier by simultaneously optimizing the structural risk functional, the joint distribution matching between domains, and the manifold consistency underlying the marginal distribution. Based on this framework, we propose two novel methods using Regularized Least Squares (RLS) and Support Vector Machines (SVMs), respectively, and use the Representer theorem in reproducing kernel Hilbert space to derive the corresponding solutions. Comprehensive experiments verify that ARTL can significantly outperform state-of-the-art learning methods on several public text and image datasets.
Domain Invariant Transfer Kernel Learning Mingsheng Long; Jianmin Wang; Jiaguang Sun ...
IEEE transactions on knowledge and data engineering,
2015-06-01, Volume: 27, Issue: 6
Journal Article
Peer-reviewed
Domain transfer learning generalizes a learning model across training data and testing data with different distributions. A general principle for tackling this problem is to reduce the distribution difference between training data and testing data such that the generalization error can be bounded. Current methods typically model the sample distributions in the input feature space, which depends on a nonlinear feature mapping to embody the distribution discrepancy. However, this nonlinear feature space may not be optimal for kernel-based learning machines. To this end, we propose a transfer kernel learning (TKL) approach to learn a domain-invariant kernel by directly matching source and target distributions in the reproducing kernel Hilbert space (RKHS). Specifically, we design a family of spectral kernels by extrapolating the target eigensystem to source samples with Mercer's theorem. The spectral kernel minimizing the approximation error to the ground truth kernel is selected to construct domain-invariant kernel machines. Comprehensive experimental evidence on a large number of text categorization, image classification, and video event recognition datasets verifies the effectiveness and efficiency of the proposed TKL approach over several state-of-the-art methods.
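The eigensystem-extrapolation step can be sketched with the Nystrom method: eigendecompose the target kernel matrix, then extend its eigenvectors to source points through the cross-domain kernel. This is a minimal sketch under assumed data and a plain RBF kernel; TKL's learned spectral damping weights and its kernel-selection step are omitted.

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    # Pairwise RBF kernel between row vectors of a and b.
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

rng = np.random.default_rng(0)
Xt = rng.normal(size=(30, 2))               # target samples
Xs = rng.normal(loc=0.5, size=(20, 2))      # source samples, shifted distribution

# Eigensystem of the target kernel (Mercer expansion), descending order.
Kt = rbf(Xt, Xt)
vals, vecs = np.linalg.eigh(Kt)
vals, vecs = vals[::-1], vecs[:, ::-1]

# Nystrom extrapolation of the target eigenvectors to the source points.
Kst = rbf(Xs, Xt)
phi_s = Kst @ vecs / np.maximum(vals, 1e-12)

# Kernel on source samples built from the target eigensystem
# (uniform spectral weights here; TKL instead learns these weights).
Ks_approx = phi_s @ np.diag(vals) @ phi_s.T
print(Ks_approx.shape)
```

Because the source-side kernel is reconstructed from the target eigensystem, downstream kernel machines trained with it operate in a representation aligned to the target distribution.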