This study investigates the influence of managerial incentives to meet or beat the zero earnings benchmark on labor cost behavior of private Belgian firms. We posit that relative to managers of firms reporting healthy profits, managers meeting or beating the zero earnings benchmark will increase labor costs to a smaller extent when activity increases and decrease labor costs to a larger extent when activity decreases. This should take the form of more symmetric labor cost behavior for firms that report a small profit. Our findings are consistent with this prediction. Using detailed employee data, we show that managers of firms reporting a small profit focus on firing employees who are relatively low cost to fire. To protect their reputation in the labor market, managers of other firms, particularly those reporting healthy profits, limit the number of dismissals and react to activity changes by changing the number of hours that employees work.
Salient object detection (SOD) is a crucial task in the field of remote sensing image (RSI) processing. Weakly supervised SOD methods, which generate saliency maps by classification convolutional neural networks (CNNs), considerably reduce labor costs. However, due to the complexity of remote sensing scenes, concerns remain about weakly supervised SOD for RSIs: 1) since the pooling operations are applied in the classification CNNs, the boundary maintenance of weakly supervised methods is unsatisfactory and 2) several sophisticated postprocessing procedures are used in previous weakly supervised methods, which are inevitably time-consuming. To solve these problems, we combine the benefits of weakly and fully supervised learning and propose a new SOD method named progressively supervised learning (PSL) for RSIs. The proposed method realizes end-to-end SOD with a lightweight model under imagewise annotations. First, to reduce the demands on large-scale pixelwise annotations, we propose a pseudo-label generation method based on a classification network and gradient-weighted class activation mapping (Grad-CAM) to compute pseudo saliency maps (PSMs) for training samples and auxiliary images in a weakly supervised manner. Then, to improve the computational efficiency, we construct a feedback saliency analysis network (FSAN), where the generated PSMs are regarded as pixelwise labels. Finally, inspired by curriculum learning, we design a new denoising loss function to further reduce the effect brought by missing judgment in PSMs and enhance the detection accuracy. Comprehensive evaluations with two remote sensing data sets and a comparison with 11 methods validate the superiority of the proposed PSL model.
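The Grad-CAM step the abstract describes can be sketched numerically: channel weights are the spatially averaged gradients of the class score with respect to the feature maps, and the pseudo saliency map is a ReLU'd weighted sum of those maps. The arrays and shapes below are invented for illustration; the paper's actual network, resizing, and normalization may differ.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Standard Grad-CAM: alpha_c = spatial mean of dY/dA_c,
    map = ReLU(sum_c alpha_c * A_c), rescaled to [0, 1]."""
    # feature_maps, gradients: shape (C, H, W)
    weights = gradients.mean(axis=(1, 2))              # alpha_c, shape (C,)
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0.0)                         # ReLU keeps positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize to [0, 1]
    return cam

# Toy example: 2 channels of 4x4 feature maps with made-up gradients.
A = np.stack([np.ones((4, 4)), np.zeros((4, 4))])
G = np.stack([np.full((4, 4), 0.5), np.full((4, 4), -1.0)])
psm = grad_cam(A, G)  # pseudo saliency map (PSM) in [0, 1]
```

In the weakly supervised setting, maps like `psm` then stand in for pixelwise labels when training the downstream saliency network.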
This article focused on quantifying the company’s total labor costs in percent. Moreover, it aimed to show the labor costs on a construction contract both in terms of the company’s labor costs for its own employees and the subcontractors’ labor costs. The total labor costs were accurately quantified from the profit and loss statement of the selected construction company on which the case study was based; the analysis shows that the average percentage representation of labor costs, which include wage costs and social and health insurance costs, is 15.30%. However, when all the costs associated with employees as a labor force are identified in detail, it is evident that the total labor costs represent 31.82% of the sales remuneration, roughly double the initial value.
We analyze a large, detailed operational data set from a restaurant chain to shed new light on how workload (defined as the number of tables or diners that a server simultaneously handles) affects servers' performance (measured as sales and meal duration). We use an exogenous shock (the implementation of labor scheduling software) and time-lagged instrumental variables to disentangle the endogeneity between demand and supply in this setting. We show that servers strive to maximize sales and speed efforts simultaneously, depending on the relative values of sales and speed. As a result, we find that, when the overall workload is small, servers expend increasing sales effort as workload rises, at the cost of slower service speed. However, above a certain workload threshold, servers start to reduce their sales efforts and work more promptly as workload rises further. In the focal restaurant chain, we find that this saturation point is currently not reached and, counterintuitively, the chain can reduce the staffing level and achieve both significantly higher sales (an estimated 3% increase) and lower labor costs (an estimated 17% decrease).
This paper was accepted by Noah Gans, special issue on business analytics.
This manuscript presents a framework to develop vector error correction (VEC) models applicable to forecasting the short- and long-run movements of the average hourly earnings of construction labor, which is an essential predictor of construction labor costs. These models characterize the relationship between average hourly earnings and a set of explanatory variables. The framework is applied to develop VEC forecasting models for the average hourly earnings of construction labor in the USA based on the identified variables that govern its movements, such as the Global Energy Price Index, Gross Domestic Product, and Personal Consumption Expenditures. More than 150 candidate VEC models were created, of which 25 passed the diagnostics. The most appropriate model was then identified by comparing the prediction performance of these models when applied to forecasting average hourly earnings over 36 months. The proposed framework and the ensuing models address the need for appropriate models that can forecast the short- and long-run movements of labor costs. Practitioners can use the proposed framework to develop much-needed forecast models and estimate construction labor costs of various projects. The insights derived from the development and applications of these models can enhance the chances of project success.
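The mechanics of a VEC forecast can be illustrated with a minimal two-variable sketch: the change in each series is driven by an error-correction term (the deviation from the long-run cointegrating relation) plus short-run dynamics. All coefficients and series values below are hypothetical; the paper's actual models, variables, and estimation procedure differ.

```python
import numpy as np

# Hypothetical VEC(1) coefficients for two series, e.g. average hourly
# earnings (y1) and an energy price index (y2). beta defines the long-run
# cointegrating relation y1 - 0.8*y2; alpha gives the adjustment speeds.
alpha = np.array([[-0.2], [0.1]])    # loading (adjustment) matrix
beta  = np.array([[1.0], [-0.8]])    # cointegrating vector
gamma = np.array([[0.3, 0.0],
                  [0.0, 0.2]])       # short-run dynamics

def vec_forecast(y_prev, dy_prev):
    """One-step forecast: dy_t = alpha * (beta' y_{t-1}) + gamma * dy_{t-1}."""
    ect = beta.T @ y_prev                 # error-correction term (disequilibrium)
    dy = alpha @ ect + gamma @ dy_prev    # predicted change in each series
    return y_prev + dy                    # forecast levels

y_prev = np.array([[30.0], [35.0]])       # levels at t-1 (made-up numbers)
dy_prev = np.array([[0.5], [0.2]])        # previous changes
y_next = vec_forecast(y_prev, dy_prev)
```

In practice such models are estimated from data (e.g. via Johansen's procedure) rather than specified by hand; iterating the one-step forecast yields the multi-month horizon the framework targets.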
The mapping of urban areas at regional to global scales is a crucial task due to its value for environmental monitoring, habitat and biodiversity conservation, and decision-making. In most current applications, two techniques (i.e., supervised classification and data fusion) are widely applied in large-scale urban mapping. However, the costly training sample collection, inadequate data-source descriptions, and diverse urban characteristics (e.g., shape, size, socioeconomic status, and physical environment) are challenging problems for the urban mapping approaches. In this context, aiming at effectively deriving accurate urban areas at a large scale, we propose a novel ensemble support vector machine (SVM) method which consists of three steps: 1) the automatic generation of training data to reduce labor costs; 2) the construction of an ensemble SVM model to effectively combine the multisource data (including remote sensing and socioeconomic data); and 3) an adaptive patch-based thresholding technique to tackle the diverse urban characteristics. The proposed method is employed to map urban areas of China in 2005 and 2010, and the resulting maps are compared with the existing urban maps for 287 prefecture-level cities. Our results show a clear advantage, especially in challenging small cities, with a significant improvement in median Kappa (0.174 for 2005 and 0.203 for 2010). When incorporating moderate-resolution imaging spectroradiometer multispectral data as an additional source, the Kappa coefficient can be further raised by 0.028 for 2010. In general, the proposed method shows great potential for accurately mapping urban areas at regional, continental, or even global scales in a cost-effective manner.
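The idea behind adaptive patch-based thresholding (step 3 above) can be sketched as follows: instead of binarizing the whole urban-probability map with one global cutoff, each patch gets a threshold tied to its own local statistics, so small cities with locally lower scores are not wiped out by a global rule. The patch size, the mean-plus-offset rule, and the toy map are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def adaptive_patch_threshold(prob_map, patch=4, offset=0.0):
    """Binarize a probability map patch by patch, using each patch's
    mean (plus an optional offset) as the local threshold."""
    h, w = prob_map.shape
    mask = np.zeros_like(prob_map, dtype=bool)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            block = prob_map[i:i + patch, j:j + patch]
            mask[i:i + patch, j:j + patch] = block > (block.mean() + offset)
    return mask

# Toy 2x2 map: one strong pixel among weak ones within a single patch.
probs = np.array([[0.9, 0.1],
                  [0.1, 0.1]])
urban_mask = adaptive_patch_threshold(probs, patch=2)
```

A global threshold of, say, 0.5 would give the same answer here, but when score distributions shift between large metropolises and small towns, the per-patch rule adapts where a single cutoff cannot.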
The weakly supervised semantic segmentation (WSSS) method aims to assign semantic labels to each image pixel from weak (image-level) instead of strong (pixel-level) labels, which can greatly reduce human labor costs. However, there are some problems in WSSS of remote sensing images such as how to locate labels accurately, and how to get precise segmentation edges. To address these issues, we propose a novel framework directly transferring the scene classification model to perform semantic segmentation. We first train a multi-label scene classification network as the encoder to obtain the pre-trained model, then the feature learned by the model is transferred to the decoder. Different from other methods, we propose a saliency map generator instead of the Class Activation Map for more accurate location information by making pixels belonging to the same class lie close together while different classes are separated in feature space. Meanwhile, we take the superpixel patch as processing unit to provide precise boundary inhibition for the saliency map. To assign semantic labels for each patch, combined with extracted salient region, we propose a module responsible for exploiting the consistency of spatial and semantic similarity between different patches. Finally, we incorporate the above two modules to supervise the training process of the decoder without generating pseudo labels as most methods do, thus simplifying the training process. Experimental results show that our method outperforms other weakly supervised approaches on DLRSD and WHDLD datasets with at least a 3% improvement on mean intersection over union.
This article reports on the results of 69 individual qualitative interviews in Cuenca, Ecuador, conducted with lifestyle migrants in 2011, 2012 and 2013. Many of the North American migrants interviewed are in Ecuador for economic reasons, a motivation that has been under-theorised in lifestyle migration literature. The paper develops the concept of geographic arbitrage to explore the motivations and strategies of migrants in a context of structural inequalities and geographic differentiation in labour costs. Geographic arbitrage consists of relocating day-to-day expenses to low-cost locations, a strategy that is perhaps of increasing importance in North America, given the lack of retirement security there. The paper argues that the strategy of geographic arbitrage of North Americans to Cuenca is framed by powerful players in the field of international lifestyle marketing and by the socio-economic context of the migrants themselves.
Semantic segmentation of building facade point clouds has diverse applications. The development of semantic segmentation methods is inextricably linked to datasets. The available building facade datasets suffer from a lack of abundant semantic categories and data completeness. To compensate for these shortcomings, we propose a new building facade dataset characterized by various categories and relatively complete 3D building facades. In addition, most existing methods focus on fully supervised learning, which relies on manually labeling large-scale point cloud data and results in high time and labor costs. In this paper, we propose an effective weakly supervised building facade segmentation approach, called spatial adaptive fusion consistency contrastive constraint (SAF-C3), to solve the above problem. We first design a multi-random point cloud augmentor as an auxiliary supervision branch to enhance the learning ability of the original network branch. Then, we present a spatial adaptive fusion (SAF) module to extract discriminative features for building facade point clouds. Finally, we propose a spatial consistency contrastive constraint to explore the contrastive property in feature space and to ensure the predictive consistency among the augmentation and original branches. The proposed method achieves a significant performance improvement against the state-of-the-art methods on two building facade point cloud datasets through extensive experiments. In particular, the performance of SAF-C3 with 1% labels significantly surpasses the baseline network with 100% labels.
Ferreira, Rafael G.; Azzoni, Adriano R.; Freitas, Sindelia. "On the production cost of lignocellulose-degrading enzymes." Biofuels, Bioproducts and Biorefining, Vol. 15, No. 1, January/February 2021. Journal Article.