We show that propensity score matching (PSM), an enormously popular method of preprocessing data for causal inference, often accomplishes the opposite of its intended goal—thus increasing imbalance, inefficiency, model dependence, and bias. The weakness of PSM comes from its attempts to approximate a completely randomized experiment, rather than, as with other matching methods, a more efficient fully blocked randomized experiment. PSM is thus uniquely blind to the often large portion of imbalance that can be eliminated by approximating full blocking with other matching methods. Moreover, in data balanced enough to approximate complete randomization, either to begin with or after pruning some observations, PSM approximates random matching which, we show, increases imbalance even relative to the original data. Although these results suggest researchers replace PSM with one of the other available matching methods, propensity scores have other productive uses.
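The one-dimensional matching step this abstract critiques can be sketched as follows. This is a minimal, hypothetical illustration of greedy 1:1 nearest-neighbor matching on the propensity score (assumed already estimated, e.g., by logistic regression), not any particular package's implementation; collapsing all covariates into this single score is precisely why PSM can at best approximate complete, rather than fully blocked, randomization.

```python
# Toy sketch: greedy 1:1 nearest-neighbor propensity score matching
# without replacement. Scores are given directly; in practice they
# would be estimated from covariates.

def psm_match(treated, control):
    """Match each treated unit to the closest unmatched control unit.

    treated, control: lists of (unit_id, propensity_score) pairs.
    Returns a list of (treated_id, control_id) pairs.
    """
    available = dict(control)          # id -> score, still unmatched
    pairs = []
    for t_id, t_score in treated:
        if not available:
            break
        # Pick the unmatched control whose score is closest.
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        pairs.append((t_id, c_id))
        del available[c_id]
    return pairs

treated = [("t1", 0.80), ("t2", 0.35)]
control = [("c1", 0.30), ("c2", 0.78), ("c3", 0.55)]
print(psm_match(treated, control))  # [('t1', 'c2'), ('t2', 'c1')]
```

Note that two units with nearly identical scores may have very different covariate values; matching on the full covariate vector instead (e.g., Mahalanobis distance matching) is one of the alternatives the abstract alludes to.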
Most of the two-sided matching literature maintains the assumption that agents are never indifferent between any two members of the opposite side. In practice, however, ties in preferences arise naturally and are widespread. Market design needs to handle ties carefully, because in the presence of indifferences, stability no longer implies Pareto efficiency, and the deferred acceptance algorithm cannot be applied to produce a Pareto efficient or a worker-optimal stable matching.
We allow ties in preference rankings and show that the Pareto dominance relation on stable matchings can be captured by two simple operations which involve rematching of workers and firms via cycles or chains. Likewise, the Pareto relation defined via workers' welfare can also be broken down to two similar procedures which preserve stability. Using these structural results we design fast algorithms to compute a Pareto efficient and stable matching, and a worker-optimal stable matching.
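For context, a minimal sketch of the strict-preference, worker-proposing deferred acceptance baseline that this ties setting generalizes (a one-to-one market with illustrative names; with indifferences, as the abstract notes, this algorithm alone no longer suffices):

```python
def deferred_acceptance(worker_prefs, firm_prefs):
    """Worker-proposing deferred acceptance with strict preferences.

    worker_prefs: {worker: [firms, best first]}
    firm_prefs:   {firm: [workers, best first]}
    Returns {firm: worker}, the worker-optimal stable matching.
    """
    rank = {f: {w: i for i, w in enumerate(ws)} for f, ws in firm_prefs.items()}
    next_choice = {w: 0 for w in worker_prefs}  # index of next firm to propose to
    held = {}                                   # firm -> tentatively held worker
    free = list(worker_prefs)
    while free:
        w = free.pop()
        if next_choice[w] >= len(worker_prefs[w]):
            continue                            # w has exhausted their list
        f = worker_prefs[w][next_choice[w]]
        next_choice[w] += 1
        if f not in held:
            held[f] = w                         # f tentatively holds w
        elif rank[f][w] < rank[f][held[f]]:
            free.append(held[f])                # f prefers w; old worker is freed
            held[f] = w
        else:
            free.append(w)                      # f rejects w
    return held

workers = {"w1": ["f1", "f2"], "w2": ["f1", "f2"]}
firms = {"f1": ["w2", "w1"], "f2": ["w1", "w2"]}
print(deferred_acceptance(workers, firms))  # {'f1': 'w2', 'f2': 'w1'}
```

The rematching cycles and chains in the abstract operate on top of stable matchings like this one, Pareto-improving the outcome when ties leave room for improvement.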
As a fundamental and critical task in various visual applications, image matching identifies and then establishes correspondences between the same or similar structures/content in two or more images. Over the past decades, a growing number and diversity of methods have been proposed for image matching, particularly with the development of deep learning techniques in recent years. However, several open questions remain: which method is a suitable choice for a specific application, given its scenario and task requirements, and how can better image matching methods be designed with superior accuracy, robustness, and efficiency? This encourages us to conduct a comprehensive and systematic review and analysis of classical and recent techniques. Following the feature-based image matching pipeline, we first introduce feature detection, description, and matching techniques, from handcrafted methods to trainable ones, and analyze the development of these methods in theory and practice. Second, we briefly introduce several typical image matching-based applications for a comprehensive understanding of the significance of image matching. In addition, we provide a comprehensive and objective comparison of these classical and recent techniques through extensive experiments on representative datasets. Finally, we conclude with the current status of image matching technologies and deliver insightful discussions and prospects for future work. This survey can serve as a reference for (but not limited to) researchers and engineers in image matching and related fields.
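The matching stage of the feature-based pipeline described above can be sketched as nearest-neighbor descriptor matching with a ratio test. This is a minimal pure-Python illustration with tiny made-up 2-D descriptors; real pipelines use optimized libraries and high-dimensional descriptors (e.g., SIFT's 128-dimensional vectors):

```python
import math

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor descriptor matching with a ratio test.

    desc_a, desc_b: lists of equal-length feature vectors (one per keypoint).
    Returns (i, j) index pairs where descriptor i in image A matches
    descriptor j in image B and the best match is clearly better than
    the second best (needs at least two descriptors in desc_b).
    """
    def dist(u, v):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

    matches = []
    for i, d in enumerate(desc_a):
        # Rank candidates in image B by distance to descriptor d.
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(d, desc_b[j]))
        best, second = ranked[0], ranked[1]
        # Keep the match only if it is unambiguous.
        if dist(d, desc_b[best]) < ratio * dist(d, desc_b[second]):
            matches.append((i, best))
    return matches

a = [[0.0, 1.0], [5.0, 5.0]]
b = [[0.1, 1.0], [4.9, 5.2], [9.0, 0.0]]
print(match_descriptors(a, b))  # [(0, 0), (1, 1)]
```

The ratio test discards ambiguous correspondences, which is why it remains a standard filtering step across both handcrafted and learned descriptors.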
Traditional feature matching methods, such as scale-invariant feature transform (SIFT), usually use image intensity or gradient information to detect and describe feature points; however, both intensity and gradient are sensitive to nonlinear radiation distortions (NRD). To solve this problem, this paper proposes a novel feature matching algorithm that is robust to large NRD. The proposed method is called radiation-variation insensitive feature transform (RIFT). There are three main contributions in RIFT. First, RIFT uses phase congruency (PC) instead of image intensity for feature point detection. RIFT considers both the number and repeatability of feature points and detects both corner points and edge points on the PC map. Second, RIFT introduces a maximum index map (MIM) for feature description. The MIM is constructed from the log-Gabor convolution sequence and is much more robust to NRD than the traditional gradient map. Thus, RIFT not only largely improves the stability of feature detection but also overcomes the limitation of gradient information for feature description. Third, RIFT analyses the inherent influence of rotations on the values of the MIM and realises rotation invariance. We use six different types of multi-modal image datasets to evaluate RIFT, including optical-optical, infrared-optical, synthetic aperture radar (SAR)-optical, depth-optical, map-optical, and day-night datasets. Experimental results show that RIFT is superior to SIFT and SAR-SIFT on multi-modal images. To the best of our knowledge, RIFT is the first feature matching algorithm that can achieve good performance on all the abovementioned types of multi-modal images. The source code of RIFT and the multi-modal image datasets are publicly available.
We consider two-sided matching problems where agents on one side of the market (hospitals) are required to satisfy certain distributional constraints. We show that when the preferences and constraints of the hospitals can be represented by an M♮-concave function, (i) the generalized Deferred Acceptance (DA) mechanism is strategyproof for doctors, (ii) it produces the doctor-optimal stable matching, and (iii) its time complexity is proportional to the square of the number of possible contracts. Furthermore, we provide sufficient conditions under which the generalized DA mechanism satisfies these desirable properties. These conditions are applicable to various existing works and enable new applications as well, thereby providing a recipe for developing desirable mechanisms in practice.
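A minimal sketch of the classical special case this framework contains: doctor-proposing deferred acceptance where each hospital's only constraint is a seat capacity (a toy illustration with made-up names, not the M♮-concave generalization the paper studies):

```python
def hospital_da(doctor_prefs, hospital_prefs, capacity):
    """Doctor-proposing deferred acceptance with simple capacity quotas.

    doctor_prefs:   {doctor: [hospitals, best first]}
    hospital_prefs: {hospital: [doctors, best first]}
    capacity:       {hospital: maximum number of seats}
    Returns {hospital: set of assigned doctors}.
    """
    rank = {h: {d: i for i, d in enumerate(ds)} for h, ds in hospital_prefs.items()}
    next_choice = {d: 0 for d in doctor_prefs}
    held = {h: set() for h in hospital_prefs}   # tentatively held doctors
    free = list(doctor_prefs)
    while free:
        d = free.pop()
        if next_choice[d] >= len(doctor_prefs[d]):
            continue                            # d has exhausted their list
        h = doctor_prefs[d][next_choice[d]]
        next_choice[d] += 1
        held[h].add(d)
        if len(held[h]) > capacity[h]:
            # Over capacity: reject the worst-ranked doctor currently held.
            worst = max(held[h], key=lambda x: rank[h][x])
            held[h].remove(worst)
            free.append(worst)
    return held

doctors = {"d1": ["h1"], "d2": ["h1", "h2"], "d3": ["h1", "h2"]}
hospitals = {"h1": ["d1", "d2", "d3"], "h2": ["d2", "d3"]}
print(hospital_da(doctors, hospitals, {"h1": 2, "h2": 1}))
```

In the generalized mechanism, the "reject the worst doctor" step is replaced by each hospital choosing its best feasible set under its M♮-concave objective, which is what preserves strategyproofness and doctor-optimality under richer constraints.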
In the school choice market, where scarce public school seats are assigned to students, a key operational issue is how to reassign seats that are vacated after an initial round of centralized assignment. Practical solutions to the reassignment problem must be simple to implement, truthful, and efficient while also alleviating costly student movement between schools. We propose and axiomatically justify a class of reassignment mechanisms, the permuted lottery deferred acceptance (PLDA) mechanisms. Our mechanisms generalize the commonly used deferred acceptance (DA) school choice mechanism to a two-round setting and retain its desirable incentive and efficiency properties. School choice systems typically run DA with a lottery number assigned to each student to break ties in school priorities. We show that under natural conditions on demand, the second-round tie-breaking lottery can be correlated arbitrarily with that of the first round without affecting allocative welfare and that reversing the lottery order between rounds minimizes reassignment among all PLDA mechanisms. Empirical investigations based on data from New York City high school admissions support our theoretical findings.
This paper was accepted by Gad Allon, operations management.
Market thickness is a key parameter that can make or break a platform’s business model. Thicker markets can offer more opportunities for participants to meet and higher chances that a potential match exists. However, they can also be vulnerable to potential search frictions. In this paper, using data from an online peer-to-peer holiday property rental platform, we aim to identify and measure the causal impact of market thickness on matching rates. In particular, we exploit an exogenous shock to market size caused by a one-time migration of listings from other platforms, which gives rise to a quasi-experimental design. We find that increased market thickness actually leads to lower matching rates. Keeping search technology and other factors constant, doubling market size leads to a 15.4% reduction in traveler confirmation rate and a 15.9% reduction in host occupancy rate. As a result, the platform lost 5.6% of potential matches because of the increased market size. We attribute the effect to increased search friction: travelers’ search intensity increases by 18.3% when market size doubles. This effect is especially prominent when the matching needs to take place within a limited time. Our results offer insights for future empirical and theoretical research on matching markets. They also highlight that it is important for platform owners to watch out for increased search frictions as markets grow and to invest in search technologies that facilitate more efficient search.
This paper was accepted by Charles Corbett, operations management.
Many two-sided matching markets, from labor markets to school choice programs, use a clearinghouse based on the applicant-proposing deferred acceptance algorithm, which is well known to be strategy-proof for the applicants. Nonetheless, a growing body of empirical evidence reveals that applicants misrepresent their preferences when this mechanism is used. This paper shows that no mechanism that implements a stable matching is obviously strategy-proof for any side of the market, a stronger incentive property than strategy-proofness that was introduced by Li (2017). A stable mechanism that is obviously strategy-proof for applicants is introduced for the case in which agents on the other side have acyclical preferences.
We study dynamic matching in an infinite-horizon stochastic market. Although all agents are potentially compatible with each other, some are hard to match and others are easy to match. Agents prefer to be matched as soon as possible, and matches are formed either bilaterally or indirectly through chains. We adopt an asymptotic approach and compute tight bounds on the limit of waiting time of agents under myopic policies that differ in matching technology and prioritization. We find that when hard-to-match agents arrive less frequently than easy-to-match ones, (i) bilateral matching is almost as efficient as chains (waiting times scale similarly under both, though chains always outperform bilateral matching by a constant factor), and (ii) assigning priorities to hard-to-match agents improves their waiting times. When hard-to-match agents arrive more frequently, chains are much more efficient than bilateral matching, and prioritization has no impact. Furthermore, somewhat surprisingly, we find that in a heterogeneous market and under bilateral matching, increasing the arrival rate of hard-to-match agents has a nonmonotone effect on waiting times. This behavior is in contrast with that of a homogeneous dynamic market, where increasing arrival rate always improves waiting time, and it highlights fundamental differences between heterogeneous and homogeneous dynamic markets.
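The role of agent heterogeneity can be illustrated with a toy discrete-time simulation of myopic bilateral matching under one assumed compatibility structure (easy agents match with anyone; hard agents only with easy ones, with hard agents prioritized as partners). This is a simplified illustration, not the paper's continuous-time model:

```python
import random

def simulate(lam_hard, lam_easy, horizon, seed=0):
    """Toy discrete-time simulation of myopic bilateral matching.

    Each period, one agent arrives (hard with probability proportional
    to lam_hard) and is matched immediately if a compatible agent is
    waiting. Returns average realized waits by agent type.
    """
    rng = random.Random(seed)
    p_hard = lam_hard / (lam_hard + lam_easy)
    waiting_hard, waiting_easy = [], []   # arrival times of waiting agents
    waits = {"hard": [], "easy": []}
    for t in range(horizon):
        if rng.random() < p_hard:         # a hard-to-match agent arrives
            if waiting_easy:
                waits["easy"].append(t - waiting_easy.pop(0))
                waits["hard"].append(0)
            else:
                waiting_hard.append(t)    # hard agents cannot match each other
        else:                             # an easy-to-match agent arrives
            if waiting_hard:              # prioritize hard-to-match partners
                waits["hard"].append(t - waiting_hard.pop(0))
                waits["easy"].append(0)
            elif waiting_easy:
                waits["easy"].append(t - waiting_easy.pop(0))
                waits["easy"].append(0)
            else:
                waiting_easy.append(t)
    return {k: sum(v) / len(v) for k, v in waits.items() if v}

print(simulate(lam_hard=0.3, lam_easy=0.7, horizon=10000))
```

Varying `lam_hard` in such a sandbox is one way to build intuition for the nonmonotone effect the abstract describes, though the paper's asymptotic bounds are derived analytically.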