Full text
Peer reviewed
  • Fuzzy Norm-Explicit Product...
    Jamalifard, Mohammadreza; Andreu-Perez, Javier; Hagras, Hani; Lopez, Luis Martinez

    IEEE Transactions on Fuzzy Systems, 05/2024, Volume: 32, Issue: 5
    Journal Article

    As data resources grow, providing recommendations that best meet user demands has become a vital requirement in business and everyday life to overcome the information overload problem. However, building a system that suggests relevant recommendations has always been a point of debate. One of the most cost-efficient techniques for producing relevant recommendations at low complexity is product quantization (PQ), and PQ approaches have continued to develop in recent years. The crucial challenge for such systems is improving PQ performance in terms of recall without compromising complexity. This makes the algorithm suitable for problems that require retrieving as many potentially relevant items as possible without disregarding others, at high speed and low cost, to keep up with traffic. This is the case in online shops, where on-target recommendations are important even though customers may be open to exploring other products. A recent line of work exploits the notion of norm subvectors encoded in product quantizers. This research proposes a fuzzy approach to norm-based PQ: type-2 fuzzy sets (T2FSs) define the codebook, allowing a subvector to be associated with more than one codebook element, and the norm calculus is then resolved by means of integration. The method raises the recall measure, making the algorithm suitable for queries that must surface as many potentially relevant items as possible without disregarding others. The proposed approach is tested on three public recommender benchmark datasets and compared against seven PQ approaches for maximum inner-product search. The proposed method outperforms all of these PQ approaches, improving on norm-explicit PQ, PQ, and residual quantization by up to +6%, +5%, and +8%, achieving recalls of 94%, 69%, and 59% on the Netflix, Audio, and Cifar60k datasets, respectively. Moreover, its computing time and complexity are nearly equal to those of the most computationally efficient existing PQ method in the state of the art.
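The classical product quantization that the abstract builds on is easy to sketch: split each database vector into M subvectors, learn a small k-means codebook per subspace, store each vector as a tuple of centroid indices, and answer a maximum inner-product query by summing per-subspace lookup values. The toy code below is a minimal pure-Python illustration of that baseline under these assumptions, not the paper's fuzzy method or its actual implementation; all function names and the data are hypothetical.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on a list of equal-length vectors (toy scale only)."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        for c, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster empties
                centroids[c] = [sum(dim) / len(members)
                                for dim in zip(*members)]
    return centroids

def pq_train(vectors, m, k):
    """Split dimensions into m chunks; learn one k-means codebook per chunk.
    Assumes the dimensionality is divisible by m."""
    step = len(vectors[0]) // m
    return [kmeans([v[s * step:(s + 1) * step] for v in vectors], k)
            for s in range(m)]

def pq_encode(v, codebooks):
    """Code a vector as the index of the nearest centroid in each subspace."""
    step = len(v) // len(codebooks)
    code = []
    for s, cb in enumerate(codebooks):
        sub = v[s * step:(s + 1) * step]
        code.append(min(range(len(cb)),
                        key=lambda c: sum((a - b) ** 2
                                          for a, b in zip(sub, cb[c]))))
    return code

def pq_inner_product(query, code, codebooks):
    """Approximate <query, v> as a sum of per-subspace centroid dot products;
    in practice these lookups are precomputed once per query."""
    step = len(query) // len(codebooks)
    total = 0.0
    for s, cb in enumerate(codebooks):
        qsub = query[s * step:(s + 1) * step]
        total += sum(a * b for a, b in zip(qsub, cb[code[s]]))
    return total
```

In norm-explicit PQ, the starting point the abstract cites, a vector's norm is handled separately from its direction; the paper's proposal then replaces the crisp codebook with type-2 fuzzy sets so that a subvector can belong to several codebook entries at once, with the norm recovered by integration.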