Title | : | Learning and explaining Pairwise Comparisons Models |
Speaker | : | Elisha Parhi (IITM) |
Details | : | Thu, 28 Nov, 2024 11:00 AM @ SSB 334 |
Abstract | : | We consider the problem of learning to predict the outcomes of unseen pairwise comparisons over a set of items when only a small set of pairwise comparisons is available. When the underlying preferences are intransitive, a common occurrence in real-world data sets, this becomes a challenging problem both in terms of modelling and learning. Towards this, we introduce a flexible and natural parametric model for pairwise comparisons that we call the Distinguishing Feature (DF) model. Under this model, the items have an unknown but fixed embedding, and the pairwise comparison between a pair of items depends probabilistically on the feature in the embedding that best distinguishes the items. The proposed DF model generalizes the popular transitive Bradley-Terry-Luce (BTL) model and, with embedding dimension as low as 3, can capture arbitrarily long cyclic dependencies. Furthermore, we explicitly characterize the preference relations that cannot be modelled under the DF model for d=3. On the algorithmic side, we propose a Siamese-style neural network architecture that predicts well under the DF model while remaining interpretable, in the sense that the learnt embeddings can be extracted directly from the trained model. Our experimental results show that the model is comparable to or outperforms standard baselines on both synthetic and real-world data sets. Next, we consider the problem of explaining pairwise comparison models using Shapley values. Recent work has proposed Pref-Shap, a novel extension of Shapley values to preferences. We demonstrate that Pref-Shap may not produce reasonable explanations for all skew-symmetric functions, and we propose a novel model for learning explanations for pairwise comparison models. Finally, we look at a novel model for pairwise comparisons termed the 'parametric BTL model'. Our initial work suggests that this model is flexible and can model intransitive relations well. Future work includes studying the model to understand its representational capabilities. |
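The abstract's core idea, that the outcome of a comparison is driven by the single embedding feature that best separates the two items, can be illustrated with a small sketch. This is not the speaker's implementation: the exact functional form (a BTL-style sigmoid link applied to the most distinguishing coordinate) is an assumption made for illustration, as is the name `df_prob`. It does show, with a hand-picked 3-dimensional embedding, how such a rule can produce a cyclic (intransitive) preference pattern, consistent with the claim that dimension 3 suffices for cycles.

```python
import math

def df_prob(u, v):
    """Assumed DF-style comparison rule (illustrative only):
    the outcome depends solely on the feature with the largest
    absolute gap between the two embeddings, pushed through a
    BTL-style logistic link on that single coordinate."""
    # Index of the "distinguishing feature" (largest absolute gap).
    k = max(range(len(u)), key=lambda i: abs(u[i] - v[i]))
    # Probability that item with embedding u beats item with embedding v.
    return 1.0 / (1.0 + math.exp(-(u[k] - v[k])))

# Hypothetical 3-dimensional embeddings forming a preference cycle:
# x beats z, z beats y, y beats x.
x = (1.0, 2.0, 0.0)
y = (0.0, 1.0, 2.0)
z = (2.0, 0.0, 1.0)

print(df_prob(x, z), df_prob(z, y), df_prob(y, x))  # each above 0.5
```

Note that the rule is properly skew-symmetric: since the distinguishing feature is the same for (u, v) and (v, u), the two winning probabilities always sum to 1, while a plain BTL model (a logistic link on a single scalar score per item) could never produce the cycle above.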