[null,null,["最后更新时间 (UTC):2024-07-26。"],[[["\u003cp\u003eThis recommendation system automatically learns and doesn't require prior domain knowledge, enabling serendipitous discoveries of new user interests.\u003c/p\u003e\n"],["\u003cp\u003eIt can be easily implemented as a starting point using only a feedback matrix, without needing contextual features, and can act as one candidate generator among others.\u003c/p\u003e\n"],["\u003cp\u003eThe system faces challenges with recommending new or unseen items (cold-start problem) and incorporating side features like user demographics or item attributes.\u003c/p\u003e\n"],["\u003cp\u003eWhile the cold-start problem can be partially mitigated by techniques like projection in WALS or heuristic embeddings, integrating side features requires more complex model adaptations.\u003c/p\u003e\n"]]],[],null,["# Collaborative filtering advantages & disadvantages\n\nAdvantages\n----------\n\n**No domain knowledge necessary**\nWe don't need domain knowledge because the embeddings are automatically learned.\n\n**Serendipity**\nThe model can help users discover new interests. In isolation, the ML system may not know the user is interested in a given item, but the model might still recommend it because similar users are interested in that item.\n\n**Great starting point**\n\nTo some extent, the system needs only the feedback matrix to train a matrix\nfactorization model. In particular, the system doesn't need contextual features.\nIn practice, this can be used as one of multiple candidate generators.\n\nDisadvantages\n-------------\n\n**Cannot handle fresh items**\n\nThe prediction of the model for a given (user, item) pair is the dot\nproduct of the corresponding embeddings. So, if an item is not seen\nduring training, the system can't create an embedding for it and can't\nquery the model with this item. This issue is often called the\n**cold-start problem**. However, the following techniques can address\nthe cold-start problem to some extent:\n\n- **Projection in WALS.** Given a new item \\\\(i_0\\\\) not seen in training,\n if the system has a few interactions with users, then the system can\n easily compute an embedding \\\\(v_{i_0}\\\\) for this item without having\n to retrain the whole model. The system simply has to solve the following\n equation or the weighted version:\n\n \\\\\\[\\\\min_{v_{i_0} \\\\in \\\\mathbb R\\^d} \\\\\\|A_{i_0} - U v_{i_0}\\\\\\|\\\\\\]\n\n The preceding equation corresponds to one iteration in WALS: the\n user embeddings are kept fixed, and the system solves for the embedding\n of item \\\\(i_0\\\\). The same can be done for a new user.\n- **Heuristics to generate embeddings of fresh items.** If the system\n does not have interactions, the system can approximate its embedding\n by averaging the embeddings of items from the same category, from the\n same uploader (in YouTube), and so on.\n\n**Hard to include side features for query/item**\n\n**Side features** are any features beyond the query or item ID. For movie\nrecommendations, the side features might include country or age. Including\navailable side features improves the quality of the model. 
**Hard to include side features for query/item**

**Side features** are any features beyond the query or item ID. For movie recommendations, the side features might include country or age. Including available side features improves the quality of the model. Although it may not be easy to include side features in WALS, a generalization of WALS makes this possible.

To generalize WALS, **augment the input matrix with features** by defining a block matrix \(\bar A\), where:

- Block (0, 0) is the original feedback matrix \(A\).
- Block (0, 1) is a multi-hot encoding of the user features.
- Block (1, 0) is a multi-hot encoding of the item features.

| **Note:** Block (1, 1) is typically left empty. If you apply matrix factorization to \(\bar A\), then the system learns embeddings for side features, in addition to user and item embeddings.
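As a rough sketch of how \(\bar A\) might be assembled, using small dense NumPy blocks for readability (the dimensions are made up, and a real system would use sparse matrices):

```python
import numpy as np

# Illustrative dimensions (assumptions): m users, n items,
# p user features, q item features.
m, n, p, q = 4, 5, 3, 2

A = np.zeros((m, n))               # block (0, 0): original feedback matrix
user_feats = np.zeros((m, p))      # block (0, 1): multi-hot user features
item_feats = np.zeros((q, n))      # block (1, 0): multi-hot item features

A[0, 2] = 1.0                      # user 0 interacted with item 2
user_feats[0, [0, 2]] = 1.0        # user 0 has user features 0 and 2
item_feats[1, 2] = 1.0             # item 2 has item feature 1

# Assemble the block matrix; block (1, 1) is left empty (all zeros).
A_bar = np.block([
    [A,          user_feats],
    [item_feats, np.zeros((q, p))],
])

# Factorizing A_bar yields row embeddings for users and item features,
# and column embeddings for items and user features.
```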