AAAI 2024 Recap: Future Visions of Recommendation Ecosystems

A recap of some key discussion topics from the Recommendations Ecosystem Workshop at AAAI 2024 in Vancouver.

Last week, at AAAI 2024 in Vancouver, Arthur presented recent joint work with Morgan Stanley at the Recommendations Ecosystem Workshop focused on Modeling, Optimization, and Incentive Design. 

Most discussions focused on rethinking online content ecosystems under new lenses: in the era of generative AI, through reimagining the power dynamics of these ecosystems, and through understanding the disparate effects of recommendation systems (recsys). Here are some of the key discussions from an exciting day:

GenAI in Content Creation

Professor Haifeng Xu discussed how the introduction of generative AI into content creation may affect the competitive landscape between human creators and GenAI creators. Generative AI offers an unprecedented, automated route for producing digital content, which raises concerns about competition between artificially generated content and authentic human content. Prof. Xu's research examines the existential question of whether GenAI will drive humans out of the ecosystem. Through a game-theoretic framework, he presents a fortunately positive answer: a kind of "symbiosis" can exist between GenAI content and authentic human content. In other words, authentic creators may sacrifice a little, but not too much. The presented work does have limitations: the framework assumes all content is viewed and favored equally, i.e., that user preferences will be similar for GenAI and authentic content.
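To build intuition for how a "symbiosis" result can fall out of a game-theoretic model, here is a toy sketch (not Prof. Xu's actual model; the Tullock-contest payoff and the cost values are illustrative assumptions). A human creator and a GenAI creator each choose how much content effort to supply; each earns a share of audience attention proportional to their effort, minus a per-unit cost, with GenAI assumed cheaper. Alternating best responses converge to an equilibrium in which the human's share shrinks but stays positive:

```python
def best_response(q_other, cost):
    """Best response in a two-player Tullock contest where
    utility_i = q_i / (q_i + q_other) - cost_i * q_i."""
    return max(0.0, (q_other / cost) ** 0.5 - q_other)

def equilibrium(c_human, c_genai, iters=200):
    """Approximate the Nash equilibrium by alternating best responses."""
    q_h, q_g = 0.1, 0.1  # initial effort guesses
    for _ in range(iters):
        q_h = best_response(q_g, c_human)
        q_g = best_response(q_h, c_genai)
    return q_h, q_g

# Illustrative assumption: GenAI produces content at half the human's cost.
q_h, q_g = equilibrium(c_human=1.0, c_genai=0.5)
human_share = q_h / (q_h + q_g)       # human's share of attention
human_utility = human_share - 1.0 * q_h
```

In this toy setting the human creator retains a third of the attention and positive utility, mirroring the qualitative "sacrifice a little, but not too much" conclusion.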

Democratizing Recommendation Ecosystems

Professor Robin Burke discussed paths toward democratizing recommendation ecosystems. Loosely, democratizing a technology means making it accessible to more people. Recommendation systems currently operate in a centralized model, with most of the power lying with the large company that operates the platform. Prof. Burke reimagines these underlying assumptions: what if we reconsidered how recsys are governed and treated creators as first-class citizens? What if we valued their needs as just as important as those of consumers and advertisers? What if we valued what creators inherently know: their own content, their audiences, their creative practice, and their career trajectories? This information is potentially useful to the recommender system but is currently ignored. To fully realize the benefits of such a system, a mechanism of community governance must be employed, and for community governance to succeed, the system must be simple, flexible, and transparent, and must obtain data consent.

Effects on Strategic Users

Professor Chara Podimata examined the disparate effects of recommendation systems on strategic users. Recommender systems operate as a feedback loop. Model developers often assume users are unaware of how this loop works, when in fact many users are aware of it and act according to their mental models of it. (Have you ever engaged in purposeful behavior to deliberately curate your feed in a social media app?) Specifically, Prof. Podimata presented the results of a survey on user consumption patterns on TikTok in which 60% of respondents took some type of action to curate their feed. The work also showed disparate recommendation outcomes between users with popular interests and users with niche interests.
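The feedback loop and strategic curation described above can be illustrated with a minimal simulation (a sketch under simplified assumptions, not the setup from the talk). A naive recommender shows topics in proportion to accumulated clicks; a "strategic" user withholds all clicks on an unwanted topic, steering the loop far more aggressively than a passive user who merely clicks less often:

```python
import random

def simulate_feed(click_prob, rounds=2000, seed=0):
    """Simulate a naive engagement-driven recommender feedback loop.

    click_prob: dict mapping topic -> probability the user clicks it when shown.
    The recommender samples topics in proportion to accumulated clicks
    (plus a uniform prior so every topic keeps some exposure).
    Returns each topic's share of the last 500 impressions.
    """
    rng = random.Random(seed)
    topics = list(click_prob)
    clicks = {t: 1.0 for t in topics}          # uniform prior
    shown_recently = {t: 0 for t in topics}
    for step in range(rounds):
        # Sample a topic with probability proportional to its click count.
        r = rng.random() * sum(clicks.values())
        for t in topics:
            r -= clicks[t]
            if r <= 0:
                break
        if rng.random() < click_prob[t]:
            clicks[t] += 1.0                   # engagement feeds back in
        if step >= rounds - 500:
            shown_recently[t] += 1
    return {t: shown_recently[t] / 500 for t in topics}

# A passive user sometimes clicks mainstream content anyway; a strategic
# user withholds every click on it to curate the feed.
passive   = simulate_feed({"niche": 0.6, "mainstream": 0.4})
strategic = simulate_feed({"niche": 0.6, "mainstream": 0.0})
```

Under these toy dynamics the strategic user ends up with a far more niche-heavy feed, which is exactly the kind of purposeful curation the survey respondents reported.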

This connects to the joint work that Arthur presented with Morgan Stanley on group-wise item fairness. As technological tools for presenting items to users, recommender systems are subject to many fairness considerations for users and items alike. We propose a model-based post-processing scheme for group-wise item fairness.
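To make the idea of post-processing for group-wise item fairness concrete, here is a generic greedy re-ranking sketch (an illustration of the general technique, not the algorithm from the presented paper). It takes a model's relevance scores as given and, as a separate post-processing step, guarantees each item group a minimum number of slots in the top-k before filling the remainder purely by score:

```python
from collections import Counter

def fair_topk(items, k, min_per_group):
    """Greedy fairness-aware re-ranker (illustrative sketch).

    items: list of (item_id, group, score) tuples from an upstream model.
    min_per_group: dict mapping group -> minimum slots in the top-k
                   (assumes the minimums sum to at most k).
    """
    by_score = sorted(items, key=lambda x: -x[2])
    chosen, counts = [], Counter()
    # First pass: satisfy each group's minimum with its best-scoring items.
    for g, need in min_per_group.items():
        for it in by_score:
            if counts[g] >= need:
                break
            if it[1] == g and it not in chosen:
                chosen.append(it)
                counts[g] += 1
    # Second pass: fill the remaining slots purely by score.
    for it in by_score:
        if len(chosen) >= k:
            break
        if it not in chosen:
            chosen.append(it)
            counts[it[1]] += 1
    return sorted(chosen, key=lambda x: -x[2])[:k]

# Hypothetical catalog: "indie" items score lower but are guaranteed exposure.
items = [("a", "major", 0.9), ("b", "major", 0.8), ("c", "major", 0.7),
         ("d", "indie", 0.5), ("e", "indie", 0.4), ("f", "indie", 0.3)]
ranking = fair_topk(items, k=4, min_per_group={"indie": 2})
```

Because the re-ranker only consumes scores, it can sit behind any recommendation model, which is the appeal of post-processing approaches in general.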

What will the future of recommender ecosystems look like? We at Arthur are excited to be a part of shaping it.